00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2025 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3290 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.017 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.017 The recommended git tool is: git 00:00:00.018 using credential 00000000-0000-0000-0000-000000000002 00:00:00.019 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.032 Fetching changes from the remote Git repository 00:00:00.034 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.051 Using shallow fetch with depth 1 00:00:00.051 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.051 > git --version # timeout=10 00:00:00.070 > git --version # 'git version 2.39.2' 00:00:00.070 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.087 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.087 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.148 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.161 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.173 Checking out Revision 456d80899d5187c68de113852b37bde1201fd33a (FETCH_HEAD) 00:00:02.173 > git config core.sparsecheckout # timeout=10 00:00:02.185 > git read-tree -mu HEAD # timeout=10 00:00:02.202 > git checkout -f 456d80899d5187c68de113852b37bde1201fd33a # timeout=5 00:00:02.237 Commit message: "jenkins/config: Drop WFP25 for maintenance" 00:00:02.237 > git rev-list --no-walk 456d80899d5187c68de113852b37bde1201fd33a # timeout=10 00:00:02.463 [Pipeline] Start of Pipeline 00:00:02.476 [Pipeline] library 00:00:02.477 Loading library shm_lib@master 00:00:02.477 Library shm_lib@master is cached. Copying from home. 00:00:02.491 [Pipeline] node 00:00:02.509 Running on VM-host-SM4 in /var/jenkins/workspace/ubuntu24-vg-autotest 00:00:02.510 [Pipeline] { 00:00:02.518 [Pipeline] catchError 00:00:02.519 [Pipeline] { 00:00:02.530 [Pipeline] wrap 00:00:02.541 [Pipeline] { 00:00:02.549 [Pipeline] stage 00:00:02.551 [Pipeline] { (Prologue) 00:00:02.567 [Pipeline] echo 00:00:02.568 Node: VM-host-SM4 00:00:02.572 [Pipeline] cleanWs 00:00:02.580 [WS-CLEANUP] Deleting project workspace... 00:00:02.580 [WS-CLEANUP] Deferred wipeout is used... 
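Note: the jbp checkout above deliberately avoids a full clone; a single ref is fetched at depth 1 and the pinned revision is checked out detached. A minimal sketch of the same sequence run by hand, using only the URL and commit that appear in the log (the proxy and credential handling Jenkins adds are omitted):

  git init jbp && cd jbp
  git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  # depth=1 keeps only the tip of refs/heads/master, matching the shallow fetch above
  git fetch --tags --force --progress --depth=1 origin refs/heads/master
  # detached checkout of the exact revision Jenkins resolved from FETCH_HEAD
  git checkout -f 456d80899d5187c68de113852b37bde1201fd33a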
00:00:02.586 [WS-CLEANUP] done 00:00:02.748 [Pipeline] setCustomBuildProperty 00:00:02.809 [Pipeline] httpRequest 00:00:02.832 [Pipeline] echo 00:00:02.833 Sorcerer 10.211.164.101 is alive 00:00:02.840 [Pipeline] httpRequest 00:00:02.843 HttpMethod: GET 00:00:02.844 URL: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:02.844 Sending request to url: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:02.845 Response Code: HTTP/1.1 200 OK 00:00:02.845 Success: Status code 200 is in the accepted range: 200,404 00:00:02.846 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:02.989 [Pipeline] sh 00:00:03.269 + tar --no-same-owner -xf jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:03.284 [Pipeline] httpRequest 00:00:03.295 [Pipeline] echo 00:00:03.297 Sorcerer 10.211.164.101 is alive 00:00:03.302 [Pipeline] httpRequest 00:00:03.306 HttpMethod: GET 00:00:03.307 URL: http://10.211.164.101/packages/spdk_b8378f94e02ef4dd21e7023626f6c3b47a36f5c1.tar.gz 00:00:03.307 Sending request to url: http://10.211.164.101/packages/spdk_b8378f94e02ef4dd21e7023626f6c3b47a36f5c1.tar.gz 00:00:03.308 Response Code: HTTP/1.1 200 OK 00:00:03.309 Success: Status code 200 is in the accepted range: 200,404 00:00:03.309 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/spdk_b8378f94e02ef4dd21e7023626f6c3b47a36f5c1.tar.gz 00:00:17.623 [Pipeline] sh 00:00:17.905 + tar --no-same-owner -xf spdk_b8378f94e02ef4dd21e7023626f6c3b47a36f5c1.tar.gz 00:00:20.451 [Pipeline] sh 00:00:20.731 + git -C spdk log --oneline -n5 00:00:20.732 b8378f94e scripts/pkgdep: Set yum's skip_if_unavailable=True under rocky8 00:00:20.732 c2a77f51e module/bdev/nvme: add detach-monitor poller 00:00:20.732 e14876e17 lib/nvme: add spdk_nvme_scan_attached() 00:00:20.732 1d6dfcbeb nvme_pci: ctrlr_scan_attached callback 00:00:20.732 ff6594986 nvme_transport: optional callback to scan attached 00:00:20.755 [Pipeline] withCredentials 00:00:20.767 > git --version # timeout=10 00:00:20.780 > git --version # 'git version 2.39.2' 00:00:20.798 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:20.801 [Pipeline] { 00:00:20.810 [Pipeline] retry 00:00:20.812 [Pipeline] { 00:00:20.830 [Pipeline] sh 00:00:21.114 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:21.386 [Pipeline] } 00:00:21.409 [Pipeline] // retry 00:00:21.415 [Pipeline] } 00:00:21.436 [Pipeline] // withCredentials 00:00:21.446 [Pipeline] httpRequest 00:00:21.478 [Pipeline] echo 00:00:21.480 Sorcerer 10.211.164.101 is alive 00:00:21.489 [Pipeline] httpRequest 00:00:21.494 HttpMethod: GET 00:00:21.494 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:21.495 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:21.502 Response Code: HTTP/1.1 200 OK 00:00:21.503 Success: Status code 200 is in the accepted range: 200,404 00:00:21.503 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:18.778 [Pipeline] sh 00:01:19.059 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:20.444 [Pipeline] sh 00:01:20.725 + git -C dpdk log --oneline -n5 00:01:20.725 caf0f5d395 version: 22.11.4 00:01:20.725 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:20.725 dc9c799c7d vhost: fix 
missing spinlock unlock 00:01:20.725 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:20.725 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:20.743 [Pipeline] writeFile 00:01:20.760 [Pipeline] sh 00:01:21.075 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:21.087 [Pipeline] sh 00:01:21.368 + cat autorun-spdk.conf 00:01:21.368 SPDK_TEST_UNITTEST=1 00:01:21.368 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.368 SPDK_TEST_NVME=1 00:01:21.368 SPDK_TEST_BLOCKDEV=1 00:01:21.368 SPDK_RUN_ASAN=1 00:01:21.368 SPDK_RUN_UBSAN=1 00:01:21.368 SPDK_TEST_RAID5=1 00:01:21.368 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:21.368 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:21.368 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.375 RUN_NIGHTLY=1 00:01:21.377 [Pipeline] } 00:01:21.395 [Pipeline] // stage 00:01:21.412 [Pipeline] stage 00:01:21.414 [Pipeline] { (Run VM) 00:01:21.431 [Pipeline] sh 00:01:21.713 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:21.713 + echo 'Start stage prepare_nvme.sh' 00:01:21.713 Start stage prepare_nvme.sh 00:01:21.713 + [[ -n 6 ]] 00:01:21.713 + disk_prefix=ex6 00:01:21.713 + [[ -n /var/jenkins/workspace/ubuntu24-vg-autotest ]] 00:01:21.713 + [[ -e /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf ]] 00:01:21.713 + source /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf 00:01:21.713 ++ SPDK_TEST_UNITTEST=1 00:01:21.713 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.713 ++ SPDK_TEST_NVME=1 00:01:21.713 ++ SPDK_TEST_BLOCKDEV=1 00:01:21.713 ++ SPDK_RUN_ASAN=1 00:01:21.713 ++ SPDK_RUN_UBSAN=1 00:01:21.713 ++ SPDK_TEST_RAID5=1 00:01:21.713 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:21.713 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:21.713 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.713 ++ RUN_NIGHTLY=1 00:01:21.713 + cd /var/jenkins/workspace/ubuntu24-vg-autotest 00:01:21.713 + nvme_files=() 00:01:21.713 + declare -A nvme_files 00:01:21.713 + backend_dir=/var/lib/libvirt/images/backends 00:01:21.713 + nvme_files['nvme.img']=5G 00:01:21.713 + nvme_files['nvme-cmb.img']=5G 00:01:21.713 + nvme_files['nvme-multi0.img']=4G 00:01:21.713 + nvme_files['nvme-multi1.img']=4G 00:01:21.713 + nvme_files['nvme-multi2.img']=4G 00:01:21.713 + nvme_files['nvme-openstack.img']=8G 00:01:21.713 + nvme_files['nvme-zns.img']=5G 00:01:21.713 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:21.713 + (( SPDK_TEST_FTL == 1 )) 00:01:21.713 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:21.713 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:21.713 + for nvme in "${!nvme_files[@]}" 00:01:21.713 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:21.713 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:21.713 + for nvme in "${!nvme_files[@]}" 00:01:21.713 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:21.972 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:21.972 + for nvme in "${!nvme_files[@]}" 00:01:21.972 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:21.972 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:21.972 + for nvme in "${!nvme_files[@]}" 00:01:21.972 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:21.972 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:21.972 + for nvme in "${!nvme_files[@]}" 00:01:21.973 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:22.231 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:22.231 + for nvme in "${!nvme_files[@]}" 00:01:22.231 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:22.231 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:22.231 + for nvme in "${!nvme_files[@]}" 00:01:22.231 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:22.797 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:23.056 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:23.056 + echo 'End stage prepare_nvme.sh' 00:01:23.056 End stage prepare_nvme.sh 00:01:23.067 [Pipeline] sh 00:01:23.347 + DISTRO=ubuntu2404 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:23.347 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -H -a -v -f ubuntu2404 00:01:23.347 00:01:23.347 DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant 00:01:23.347 SPDK_DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk 00:01:23.347 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu24-vg-autotest 00:01:23.347 HELP=0 00:01:23.347 DRY_RUN=0 00:01:23.347 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img, 00:01:23.347 NVME_DISKS_TYPE=nvme, 00:01:23.347 NVME_AUTO_CREATE=0 00:01:23.347 NVME_DISKS_NAMESPACES=, 00:01:23.347 NVME_CMB=, 00:01:23.347 NVME_PMR=, 00:01:23.347 NVME_ZNS=, 00:01:23.347 NVME_MS=, 00:01:23.347 NVME_FDP=, 00:01:23.347 SPDK_VAGRANT_DISTRO=ubuntu2404 00:01:23.347 SPDK_VAGRANT_VMCPU=10 00:01:23.347 SPDK_VAGRANT_VMRAM=12288 00:01:23.347 SPDK_VAGRANT_PROVIDER=libvirt 00:01:23.347 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:23.347 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:23.347 SPDK_OPENSTACK_NETWORK=0 
00:01:23.347 VAGRANT_PACKAGE_BOX=0 00:01:23.347 VAGRANTFILE=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:23.347 FORCE_DISTRO=true 00:01:23.347 VAGRANT_BOX_VERSION= 00:01:23.347 EXTRA_VAGRANTFILES= 00:01:23.347 NIC_MODEL=e1000 00:01:23.347 00:01:23.347 mkdir: created directory '/var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt' 00:01:23.347 /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt /var/jenkins/workspace/ubuntu24-vg-autotest 00:01:26.636 Bringing machine 'default' up with 'libvirt' provider... 00:01:27.205 ==> default: Creating image (snapshot of base box volume). 00:01:27.205 ==> default: Creating domain with the following settings... 00:01:27.205 ==> default: -- Name: ubuntu2404-24.04-1720510786-2314_default_1721746522_b91ee5fb0f834b5e9562 00:01:27.205 ==> default: -- Domain type: kvm 00:01:27.205 ==> default: -- Cpus: 10 00:01:27.205 ==> default: -- Feature: acpi 00:01:27.205 ==> default: -- Feature: apic 00:01:27.205 ==> default: -- Feature: pae 00:01:27.205 ==> default: -- Memory: 12288M 00:01:27.205 ==> default: -- Memory Backing: hugepages: 00:01:27.205 ==> default: -- Management MAC: 00:01:27.205 ==> default: -- Loader: 00:01:27.205 ==> default: -- Nvram: 00:01:27.205 ==> default: -- Base box: spdk/ubuntu2404 00:01:27.205 ==> default: -- Storage pool: default 00:01:27.205 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2404-24.04-1720510786-2314_default_1721746522_b91ee5fb0f834b5e9562.img (20G) 00:01:27.205 ==> default: -- Volume Cache: default 00:01:27.205 ==> default: -- Kernel: 00:01:27.205 ==> default: -- Initrd: 00:01:27.205 ==> default: -- Graphics Type: vnc 00:01:27.205 ==> default: -- Graphics Port: -1 00:01:27.205 ==> default: -- Graphics IP: 127.0.0.1 00:01:27.205 ==> default: -- Graphics Password: Not defined 00:01:27.205 ==> default: -- Video Type: cirrus 00:01:27.205 ==> default: -- Video VRAM: 9216 00:01:27.205 ==> default: -- Sound Type: 00:01:27.205 ==> default: -- Keymap: en-us 00:01:27.205 ==> default: -- TPM Path: 00:01:27.205 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:27.205 ==> default: -- Command line args: 00:01:27.205 ==> default: -> value=-device, 00:01:27.205 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:27.205 ==> default: -> value=-drive, 00:01:27.205 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:01:27.205 ==> default: -> value=-device, 00:01:27.205 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:27.205 ==> default: Creating shared folders metadata... 00:01:27.205 ==> default: Starting domain. 00:01:28.581 ==> default: Waiting for domain to get an IP address... 00:01:40.809 ==> default: Waiting for SSH to become available... 00:01:40.809 ==> default: Configuring and enabling network interfaces... 00:01:46.080 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:51.345 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:55.540 ==> default: Mounting SSHFS shared folder... 00:01:56.916 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output => /home/vagrant/spdk_repo/output 00:01:56.916 ==> default: Checking Mount.. 00:01:57.483 ==> default: Folder Successfully Mounted! 
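Note: the "-> value=" pairs in the domain definition above are the extra QEMU arguments vagrant-libvirt passes through for the emulated NVMe disk backed by ex6-nvme.img. Rearranged as a plain qemu-system-x86_64 invocation for readability (the -m/-smp values are copied from the VM settings above; the machine type, display, and boot disk that libvirt normally adds are left out of this sketch):

  qemu-system-x86_64 -m 12288 -smp 10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096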
00:01:57.483 ==> default: Running provisioner: file... 00:01:58.051 default: ~/.gitconfig => .gitconfig 00:01:58.310 00:01:58.310 SUCCESS! 00:01:58.310 00:01:58.310 cd to /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt and type "vagrant ssh" to use. 00:01:58.310 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:58.310 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt" to destroy all trace of vm. 00:01:58.310 00:01:58.318 [Pipeline] } 00:01:58.338 [Pipeline] // stage 00:01:58.347 [Pipeline] dir 00:01:58.348 Running in /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt 00:01:58.350 [Pipeline] { 00:01:58.364 [Pipeline] catchError 00:01:58.366 [Pipeline] { 00:01:58.381 [Pipeline] sh 00:01:58.662 + vagrant ssh-config --host vagrant 00:01:58.662 + sed -ne /^Host/,$p 00:01:58.662 + tee ssh_conf 00:02:02.002 Host vagrant 00:02:02.002 HostName 192.168.121.110 00:02:02.002 User vagrant 00:02:02.002 Port 22 00:02:02.002 UserKnownHostsFile /dev/null 00:02:02.002 StrictHostKeyChecking no 00:02:02.002 PasswordAuthentication no 00:02:02.002 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2404/24.04-1720510786-2314/libvirt/ubuntu2404 00:02:02.002 IdentitiesOnly yes 00:02:02.002 LogLevel FATAL 00:02:02.002 ForwardAgent yes 00:02:02.002 ForwardX11 yes 00:02:02.002 00:02:02.016 [Pipeline] withEnv 00:02:02.019 [Pipeline] { 00:02:02.033 [Pipeline] sh 00:02:02.315 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:02.315 source /etc/os-release 00:02:02.315 [[ -e /image.version ]] && img=$(< /image.version) 00:02:02.315 # Minimal, systemd-like check. 00:02:02.315 if [[ -e /.dockerenv ]]; then 00:02:02.315 # Clear garbage from the node's name: 00:02:02.315 # agt-er_autotest_547-896 -> autotest_547-896 00:02:02.315 # $HOSTNAME is the actual container id 00:02:02.315 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:02.315 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:02.315 # We can assume this is a mount from a host where container is running, 00:02:02.315 # so fetch its hostname to easily identify the target swarm worker. 
00:02:02.315 container="$(< /etc/hostname) ($agent)" 00:02:02.315 else 00:02:02.315 # Fallback 00:02:02.315 container=$agent 00:02:02.315 fi 00:02:02.315 fi 00:02:02.315 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:02.315 00:02:02.586 [Pipeline] } 00:02:02.613 [Pipeline] // withEnv 00:02:02.630 [Pipeline] setCustomBuildProperty 00:02:02.664 [Pipeline] stage 00:02:02.668 [Pipeline] { (Tests) 00:02:02.685 [Pipeline] sh 00:02:02.959 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:03.231 [Pipeline] sh 00:02:03.513 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:03.797 [Pipeline] timeout 00:02:03.797 Timeout set to expire in 1 hr 30 min 00:02:03.799 [Pipeline] { 00:02:03.815 [Pipeline] sh 00:02:04.094 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:04.662 HEAD is now at b8378f94e scripts/pkgdep: Set yum's skip_if_unavailable=True under rocky8 00:02:04.674 [Pipeline] sh 00:02:04.953 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:05.225 [Pipeline] sh 00:02:05.505 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:05.780 [Pipeline] sh 00:02:06.060 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu24-vg-autotest ./autoruner.sh spdk_repo 00:02:06.318 ++ readlink -f spdk_repo 00:02:06.318 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:06.318 + [[ -n /home/vagrant/spdk_repo ]] 00:02:06.318 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:06.318 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:06.318 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:06.318 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:06.318 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:06.318 + [[ ubuntu24-vg-autotest == pkgdep-* ]] 00:02:06.318 + cd /home/vagrant/spdk_repo 00:02:06.318 + source /etc/os-release 00:02:06.318 ++ PRETTY_NAME='Ubuntu 24.04 LTS' 00:02:06.318 ++ NAME=Ubuntu 00:02:06.318 ++ VERSION_ID=24.04 00:02:06.318 ++ VERSION='24.04 LTS (Noble Numbat)' 00:02:06.318 ++ VERSION_CODENAME=noble 00:02:06.318 ++ ID=ubuntu 00:02:06.318 ++ ID_LIKE=debian 00:02:06.318 ++ HOME_URL=https://www.ubuntu.com/ 00:02:06.318 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:02:06.318 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:02:06.318 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:02:06.318 ++ UBUNTU_CODENAME=noble 00:02:06.318 ++ LOGO=ubuntu-logo 00:02:06.318 + uname -a 00:02:06.318 Linux ubuntu2404-cloud-1720510786-2314 6.8.0-36-generic #36-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 10 10:49:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:02:06.318 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:06.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:02:06.577 Hugepages 00:02:06.577 node hugesize free / total 00:02:06.577 node0 1048576kB 0 / 0 00:02:06.577 node0 2048kB 0 / 0 00:02:06.577 00:02:06.577 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:06.577 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:06.836 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:06.836 + rm -f /tmp/spdk-ld-path 00:02:06.836 + source autorun-spdk.conf 00:02:06.836 ++ SPDK_TEST_UNITTEST=1 00:02:06.836 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.836 ++ SPDK_TEST_NVME=1 00:02:06.836 ++ SPDK_TEST_BLOCKDEV=1 00:02:06.836 ++ SPDK_RUN_ASAN=1 00:02:06.836 ++ SPDK_RUN_UBSAN=1 00:02:06.836 ++ SPDK_TEST_RAID5=1 00:02:06.836 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:06.836 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:06.836 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:06.836 ++ RUN_NIGHTLY=1 00:02:06.836 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:06.836 + [[ -n '' ]] 00:02:06.836 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:06.836 + for M in /var/spdk/build-*-manifest.txt 00:02:06.836 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:06.836 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.836 + for M in /var/spdk/build-*-manifest.txt 00:02:06.836 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:06.836 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.836 ++ uname 00:02:06.836 + [[ Linux == \L\i\n\u\x ]] 00:02:06.836 + sudo dmesg -T 00:02:06.836 + sudo dmesg --clear 00:02:06.836 + dmesg_pid=2549 00:02:06.836 + sudo dmesg -Tw 00:02:06.836 + [[ Ubuntu == FreeBSD ]] 00:02:06.836 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:06.836 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:06.836 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:06.836 + [[ -x /usr/src/fio-static/fio ]] 00:02:06.836 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:06.836 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:06.836 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:06.836 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:06.836 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:06.836 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:06.836 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:06.836 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:06.836 Test configuration: 00:02:06.836 SPDK_TEST_UNITTEST=1 00:02:06.836 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.836 SPDK_TEST_NVME=1 00:02:06.836 SPDK_TEST_BLOCKDEV=1 00:02:06.836 SPDK_RUN_ASAN=1 00:02:06.836 SPDK_RUN_UBSAN=1 00:02:06.836 SPDK_TEST_RAID5=1 00:02:06.836 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:06.836 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:06.836 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:06.836 RUN_NIGHTLY=1 14:56:02 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:06.836 14:56:02 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:06.836 14:56:02 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:06.836 14:56:02 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:06.836 14:56:02 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:06.836 14:56:02 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:06.836 14:56:02 -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:06.836 14:56:02 -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:06.836 14:56:02 -- paths/export.sh@6 -- $ export PATH 00:02:06.836 14:56:02 -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:06.836 14:56:02 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:07.095 14:56:02 -- common/autobuild_common.sh@447 
-- $ date +%s 00:02:07.095 14:56:02 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721746562.XXXXXX 00:02:07.095 14:56:02 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721746562.T2Q2E5 00:02:07.095 14:56:02 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:07.095 14:56:02 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:02:07.095 14:56:02 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:07.095 14:56:02 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:07.095 14:56:02 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:07.095 14:56:02 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:07.095 14:56:02 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:07.095 14:56:02 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:07.095 14:56:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.096 14:56:02 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:07.096 14:56:02 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:07.096 14:56:02 -- pm/common@17 -- $ local monitor 00:02:07.096 14:56:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.096 14:56:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.096 14:56:02 -- pm/common@21 -- $ date +%s 00:02:07.096 14:56:02 -- pm/common@25 -- $ sleep 1 00:02:07.096 14:56:02 -- pm/common@21 -- $ date +%s 00:02:07.096 14:56:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721746562 00:02:07.096 14:56:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721746562 00:02:07.096 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721746562_collect-vmstat.pm.log 00:02:07.096 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721746562_collect-cpu-load.pm.log 00:02:08.031 14:56:03 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:08.031 14:56:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:08.031 14:56:03 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:08.031 14:56:03 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:08.031 14:56:03 -- spdk/autobuild.sh@16 -- $ date -u 00:02:08.031 Tue Jul 23 14:56:03 UTC 2024 00:02:08.031 14:56:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:08.031 v24.09-pre-302-gb8378f94e 00:02:08.031 14:56:03 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:08.031 14:56:03 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:08.032 14:56:03 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:08.032 14:56:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:08.032 14:56:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.032 ************************************ 
00:02:08.032 START TEST asan 00:02:08.032 ************************************ 00:02:08.032 using asan 00:02:08.032 14:56:03 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:02:08.032 00:02:08.032 real 0m0.000s 00:02:08.032 user 0m0.000s 00:02:08.032 sys 0m0.000s 00:02:08.032 14:56:03 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:08.032 ************************************ 00:02:08.032 END TEST asan 00:02:08.032 ************************************ 00:02:08.032 14:56:03 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:08.032 14:56:03 -- common/autotest_common.sh@1142 -- $ return 0 00:02:08.032 14:56:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:08.032 14:56:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:08.032 14:56:03 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:08.032 14:56:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:08.032 14:56:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.032 ************************************ 00:02:08.032 START TEST ubsan 00:02:08.032 ************************************ 00:02:08.032 using ubsan 00:02:08.032 14:56:03 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:08.032 00:02:08.032 real 0m0.000s 00:02:08.032 user 0m0.000s 00:02:08.032 sys 0m0.000s 00:02:08.032 14:56:03 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:08.032 ************************************ 00:02:08.032 14:56:03 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:08.032 END TEST ubsan 00:02:08.032 ************************************ 00:02:08.032 14:56:03 -- common/autotest_common.sh@1142 -- $ return 0 00:02:08.032 14:56:03 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:08.032 14:56:03 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:08.032 14:56:03 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:08.032 14:56:03 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:02:08.032 14:56:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:08.032 14:56:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.032 ************************************ 00:02:08.032 START TEST build_native_dpdk 00:02:08.032 ************************************ 00:02:08.032 14:56:03 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:02:08.032 14:56:03 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:08.032 14:56:03 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:08.032 14:56:03 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:08.032 14:56:03 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:08.032 14:56:03 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:08.032 14:56:03 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:08.032 14:56:03 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:08.032 14:56:03 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:08.032 14:56:03 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:08.032 14:56:03 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:08.032 14:56:03 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 
00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:08.290 caf0f5d395 version: 22.11.4 00:02:08.290 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:08.290 dc9c799c7d vhost: fix missing spinlock unlock 00:02:08.290 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:08.290 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:08.290 14:56:03 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:08.290 14:56:03 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:08.290 14:56:03 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:08.290 14:56:03 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:08.290 14:56:03 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:08.290 14:56:03 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:08.290 14:56:03 
build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:08.290 14:56:03 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:08.290 14:56:03 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:08.290 14:56:03 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:08.290 14:56:03 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:08.290 14:56:03 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:08.290 14:56:03 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:08.290 14:56:03 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:08.291 14:56:03 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:08.291 patching file config/rte_config.h 00:02:08.291 Hunk #1 succeeded at 60 (offset 1 line). 00:02:08.291 14:56:03 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:08.291 14:56:03 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:02:08.291 14:56:03 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:08.291 patching file lib/pcapng/rte_pcapng.c 00:02:08.291 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:08.291 14:56:03 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:08.291 14:56:03 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:02:08.291 14:56:03 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:08.291 14:56:03 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:08.291 14:56:03 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:13.567 The Meson build system 00:02:13.567 Version: 1.4.1 00:02:13.567 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:13.567 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:13.567 Build type: native build 00:02:13.567 Program cat found: YES (/usr/bin/cat) 00:02:13.567 Project name: DPDK 00:02:13.567 Project version: 22.11.4 00:02:13.567 C compiler for the host machine: gcc (gcc 13.2.0 "gcc (Ubuntu 13.2.0-23ubuntu4) 13.2.0") 00:02:13.567 C linker for the host machine: gcc ld.bfd 2.42 00:02:13.567 Host machine cpu family: x86_64 00:02:13.567 Host machine cpu: x86_64 00:02:13.567 Message: ## Building in Developer Mode ## 00:02:13.567 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:13.567 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:13.567 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:13.567 Program objdump found: YES (/usr/bin/objdump) 00:02:13.567 Program python3 found: YES (/var/spdk/dependencies/pip/bin/python3) 00:02:13.567 Program cat found: YES (/usr/bin/cat) 00:02:13.567 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
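Note: the xtrace runs earlier in this stage (scripts/common.sh cmp_versions, reached through lt) split the dotted versions into arrays and compare them field by field: 22.11.4 does not sort before 21.11.0, so that branch returns 1, but it does sort before 24.07.0, so the rte_pcapng.c patch is applied. A condensed sketch of that comparison logic, not the actual SPDK helper:

  lt() {  # lt A B -> succeeds when dotted version A sorts before version B
    local IFS=. i
    local -a a=($1) b=($2)      # IFS=. splits "22.11.4" into 22 11 4
    for i in 0 1 2; do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
  }
  lt 22.11.4 21.11.0 || echo 'not older than 21.11.0, skip rte_config.h-only branch'
  lt 22.11.4 24.07.0 && echo 'older than 24.07.0, patch lib/pcapng/rte_pcapng.c'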
00:02:13.567 Checking for size of "void *" : 8 00:02:13.567 Checking for size of "void *" : 8 (cached) 00:02:13.567 Library m found: YES 00:02:13.567 Library numa found: YES 00:02:13.567 Has header "numaif.h" : YES 00:02:13.567 Library fdt found: NO 00:02:13.567 Library execinfo found: NO 00:02:13.567 Has header "execinfo.h" : YES 00:02:13.567 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.1 00:02:13.567 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:13.567 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:13.567 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:13.567 Run-time dependency openssl found: YES 3.0.13 00:02:13.567 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:13.567 Library pcap found: NO 00:02:13.567 Compiler for C supports arguments -Wcast-qual: YES 00:02:13.567 Compiler for C supports arguments -Wdeprecated: YES 00:02:13.567 Compiler for C supports arguments -Wformat: YES 00:02:13.567 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:13.567 Compiler for C supports arguments -Wformat-security: YES 00:02:13.567 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:13.567 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:13.567 Compiler for C supports arguments -Wnested-externs: YES 00:02:13.567 Compiler for C supports arguments -Wold-style-definition: YES 00:02:13.567 Compiler for C supports arguments -Wpointer-arith: YES 00:02:13.567 Compiler for C supports arguments -Wsign-compare: YES 00:02:13.567 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:13.567 Compiler for C supports arguments -Wundef: YES 00:02:13.567 Compiler for C supports arguments -Wwrite-strings: YES 00:02:13.567 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:13.567 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:13.567 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:13.567 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:13.567 Compiler for C supports arguments -mavx512f: YES 00:02:13.567 Checking if "AVX512 checking" compiles: YES 00:02:13.567 Fetching value of define "__SSE4_2__" : 1 00:02:13.567 Fetching value of define "__AES__" : 1 00:02:13.567 Fetching value of define "__AVX__" : 1 00:02:13.567 Fetching value of define "__AVX2__" : 1 00:02:13.567 Fetching value of define "__AVX512BW__" : 1 00:02:13.567 Fetching value of define "__AVX512CD__" : 1 00:02:13.567 Fetching value of define "__AVX512DQ__" : 1 00:02:13.567 Fetching value of define "__AVX512F__" : 1 00:02:13.567 Fetching value of define "__AVX512VL__" : 1 00:02:13.567 Fetching value of define "__PCLMUL__" : 1 00:02:13.567 Fetching value of define "__RDRND__" : 1 00:02:13.567 Fetching value of define "__RDSEED__" : 1 00:02:13.567 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:13.567 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:13.567 Message: lib/kvargs: Defining dependency "kvargs" 00:02:13.567 Message: lib/telemetry: Defining dependency "telemetry" 00:02:13.567 Checking for function "getentropy" : YES 00:02:13.567 Message: lib/eal: Defining dependency "eal" 00:02:13.567 Message: lib/ring: Defining dependency "ring" 00:02:13.567 Message: lib/rcu: Defining dependency "rcu" 00:02:13.567 Message: lib/mempool: Defining dependency "mempool" 00:02:13.567 Message: lib/mbuf: Defining dependency "mbuf" 00:02:13.567 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:13.567 Fetching value of define 
"__AVX512F__" : 1 (cached) 00:02:13.567 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:13.567 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:13.567 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:13.567 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:13.567 Compiler for C supports arguments -mpclmul: YES 00:02:13.567 Compiler for C supports arguments -maes: YES 00:02:13.567 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:13.567 Compiler for C supports arguments -mavx512bw: YES 00:02:13.567 Compiler for C supports arguments -mavx512dq: YES 00:02:13.567 Compiler for C supports arguments -mavx512vl: YES 00:02:13.567 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:13.567 Compiler for C supports arguments -mavx2: YES 00:02:13.567 Compiler for C supports arguments -mavx: YES 00:02:13.567 Message: lib/net: Defining dependency "net" 00:02:13.567 Message: lib/meter: Defining dependency "meter" 00:02:13.567 Message: lib/ethdev: Defining dependency "ethdev" 00:02:13.567 Message: lib/pci: Defining dependency "pci" 00:02:13.567 Message: lib/cmdline: Defining dependency "cmdline" 00:02:13.567 Message: lib/metrics: Defining dependency "metrics" 00:02:13.567 Message: lib/hash: Defining dependency "hash" 00:02:13.567 Message: lib/timer: Defining dependency "timer" 00:02:13.567 Fetching value of define "__AVX2__" : 1 (cached) 00:02:13.567 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:13.567 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:13.567 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:13.567 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:13.567 Message: lib/acl: Defining dependency "acl" 00:02:13.567 Message: lib/bbdev: Defining dependency "bbdev" 00:02:13.567 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:13.567 Run-time dependency libelf found: YES 0.190 00:02:13.567 lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled 00:02:13.567 Message: lib/bpf: Defining dependency "bpf" 00:02:13.567 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:13.567 Message: lib/compressdev: Defining dependency "compressdev" 00:02:13.567 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:13.567 Message: lib/distributor: Defining dependency "distributor" 00:02:13.567 Message: lib/efd: Defining dependency "efd" 00:02:13.567 Message: lib/eventdev: Defining dependency "eventdev" 00:02:13.567 Message: lib/gpudev: Defining dependency "gpudev" 00:02:13.567 Message: lib/gro: Defining dependency "gro" 00:02:13.567 Message: lib/gso: Defining dependency "gso" 00:02:13.567 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:13.568 Message: lib/jobstats: Defining dependency "jobstats" 00:02:13.568 Message: lib/latencystats: Defining dependency "latencystats" 00:02:13.568 Message: lib/lpm: Defining dependency "lpm" 00:02:13.568 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:13.568 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:13.568 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:13.568 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:13.568 Message: lib/member: Defining dependency "member" 00:02:13.568 Message: lib/pcapng: Defining dependency "pcapng" 00:02:13.568 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:13.568 Message: lib/power: Defining dependency "power" 00:02:13.568 Message: lib/rawdev: Defining dependency "rawdev" 00:02:13.568 
Message: lib/regexdev: Defining dependency "regexdev" 00:02:13.568 Message: lib/dmadev: Defining dependency "dmadev" 00:02:13.568 Message: lib/rib: Defining dependency "rib" 00:02:13.568 Message: lib/reorder: Defining dependency "reorder" 00:02:13.568 Message: lib/sched: Defining dependency "sched" 00:02:13.568 Message: lib/security: Defining dependency "security" 00:02:13.568 Message: lib/stack: Defining dependency "stack" 00:02:13.568 Has header "linux/userfaultfd.h" : YES 00:02:13.568 Message: lib/vhost: Defining dependency "vhost" 00:02:13.568 Message: lib/ipsec: Defining dependency "ipsec" 00:02:13.568 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:13.568 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:13.568 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:13.568 Message: lib/fib: Defining dependency "fib" 00:02:13.568 Message: lib/port: Defining dependency "port" 00:02:13.568 Message: lib/pdump: Defining dependency "pdump" 00:02:13.568 Message: lib/table: Defining dependency "table" 00:02:13.568 Message: lib/pipeline: Defining dependency "pipeline" 00:02:13.568 Message: lib/graph: Defining dependency "graph" 00:02:13.568 Message: lib/node: Defining dependency "node" 00:02:13.568 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:13.568 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:13.568 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:13.568 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:13.568 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:13.568 Compiler for C supports arguments -Wno-unused-value: YES 00:02:13.568 Compiler for C supports arguments -Wno-format: YES 00:02:13.568 Compiler for C supports arguments -Wno-format-security: YES 00:02:13.568 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:14.961 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:14.961 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:14.961 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:14.961 Fetching value of define "__AVX2__" : 1 (cached) 00:02:14.961 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.961 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.961 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.961 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:14.961 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:14.961 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:14.961 Program doxygen found: YES (/usr/bin/doxygen) 00:02:14.961 Configuring doxy-api.conf using configuration 00:02:14.961 Program sphinx-build found: NO 00:02:14.961 Configuring rte_build_config.h using configuration 00:02:14.961 Message: 00:02:14.961 ================= 00:02:14.961 Applications Enabled 00:02:14.961 ================= 00:02:14.961 00:02:14.961 apps: 00:02:14.961 pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, test-eventdev, 00:02:14.961 test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, test-security-perf, 00:02:14.961 00:02:14.961 00:02:14.961 Message: 00:02:14.961 ================= 00:02:14.961 Libraries Enabled 00:02:14.961 ================= 00:02:14.961 00:02:14.961 libs: 00:02:14.961 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:14.961 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:14.961 bbdev, bitratestats, bpf, 
cfgfile, compressdev, cryptodev, distributor, efd, 00:02:14.961 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:14.961 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:14.961 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:14.961 table, pipeline, graph, node, 00:02:14.961 00:02:14.961 Message: 00:02:14.961 =============== 00:02:14.961 Drivers Enabled 00:02:14.961 =============== 00:02:14.961 00:02:14.961 common: 00:02:14.961 00:02:14.961 bus: 00:02:14.961 pci, vdev, 00:02:14.961 mempool: 00:02:14.961 ring, 00:02:14.961 dma: 00:02:14.961 00:02:14.961 net: 00:02:14.961 i40e, 00:02:14.961 raw: 00:02:14.961 00:02:14.961 crypto: 00:02:14.961 00:02:14.961 compress: 00:02:14.961 00:02:14.961 regex: 00:02:14.961 00:02:14.961 vdpa: 00:02:14.961 00:02:14.961 event: 00:02:14.961 00:02:14.961 baseband: 00:02:14.961 00:02:14.961 gpu: 00:02:14.961 00:02:14.961 00:02:14.961 Message: 00:02:14.961 ================= 00:02:14.961 Content Skipped 00:02:14.961 ================= 00:02:14.961 00:02:14.961 apps: 00:02:14.961 dumpcap: missing dependency, "libpcap" 00:02:14.961 00:02:14.961 libs: 00:02:14.961 kni: explicitly disabled via build config (deprecated lib) 00:02:14.961 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:14.961 00:02:14.961 drivers: 00:02:14.961 common/cpt: not in enabled drivers build config 00:02:14.961 common/dpaax: not in enabled drivers build config 00:02:14.961 common/iavf: not in enabled drivers build config 00:02:14.961 common/idpf: not in enabled drivers build config 00:02:14.961 common/mvep: not in enabled drivers build config 00:02:14.961 common/octeontx: not in enabled drivers build config 00:02:14.961 bus/auxiliary: not in enabled drivers build config 00:02:14.961 bus/dpaa: not in enabled drivers build config 00:02:14.961 bus/fslmc: not in enabled drivers build config 00:02:14.961 bus/ifpga: not in enabled drivers build config 00:02:14.961 bus/vmbus: not in enabled drivers build config 00:02:14.961 common/cnxk: not in enabled drivers build config 00:02:14.961 common/mlx5: not in enabled drivers build config 00:02:14.961 common/qat: not in enabled drivers build config 00:02:14.961 common/sfc_efx: not in enabled drivers build config 00:02:14.961 mempool/bucket: not in enabled drivers build config 00:02:14.961 mempool/cnxk: not in enabled drivers build config 00:02:14.961 mempool/dpaa: not in enabled drivers build config 00:02:14.961 mempool/dpaa2: not in enabled drivers build config 00:02:14.961 mempool/octeontx: not in enabled drivers build config 00:02:14.961 mempool/stack: not in enabled drivers build config 00:02:14.961 dma/cnxk: not in enabled drivers build config 00:02:14.961 dma/dpaa: not in enabled drivers build config 00:02:14.961 dma/dpaa2: not in enabled drivers build config 00:02:14.961 dma/hisilicon: not in enabled drivers build config 00:02:14.961 dma/idxd: not in enabled drivers build config 00:02:14.961 dma/ioat: not in enabled drivers build config 00:02:14.961 dma/skeleton: not in enabled drivers build config 00:02:14.961 net/af_packet: not in enabled drivers build config 00:02:14.961 net/af_xdp: not in enabled drivers build config 00:02:14.961 net/ark: not in enabled drivers build config 00:02:14.961 net/atlantic: not in enabled drivers build config 00:02:14.961 net/avp: not in enabled drivers build config 00:02:14.961 net/axgbe: not in enabled drivers build config 00:02:14.961 net/bnx2x: not in enabled drivers build config 00:02:14.961 net/bnxt: not in enabled drivers build 
config 00:02:14.961 net/bonding: not in enabled drivers build config 00:02:14.961 net/cnxk: not in enabled drivers build config 00:02:14.961 net/cxgbe: not in enabled drivers build config 00:02:14.961 net/dpaa: not in enabled drivers build config 00:02:14.961 net/dpaa2: not in enabled drivers build config 00:02:14.961 net/e1000: not in enabled drivers build config 00:02:14.961 net/ena: not in enabled drivers build config 00:02:14.961 net/enetc: not in enabled drivers build config 00:02:14.961 net/enetfec: not in enabled drivers build config 00:02:14.961 net/enic: not in enabled drivers build config 00:02:14.961 net/failsafe: not in enabled drivers build config 00:02:14.961 net/fm10k: not in enabled drivers build config 00:02:14.961 net/gve: not in enabled drivers build config 00:02:14.961 net/hinic: not in enabled drivers build config 00:02:14.961 net/hns3: not in enabled drivers build config 00:02:14.961 net/iavf: not in enabled drivers build config 00:02:14.961 net/ice: not in enabled drivers build config 00:02:14.961 net/idpf: not in enabled drivers build config 00:02:14.961 net/igc: not in enabled drivers build config 00:02:14.961 net/ionic: not in enabled drivers build config 00:02:14.961 net/ipn3ke: not in enabled drivers build config 00:02:14.961 net/ixgbe: not in enabled drivers build config 00:02:14.961 net/kni: not in enabled drivers build config 00:02:14.961 net/liquidio: not in enabled drivers build config 00:02:14.961 net/mana: not in enabled drivers build config 00:02:14.961 net/memif: not in enabled drivers build config 00:02:14.961 net/mlx4: not in enabled drivers build config 00:02:14.961 net/mlx5: not in enabled drivers build config 00:02:14.961 net/mvneta: not in enabled drivers build config 00:02:14.961 net/mvpp2: not in enabled drivers build config 00:02:14.961 net/netvsc: not in enabled drivers build config 00:02:14.961 net/nfb: not in enabled drivers build config 00:02:14.961 net/nfp: not in enabled drivers build config 00:02:14.961 net/ngbe: not in enabled drivers build config 00:02:14.961 net/null: not in enabled drivers build config 00:02:14.961 net/octeontx: not in enabled drivers build config 00:02:14.961 net/octeon_ep: not in enabled drivers build config 00:02:14.961 net/pcap: not in enabled drivers build config 00:02:14.961 net/pfe: not in enabled drivers build config 00:02:14.961 net/qede: not in enabled drivers build config 00:02:14.961 net/ring: not in enabled drivers build config 00:02:14.961 net/sfc: not in enabled drivers build config 00:02:14.961 net/softnic: not in enabled drivers build config 00:02:14.961 net/tap: not in enabled drivers build config 00:02:14.961 net/thunderx: not in enabled drivers build config 00:02:14.961 net/txgbe: not in enabled drivers build config 00:02:14.961 net/vdev_netvsc: not in enabled drivers build config 00:02:14.961 net/vhost: not in enabled drivers build config 00:02:14.961 net/virtio: not in enabled drivers build config 00:02:14.961 net/vmxnet3: not in enabled drivers build config 00:02:14.961 raw/cnxk_bphy: not in enabled drivers build config 00:02:14.961 raw/cnxk_gpio: not in enabled drivers build config 00:02:14.961 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:14.961 raw/ifpga: not in enabled drivers build config 00:02:14.961 raw/ntb: not in enabled drivers build config 00:02:14.962 raw/skeleton: not in enabled drivers build config 00:02:14.962 crypto/armv8: not in enabled drivers build config 00:02:14.962 crypto/bcmfs: not in enabled drivers build config 00:02:14.962 crypto/caam_jr: not in enabled 
drivers build config 00:02:14.962 crypto/ccp: not in enabled drivers build config 00:02:14.962 crypto/cnxk: not in enabled drivers build config 00:02:14.962 crypto/dpaa_sec: not in enabled drivers build config 00:02:14.962 crypto/dpaa2_sec: not in enabled drivers build config 00:02:14.962 crypto/ipsec_mb: not in enabled drivers build config 00:02:14.962 crypto/mlx5: not in enabled drivers build config 00:02:14.962 crypto/mvsam: not in enabled drivers build config 00:02:14.962 crypto/nitrox: not in enabled drivers build config 00:02:14.962 crypto/null: not in enabled drivers build config 00:02:14.962 crypto/octeontx: not in enabled drivers build config 00:02:14.962 crypto/openssl: not in enabled drivers build config 00:02:14.962 crypto/scheduler: not in enabled drivers build config 00:02:14.962 crypto/uadk: not in enabled drivers build config 00:02:14.962 crypto/virtio: not in enabled drivers build config 00:02:14.962 compress/isal: not in enabled drivers build config 00:02:14.962 compress/mlx5: not in enabled drivers build config 00:02:14.962 compress/octeontx: not in enabled drivers build config 00:02:14.962 compress/zlib: not in enabled drivers build config 00:02:14.962 regex/mlx5: not in enabled drivers build config 00:02:14.962 regex/cn9k: not in enabled drivers build config 00:02:14.962 vdpa/ifc: not in enabled drivers build config 00:02:14.962 vdpa/mlx5: not in enabled drivers build config 00:02:14.962 vdpa/sfc: not in enabled drivers build config 00:02:14.962 event/cnxk: not in enabled drivers build config 00:02:14.962 event/dlb2: not in enabled drivers build config 00:02:14.962 event/dpaa: not in enabled drivers build config 00:02:14.962 event/dpaa2: not in enabled drivers build config 00:02:14.962 event/dsw: not in enabled drivers build config 00:02:14.962 event/opdl: not in enabled drivers build config 00:02:14.962 event/skeleton: not in enabled drivers build config 00:02:14.962 event/sw: not in enabled drivers build config 00:02:14.962 event/octeontx: not in enabled drivers build config 00:02:14.962 baseband/acc: not in enabled drivers build config 00:02:14.962 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:14.962 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:14.962 baseband/la12xx: not in enabled drivers build config 00:02:14.962 baseband/null: not in enabled drivers build config 00:02:14.962 baseband/turbo_sw: not in enabled drivers build config 00:02:14.962 gpu/cuda: not in enabled drivers build config 00:02:14.962 00:02:14.962 00:02:14.962 Build targets in project: 310 00:02:14.962 00:02:14.962 DPDK 22.11.4 00:02:14.962 00:02:14.962 User defined options 00:02:14.962 libdir : lib 00:02:14.962 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:14.962 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:14.962 c_link_args : 00:02:14.962 enable_docs : false 00:02:14.962 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:14.962 enable_kmods : false 00:02:14.962 machine : native 00:02:14.962 tests : false 00:02:14.962 00:02:14.962 Found ninja-1.11.1.git.kitware.jobserver-1 at /var/spdk/dependencies/pip/bin/ninja 00:02:14.962 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
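[Editor's note] The deprecation warning above indicates the configure step was invoked in the legacy "meson [options] builddir" form rather than "meson setup". The exact command used by the autobuild script is not shown in this excerpt; as a hedged sketch only, the "User defined options" summary above corresponds roughly to a setup invocation along the following lines (option names such as enable_drivers, enable_docs, enable_kmods and tests are standard DPDK meson options; prefix, libdir, c_args, machine and the build directory are taken from the summary and from the ninja command that follows):

    # illustrative only -- reconstructs the logged configuration using the
    # non-deprecated "meson setup" form; not the literal command from the script
    meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # compile step as logged below
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10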
00:02:14.962 14:56:10 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:14.962 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:15.232 [1/737] Generating lib/rte_kvargs_mingw with a custom command 00:02:15.232 [2/737] Generating lib/rte_kvargs_def with a custom command 00:02:15.232 [3/737] Generating lib/rte_telemetry_mingw with a custom command 00:02:15.232 [4/737] Generating lib/rte_telemetry_def with a custom command 00:02:15.232 [5/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:15.232 [6/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:15.232 [7/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:15.232 [8/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:15.232 [9/737] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:15.232 [10/737] Linking static target lib/librte_kvargs.a 00:02:15.232 [11/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.232 [12/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:15.232 [13/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:15.232 [14/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:15.232 [15/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:15.490 [16/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:15.490 [17/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:15.490 [18/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:15.490 [19/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:15.490 [20/737] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.490 [21/737] Linking target lib/librte_kvargs.so.23.0 00:02:15.490 [22/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:15.490 [23/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:15.490 [24/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:15.490 [25/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:15.747 [26/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:15.747 [27/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:15.747 [28/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.747 [29/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:15.747 [30/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:15.748 [31/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:15.748 [32/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:15.748 [33/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:15.748 [34/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.748 [35/737] Linking static target lib/librte_telemetry.a 00:02:15.748 [36/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:15.748 [37/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:16.005 [38/737] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:16.005 [39/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:16.005 [40/737] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:16.005 [41/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:16.005 [42/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:16.264 [43/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:16.264 [44/737] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.264 [45/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:16.264 [46/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:16.264 [47/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:16.264 [48/737] Linking target lib/librte_telemetry.so.23.0 00:02:16.264 [49/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:16.264 [50/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:16.264 [51/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.264 [52/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:16.264 [53/737] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:16.264 [54/737] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:16.264 [55/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.523 [56/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:16.523 [57/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:16.523 [58/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.523 [59/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:16.523 [60/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:16.523 [61/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:16.523 [62/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.523 [63/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:16.523 [64/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:16.523 [65/737] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:16.523 [66/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:16.523 [67/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.523 [68/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:16.523 [69/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.523 [70/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.781 [71/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:16.781 [72/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.781 [73/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:16.781 [74/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.781 [75/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:16.781 [76/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:16.781 [77/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.781 [78/737] Generating 
lib/rte_eal_def with a custom command 00:02:16.781 [79/737] Generating lib/rte_eal_mingw with a custom command 00:02:16.781 [80/737] Generating lib/rte_ring_mingw with a custom command 00:02:16.781 [81/737] Generating lib/rte_ring_def with a custom command 00:02:16.781 [82/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.781 [83/737] Generating lib/rte_rcu_def with a custom command 00:02:16.781 [84/737] Generating lib/rte_rcu_mingw with a custom command 00:02:16.781 [85/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:16.781 [86/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:17.039 [87/737] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:17.039 [88/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:17.039 [89/737] Linking static target lib/librte_ring.a 00:02:17.039 [90/737] Generating lib/rte_mempool_def with a custom command 00:02:17.039 [91/737] Generating lib/rte_mempool_mingw with a custom command 00:02:17.040 [92/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:17.040 [93/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:17.040 [94/737] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.298 [95/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:17.298 [96/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.298 [97/737] Generating lib/rte_mbuf_def with a custom command 00:02:17.298 [98/737] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:17.298 [99/737] Linking static target lib/librte_eal.a 00:02:17.298 [100/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:17.298 [101/737] Generating lib/rte_mbuf_mingw with a custom command 00:02:17.556 [102/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:17.556 [103/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:17.556 [104/737] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.556 [105/737] Linking static target lib/librte_rcu.a 00:02:17.814 [106/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.814 [107/737] Linking static target lib/librte_mempool.a 00:02:17.814 [108/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:17.815 [109/737] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:17.815 [110/737] Generating lib/rte_net_def with a custom command 00:02:17.815 [111/737] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:17.815 [112/737] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:17.815 [113/737] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:17.815 [114/737] Generating lib/rte_net_mingw with a custom command 00:02:17.815 [115/737] Generating lib/rte_meter_def with a custom command 00:02:17.815 [116/737] Generating lib/rte_meter_mingw with a custom command 00:02:18.075 [117/737] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.075 [118/737] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.075 [119/737] Linking static target lib/librte_meter.a 00:02:18.075 [120/737] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:18.075 [121/737] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.075 [122/737] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:18.075 [123/737] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.075 [124/737] Linking static target lib/librte_net.a 00:02:18.337 [125/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.337 [126/737] Linking static target lib/librte_mbuf.a 00:02:18.337 [127/737] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.337 [128/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.595 [129/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.595 [130/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.595 [131/737] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.595 [132/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:18.595 [133/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.853 [134/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.853 [135/737] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.112 [136/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:19.112 [137/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:19.112 [138/737] Generating lib/rte_ethdev_def with a custom command 00:02:19.112 [139/737] Generating lib/rte_ethdev_mingw with a custom command 00:02:19.112 [140/737] Generating lib/rte_pci_def with a custom command 00:02:19.112 [141/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:19.112 [142/737] Generating lib/rte_pci_mingw with a custom command 00:02:19.112 [143/737] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:19.112 [144/737] Linking static target lib/librte_pci.a 00:02:19.112 [145/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:19.371 [146/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:19.371 [147/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.371 [148/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.371 [149/737] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.371 [150/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.371 [151/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.630 [152/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.630 [153/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.630 [154/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.630 [155/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.630 [156/737] Generating lib/rte_cmdline_def with a custom command 00:02:19.630 [157/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:19.630 [158/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:19.630 [159/737] Generating lib/rte_cmdline_mingw with a custom command 00:02:19.630 [160/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.630 [161/737] Generating lib/rte_metrics_def with a custom command 00:02:19.630 [162/737] Generating lib/rte_metrics_mingw with a custom command 00:02:19.630 [163/737] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:19.888 [164/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.888 [165/737] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:19.888 [166/737] Generating lib/rte_hash_def with a custom command 00:02:19.888 [167/737] Generating lib/rte_hash_mingw with a custom command 00:02:19.888 [168/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:19.888 [169/737] Linking static target lib/librte_cmdline.a 00:02:19.888 [170/737] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:19.888 [171/737] Generating lib/rte_timer_def with a custom command 00:02:19.888 [172/737] Generating lib/rte_timer_mingw with a custom command 00:02:19.888 [173/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.147 [174/737] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:20.147 [175/737] Linking static target lib/librte_metrics.a 00:02:20.147 [176/737] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.147 [177/737] Linking static target lib/librte_timer.a 00:02:20.406 [178/737] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.665 [179/737] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.665 [180/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:20.665 [181/737] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:20.665 [182/737] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.924 [183/737] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.924 [184/737] Generating lib/rte_acl_def with a custom command 00:02:20.924 [185/737] Generating lib/rte_acl_mingw with a custom command 00:02:20.924 [186/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.924 [187/737] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:20.924 [188/737] Linking static target lib/librte_ethdev.a 00:02:20.924 [189/737] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:20.924 [190/737] Generating lib/rte_bbdev_def with a custom command 00:02:20.924 [191/737] Generating lib/rte_bbdev_mingw with a custom command 00:02:21.184 [192/737] Generating lib/rte_bitratestats_def with a custom command 00:02:21.184 [193/737] Generating lib/rte_bitratestats_mingw with a custom command 00:02:21.443 [194/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:21.443 [195/737] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:21.443 [196/737] Linking static target lib/librte_bitratestats.a 00:02:21.443 [197/737] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:21.702 [198/737] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:21.702 [199/737] Linking static target lib/librte_bbdev.a 00:02:21.702 [200/737] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.983 [201/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:22.259 [202/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:22.259 [203/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:22.259 [204/737] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.259 [205/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 
00:02:22.259 [206/737] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.259 [207/737] Linking static target lib/librte_hash.a 00:02:22.517 [208/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:22.517 [209/737] Generating lib/rte_bpf_def with a custom command 00:02:22.775 [210/737] Generating lib/rte_bpf_mingw with a custom command 00:02:22.775 [211/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:22.775 [212/737] Generating lib/rte_cfgfile_def with a custom command 00:02:22.775 [213/737] Generating lib/rte_cfgfile_mingw with a custom command 00:02:22.775 [214/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:23.034 [215/737] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:23.034 [216/737] Linking static target lib/librte_cfgfile.a 00:02:23.034 [217/737] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.034 [218/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:23.292 [219/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:23.293 [220/737] Generating lib/rte_compressdev_def with a custom command 00:02:23.293 [221/737] Generating lib/rte_compressdev_mingw with a custom command 00:02:23.293 [222/737] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.293 [223/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:23.293 [224/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:23.293 [225/737] Linking static target lib/librte_bpf.a 00:02:23.293 [226/737] Generating lib/rte_cryptodev_def with a custom command 00:02:23.293 [227/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:23.293 [228/737] Generating lib/rte_cryptodev_mingw with a custom command 00:02:23.293 [229/737] Linking static target lib/librte_acl.a 00:02:23.551 [230/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:23.809 [231/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:23.809 [232/737] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.809 [233/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:23.809 [234/737] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.809 [235/737] Linking static target lib/librte_compressdev.a 00:02:23.809 [236/737] Generating lib/rte_distributor_def with a custom command 00:02:23.809 [237/737] Generating lib/rte_distributor_mingw with a custom command 00:02:23.809 [238/737] Generating lib/rte_efd_def with a custom command 00:02:23.809 [239/737] Generating lib/rte_efd_mingw with a custom command 00:02:23.809 [240/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.069 [241/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:24.069 [242/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:24.327 [243/737] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:24.327 [244/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:24.327 [245/737] Linking static target lib/librte_distributor.a 00:02:24.327 [246/737] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:24.585 [247/737] Generating 
lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.585 [248/737] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.843 [249/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:24.843 [250/737] Generating lib/rte_eventdev_def with a custom command 00:02:24.843 [251/737] Generating lib/rte_eventdev_mingw with a custom command 00:02:25.407 [252/737] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:25.407 [253/737] Linking static target lib/librte_efd.a 00:02:25.407 [254/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:25.407 [255/737] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.407 [256/737] Generating lib/rte_gpudev_def with a custom command 00:02:25.407 [257/737] Generating lib/rte_gpudev_mingw with a custom command 00:02:25.407 [258/737] Linking target lib/librte_eal.so.23.0 00:02:25.407 [259/737] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.665 [260/737] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:25.665 [261/737] Linking target lib/librte_ring.so.23.0 00:02:25.665 [262/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:25.665 [263/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.665 [264/737] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:25.665 [265/737] Linking target lib/librte_meter.so.23.0 00:02:25.665 [266/737] Linking target lib/librte_pci.so.23.0 00:02:25.665 [267/737] Linking target lib/librte_timer.so.23.0 00:02:25.665 [268/737] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:25.665 [269/737] Linking target lib/librte_rcu.so.23.0 00:02:25.922 [270/737] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:25.922 [271/737] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:25.922 [272/737] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:25.922 [273/737] Linking target lib/librte_mempool.so.23.0 00:02:25.922 [274/737] Linking target lib/librte_acl.so.23.0 00:02:25.922 [275/737] Linking target lib/librte_cfgfile.so.23.0 00:02:25.922 [276/737] Linking static target lib/librte_cryptodev.a 00:02:25.922 [277/737] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:25.922 [278/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:25.922 [279/737] Linking static target lib/librte_gpudev.a 00:02:25.922 [280/737] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:25.922 [281/737] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:25.922 [282/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:25.922 [283/737] Linking target lib/librte_mbuf.so.23.0 00:02:26.178 [284/737] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:26.178 [285/737] Generating lib/rte_gro_def with a custom command 00:02:26.178 [286/737] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:26.178 [287/737] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.178 [288/737] Generating lib/rte_gro_mingw with a custom command 00:02:26.179 
[289/737] Linking target lib/librte_net.so.23.0 00:02:26.179 [290/737] Linking target lib/librte_bbdev.so.23.0 00:02:26.179 [291/737] Linking target lib/librte_compressdev.so.23.0 00:02:26.179 [292/737] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:26.179 [293/737] Linking target lib/librte_distributor.so.23.0 00:02:26.437 [294/737] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:26.437 [295/737] Linking target lib/librte_ethdev.so.23.0 00:02:26.437 [296/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:26.437 [297/737] Linking target lib/librte_cmdline.so.23.0 00:02:26.437 [298/737] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:26.695 [299/737] Linking target lib/librte_hash.so.23.0 00:02:26.695 [300/737] Linking target lib/librte_metrics.so.23.0 00:02:26.695 [301/737] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:26.695 [302/737] Linking target lib/librte_bpf.so.23.0 00:02:26.695 [303/737] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:26.695 [304/737] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:26.695 [305/737] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:26.695 [306/737] Linking static target lib/librte_eventdev.a 00:02:26.695 [307/737] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:26.695 [308/737] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:26.695 [309/737] Linking static target lib/librte_gro.a 00:02:26.695 [310/737] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:26.695 [311/737] Linking target lib/librte_bitratestats.so.23.0 00:02:26.695 [312/737] Linking target lib/librte_efd.so.23.0 00:02:26.695 [313/737] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:26.695 [314/737] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.695 [315/737] Generating lib/rte_gso_def with a custom command 00:02:26.952 [316/737] Generating lib/rte_gso_mingw with a custom command 00:02:26.952 [317/737] Linking target lib/librte_gpudev.so.23.0 00:02:26.952 [318/737] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.952 [319/737] Linking target lib/librte_gro.so.23.0 00:02:27.209 [320/737] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:27.209 [321/737] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:27.209 [322/737] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:27.209 [323/737] Generating lib/rte_ip_frag_def with a custom command 00:02:27.467 [324/737] Generating lib/rte_ip_frag_mingw with a custom command 00:02:27.467 [325/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:27.467 [326/737] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:27.467 [327/737] Linking static target lib/librte_gso.a 00:02:27.467 [328/737] Generating lib/rte_jobstats_def with a custom command 00:02:27.467 [329/737] Generating lib/rte_jobstats_mingw with a custom command 00:02:27.467 [330/737] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:27.467 [331/737] Linking static target lib/librte_jobstats.a 00:02:27.467 [332/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:27.467 [333/737] Generating lib/rte_latencystats_def with a custom command 
00:02:27.724 [334/737] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.724 [335/737] Generating lib/rte_latencystats_mingw with a custom command 00:02:27.724 [336/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:27.724 [337/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:27.724 [338/737] Linking target lib/librte_gso.so.23.0 00:02:27.724 [339/737] Generating lib/rte_lpm_mingw with a custom command 00:02:27.724 [340/737] Generating lib/rte_lpm_def with a custom command 00:02:27.984 [341/737] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.984 [342/737] Linking target lib/librte_jobstats.so.23.0 00:02:27.984 [343/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:27.984 [344/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:27.984 [345/737] Linking static target lib/librte_ip_frag.a 00:02:28.260 [346/737] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:28.260 [347/737] Linking static target lib/librte_latencystats.a 00:02:28.260 [348/737] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:28.260 [349/737] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:28.260 [350/737] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:28.260 [351/737] Generating lib/rte_member_def with a custom command 00:02:28.260 [352/737] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.260 [353/737] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.260 [354/737] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:28.260 [355/737] Generating lib/rte_member_mingw with a custom command 00:02:28.517 [356/737] Generating lib/rte_pcapng_def with a custom command 00:02:28.517 [357/737] Linking target lib/librte_ip_frag.so.23.0 00:02:28.517 [358/737] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.517 [359/737] Linking target lib/librte_cryptodev.so.23.0 00:02:28.517 [360/737] Generating lib/rte_pcapng_mingw with a custom command 00:02:28.517 [361/737] Linking target lib/librte_latencystats.so.23.0 00:02:28.517 [362/737] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:28.517 [363/737] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:28.517 [364/737] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:28.774 [365/737] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:28.774 [366/737] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:28.774 [367/737] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:28.774 [368/737] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:28.774 [369/737] Linking static target lib/librte_lpm.a 00:02:29.031 [370/737] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.031 [371/737] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:29.031 [372/737] Linking target lib/librte_eventdev.so.23.0 00:02:29.031 [373/737] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:29.031 [374/737] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:29.031 [375/737] Generating lib/rte_power_def with a custom command 00:02:29.031 [376/737] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:29.031 [377/737] Generating lib/rte_power_mingw with a custom command 00:02:29.289 [378/737] Generating lib/rte_rawdev_def with a custom command 00:02:29.289 [379/737] Generating lib/rte_rawdev_mingw with a custom command 00:02:29.289 [380/737] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:29.289 [381/737] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.289 [382/737] Generating lib/rte_regexdev_def with a custom command 00:02:29.289 [383/737] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:29.289 [384/737] Linking static target lib/librte_pcapng.a 00:02:29.289 [385/737] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:29.289 [386/737] Generating lib/rte_regexdev_mingw with a custom command 00:02:29.289 [387/737] Linking target lib/librte_lpm.so.23.0 00:02:29.289 [388/737] Generating lib/rte_dmadev_def with a custom command 00:02:29.289 [389/737] Generating lib/rte_dmadev_mingw with a custom command 00:02:29.289 [390/737] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:29.547 [391/737] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:29.547 [392/737] Generating lib/rte_rib_def with a custom command 00:02:29.547 [393/737] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:29.547 [394/737] Generating lib/rte_rib_mingw with a custom command 00:02:29.547 [395/737] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.547 [396/737] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:29.547 [397/737] Generating lib/rte_reorder_def with a custom command 00:02:29.547 [398/737] Linking static target lib/librte_rawdev.a 00:02:29.547 [399/737] Generating lib/rte_reorder_mingw with a custom command 00:02:29.547 [400/737] Linking target lib/librte_pcapng.so.23.0 00:02:29.806 [401/737] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:29.806 [402/737] Linking static target lib/librte_dmadev.a 00:02:29.806 [403/737] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:29.806 [404/737] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:29.806 [405/737] Linking static target lib/librte_member.a 00:02:29.806 [406/737] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:29.806 [407/737] Linking static target lib/librte_power.a 00:02:29.806 [408/737] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:29.806 [409/737] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:29.806 [410/737] Linking static target lib/librte_regexdev.a 00:02:30.064 [411/737] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:30.064 [412/737] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:30.064 [413/737] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.064 [414/737] Generating lib/rte_sched_def with a custom command 00:02:30.064 [415/737] Linking target lib/librte_rawdev.so.23.0 00:02:30.064 [416/737] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:30.064 [417/737] Generating lib/rte_sched_mingw with a custom 
command 00:02:30.064 [418/737] Generating lib/rte_security_def with a custom command 00:02:30.064 [419/737] Generating lib/rte_security_mingw with a custom command 00:02:30.065 [420/737] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.065 [421/737] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:30.065 [422/737] Linking static target lib/librte_reorder.a 00:02:30.323 [423/737] Linking target lib/librte_member.so.23.0 00:02:30.323 [424/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:30.323 [425/737] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.323 [426/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:30.323 [427/737] Generating lib/rte_stack_def with a custom command 00:02:30.323 [428/737] Linking target lib/librte_dmadev.so.23.0 00:02:30.323 [429/737] Generating lib/rte_stack_mingw with a custom command 00:02:30.323 [430/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:30.323 [431/737] Linking static target lib/librte_stack.a 00:02:30.323 [432/737] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:30.323 [433/737] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:30.582 [434/737] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.582 [435/737] Linking static target lib/librte_rib.a 00:02:30.582 [436/737] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:30.582 [437/737] Linking target lib/librte_reorder.so.23.0 00:02:30.582 [438/737] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.582 [439/737] Linking target lib/librte_stack.so.23.0 00:02:30.582 [440/737] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.841 [441/737] Linking target lib/librte_regexdev.so.23.0 00:02:30.841 [442/737] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:30.841 [443/737] Linking static target lib/librte_security.a 00:02:30.841 [444/737] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.841 [445/737] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.841 [446/737] Linking target lib/librte_power.so.23.0 00:02:30.841 [447/737] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:31.098 [448/737] Linking target lib/librte_rib.so.23.0 00:02:31.098 [449/737] Generating lib/rte_vhost_def with a custom command 00:02:31.098 [450/737] Generating lib/rte_vhost_mingw with a custom command 00:02:31.098 [451/737] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:31.098 [452/737] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:31.356 [453/737] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.356 [454/737] Linking target lib/librte_security.so.23.0 00:02:31.356 [455/737] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:31.356 [456/737] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:31.356 [457/737] Linking static target lib/librte_sched.a 00:02:31.356 [458/737] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:31.615 [459/737] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:31.875 [460/737] Generating lib/sched.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:31.875 [461/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:31.875 [462/737] Generating lib/rte_ipsec_def with a custom command 00:02:31.875 [463/737] Linking target lib/librte_sched.so.23.0 00:02:31.875 [464/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:31.875 [465/737] Generating lib/rte_ipsec_mingw with a custom command 00:02:32.133 [466/737] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:32.133 [467/737] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:32.133 [468/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:32.133 [469/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:32.391 [470/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:32.391 [471/737] Generating lib/rte_fib_def with a custom command 00:02:32.391 [472/737] Generating lib/rte_fib_mingw with a custom command 00:02:32.391 [473/737] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:32.650 [474/737] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:32.908 [475/737] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:32.908 [476/737] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:32.908 [477/737] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:32.908 [478/737] Linking static target lib/librte_ipsec.a 00:02:33.166 [479/737] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:33.166 [480/737] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:33.166 [481/737] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:33.166 [482/737] Linking static target lib/librte_fib.a 00:02:33.425 [483/737] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:33.425 [484/737] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.425 [485/737] Linking target lib/librte_ipsec.so.23.0 00:02:33.425 [486/737] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:33.425 [487/737] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.425 [488/737] Linking target lib/librte_fib.so.23.0 00:02:33.692 [489/737] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:33.692 [490/737] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:33.950 [491/737] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:33.950 [492/737] Generating lib/rte_port_def with a custom command 00:02:33.950 [493/737] Generating lib/rte_port_mingw with a custom command 00:02:34.209 [494/737] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:34.209 [495/737] Generating lib/rte_pdump_def with a custom command 00:02:34.209 [496/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:34.209 [497/737] Generating lib/rte_pdump_mingw with a custom command 00:02:34.209 [498/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:34.209 [499/737] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:34.467 [500/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:34.467 [501/737] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:34.467 [502/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:34.726 [503/737] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:34.726 [504/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:34.726 [505/737] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:34.726 [506/737] Linking static target lib/librte_port.a 00:02:34.993 [507/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:34.993 [508/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:34.993 [509/737] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:34.993 [510/737] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:34.993 [511/737] Linking static target lib/librte_pdump.a 00:02:34.993 [512/737] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:35.255 [513/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:35.513 [514/737] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.513 [515/737] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.513 [516/737] Linking target lib/librte_pdump.so.23.0 00:02:35.513 [517/737] Linking target lib/librte_port.so.23.0 00:02:35.513 [518/737] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:35.772 [519/737] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:35.772 [520/737] Generating lib/rte_table_def with a custom command 00:02:35.772 [521/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:35.772 [522/737] Generating lib/rte_table_mingw with a custom command 00:02:35.772 [523/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:36.030 [524/737] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:36.030 [525/737] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:36.030 [526/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:36.287 [527/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:36.287 [528/737] Generating lib/rte_pipeline_def with a custom command 00:02:36.287 [529/737] Generating lib/rte_pipeline_mingw with a custom command 00:02:36.288 [530/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:36.288 [531/737] Linking static target lib/librte_table.a 00:02:36.288 [532/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:36.546 [533/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:36.803 [534/737] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:36.803 [535/737] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:37.060 [536/737] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.060 [537/737] Linking target lib/librte_table.so.23.0 00:02:37.060 [538/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:37.060 [539/737] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:37.060 [540/737] Generating lib/rte_graph_def with a custom command 00:02:37.060 [541/737] Generating lib/rte_graph_mingw with a custom command 00:02:37.060 [542/737] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:37.318 [543/737] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:37.318 [544/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 
00:02:37.575 [545/737] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:37.575 [546/737] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:37.575 [547/737] Linking static target lib/librte_graph.a 00:02:37.575 [548/737] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:37.833 [549/737] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:37.833 [550/737] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:37.833 [551/737] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:38.118 [552/737] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:38.118 [553/737] Generating lib/rte_node_def with a custom command 00:02:38.118 [554/737] Generating lib/rte_node_mingw with a custom command 00:02:38.440 [555/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:38.440 [556/737] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:38.440 [557/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:38.440 [558/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:38.440 [559/737] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.440 [560/737] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:38.698 [561/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:38.698 [562/737] Linking target lib/librte_graph.so.23.0 00:02:38.698 [563/737] Generating drivers/rte_bus_pci_def with a custom command 00:02:38.698 [564/737] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:38.698 [565/737] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:38.698 [566/737] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:38.698 [567/737] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:38.698 [568/737] Generating drivers/rte_bus_vdev_def with a custom command 00:02:38.698 [569/737] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:38.698 [570/737] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:38.698 [571/737] Generating drivers/rte_mempool_ring_def with a custom command 00:02:38.698 [572/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:38.698 [573/737] Linking static target lib/librte_node.a 00:02:38.698 [574/737] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:38.956 [575/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:38.956 [576/737] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:38.956 [577/737] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:38.956 [578/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:38.956 [579/737] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:38.956 [580/737] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.214 [581/737] Linking target lib/librte_node.so.23.0 00:02:39.214 [582/737] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:39.214 [583/737] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:39.214 [584/737] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.214 [585/737] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:39.214 [586/737] Linking 
static target drivers/librte_bus_pci.a 00:02:39.214 [587/737] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.214 [588/737] Linking static target drivers/librte_bus_vdev.a 00:02:39.472 [589/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:39.472 [590/737] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.472 [591/737] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:39.730 [592/737] Linking target drivers/librte_bus_vdev.so.23.0 00:02:39.730 [593/737] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.730 [594/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:39.730 [595/737] Linking target drivers/librte_bus_pci.so.23.0 00:02:39.730 [596/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:39.730 [597/737] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:39.988 [598/737] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:39.988 [599/737] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:39.988 [600/737] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:39.988 [601/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:39.988 [602/737] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:40.246 [603/737] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:40.246 [604/737] Linking static target drivers/librte_mempool_ring.a 00:02:40.247 [605/737] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:40.247 [606/737] Linking target drivers/librte_mempool_ring.so.23.0 00:02:40.504 [607/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:40.762 [608/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:41.329 [609/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:41.329 [610/737] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:41.329 [611/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:41.587 [612/737] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:41.587 [613/737] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:41.846 [614/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:41.846 [615/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:42.104 [616/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:42.362 [617/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:42.362 [618/737] Generating drivers/rte_net_i40e_def with a custom command 00:02:42.362 [619/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:42.362 [620/737] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:43.296 [621/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:43.553 [622/737] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:43.553 [623/737] Compiling C object 
app/dpdk-pdump.p/pdump_main.c.o 00:02:43.811 [624/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:43.811 [625/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:43.811 [626/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:43.811 [627/737] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:43.811 [628/737] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:43.811 [629/737] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:44.069 [630/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:44.327 [631/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:44.585 [632/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:44.843 [633/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:44.843 [634/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:45.101 [635/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:45.101 [636/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:45.101 [637/737] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:45.101 [638/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:45.101 [639/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:45.359 [640/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:45.359 [641/737] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:45.359 [642/737] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:45.616 [643/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:45.616 [644/737] Linking static target drivers/librte_net_i40e.a 00:02:45.617 [645/737] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:45.617 [646/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:45.874 [647/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:45.874 [648/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:46.132 [649/737] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.132 [650/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:46.390 [651/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:46.390 [652/737] Linking target drivers/librte_net_i40e.so.23.0 00:02:46.390 [653/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:46.390 [654/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:46.390 [655/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:46.390 [656/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:46.648 [657/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:46.648 [658/737] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:46.906 [659/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:46.906 [660/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:46.906 [661/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:47.164 [662/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:47.164 [663/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:47.423 [664/737] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:47.423 [665/737] Linking static target lib/librte_vhost.a 00:02:47.423 [666/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:47.990 [667/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:47.990 [668/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:48.247 [669/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:48.247 [670/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:48.505 [671/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:48.505 [672/737] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:48.505 [673/737] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.505 [674/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:48.762 [675/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:48.762 [676/737] Linking target lib/librte_vhost.so.23.0 00:02:48.762 [677/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:49.020 [678/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:49.020 [679/737] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:49.020 [680/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:49.020 [681/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:49.314 [682/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:49.314 [683/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:49.314 [684/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:49.571 [685/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:49.571 [686/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:49.571 [687/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:49.571 [688/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:49.829 [689/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:49.829 [690/737] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:50.087 [691/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:50.087 [692/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:50.345 [693/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:50.604 [694/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:50.604 [695/737] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:50.863 [696/737] 
Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:50.863 [697/737] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:51.122 [698/737] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:51.381 [699/737] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:51.381 [700/737] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:51.381 [701/737] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:51.640 [702/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:51.640 [703/737] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:51.898 [704/737] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:51.898 [705/737] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:52.157 [706/737] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:52.415 [707/737] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:52.415 [708/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:52.674 [709/737] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:52.674 [710/737] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:52.932 [711/737] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:52.932 [712/737] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:52.932 [713/737] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:53.213 [714/737] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:53.213 [715/737] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:53.471 [716/737] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:53.730 [717/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:53.730 [718/737] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:53.730 [719/737] Linking static target lib/librte_pipeline.a 00:02:53.990 [720/737] Linking target app/dpdk-test-fib 00:02:53.990 [721/737] Linking target app/dpdk-proc-info 00:02:53.990 [722/737] Linking target app/dpdk-pdump 00:02:53.990 [723/737] Linking target app/dpdk-test-crypto-perf 00:02:53.990 [724/737] Linking target app/dpdk-test-eventdev 00:02:53.990 [725/737] Linking target app/dpdk-test-acl 00:02:54.248 [726/737] Linking target app/dpdk-test-cmdline 00:02:54.248 [727/737] Linking target app/dpdk-test-compress-perf 00:02:54.248 [728/737] Linking target app/dpdk-test-bbdev 00:02:54.507 [729/737] Linking target app/dpdk-test-regex 00:02:54.507 [730/737] Linking target app/dpdk-test-flow-perf 00:02:54.507 [731/737] Linking target app/dpdk-test-gpudev 00:02:54.507 [732/737] Linking target app/dpdk-test-pipeline 00:02:54.507 [733/737] Linking target app/dpdk-test-sad 00:02:54.507 [734/737] Linking target app/dpdk-testpmd 00:02:54.507 [735/737] Linking target app/dpdk-test-security-perf 00:02:58.697 [736/737] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.697 [737/737] Linking target lib/librte_pipeline.so.23.0 00:02:58.697 14:56:53 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:58.697 14:56:53 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:58.697 14:56:53 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:58.697 ninja: Entering directory 
`/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:58.697 [0/1] Installing files. 00:02:58.697 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:58.697 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:58.697 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:58.697 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:58.697 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:58.697 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:58.697 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.697 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.697 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.697 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 
00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.698 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.698 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:58.699 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.699 
Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.699 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 
00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:58.700 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:58.701 
Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.702 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.702 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.702 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.963 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.963 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.963 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.963 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.963 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.963 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_bbdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 
Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:58.964 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:58.964 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:58.964 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.964 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 
00:02:58.964 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:58.964 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.964 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 
Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.965 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.966 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:58.967 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:58.967 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:58.967 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:58.967 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:58.967 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:58.967 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:58.967 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:58.967 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:58.967 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:58.967 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:58.967 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:58.967 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:58.967 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:58.967 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:58.967 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:58.967 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:58.967 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:58.967 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:58.967 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:58.967 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:58.967 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:58.967 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:58.967 
Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:58.967 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:58.967 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:58.967 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:58.967 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:58.967 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:58.967 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:58.967 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:58.967 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:58.967 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:58.967 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:58.967 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:58.967 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:58.967 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:58.967 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:58.967 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:58.967 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:58.967 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:58.968 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:58.968 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:58.968 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:58.968 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:58.968 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:58.968 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:58.968 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:58.968 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:58.968 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:58.968 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:58.968 
Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:58.968 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:58.968 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:58.968 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:58.968 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:59.227 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:59.227 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:59.227 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:59.227 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:59.227 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:59.227 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:59.227 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:59.227 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:59.227 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:59.227 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:59.227 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:59.227 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:59.227 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:59.227 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:59.227 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:59.227 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:59.227 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:59.227 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:59.227 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:59.227 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:59.227 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:59.227 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:59.227 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:59.227 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:59.227 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:59.227 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:59.227 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:59.227 Installing symlink pointing to librte_power.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:59.227 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:59.227 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:59.227 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:59.227 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:59.227 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:59.227 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:59.227 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:59.227 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:59.227 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:59.227 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:59.227 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:59.227 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:59.227 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:59.227 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:59.227 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:59.227 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:59.227 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:59.227 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:59.227 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:59.227 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:59.227 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:59.227 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:59.227 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:59.227 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:59.227 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:59.227 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:59.227 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:59.227 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:59.227 
Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:59.227 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:59.227 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:59.227 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:59.227 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:59.227 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:59.227 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:59.227 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:59.227 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:59.227 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:59.228 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:59.228 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:59.228 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:59.228 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:59.228 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:59.228 14:56:54 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:02:59.228 14:56:54 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:59.228 00:02:59.228 real 0m51.191s 00:02:59.228 user 5m24.279s 00:02:59.228 sys 1m8.619s 00:02:59.228 14:56:54 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:59.228 14:56:54 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:59.228 ************************************ 00:02:59.228 END TEST build_native_dpdk 00:02:59.228 ************************************ 00:02:59.228 14:56:54 -- common/autotest_common.sh@1142 -- $ return 0 00:02:59.228 14:56:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:59.228 14:56:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:59.228 14:56:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:59.228 14:56:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:59.228 14:56:54 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:59.228 14:56:54 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:59.228 14:56:54 -- common/autobuild_common.sh@423 -- $ run_test unittest_build _unittest_build 00:02:59.228 14:56:54 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:02:59.228 14:56:54 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:59.228 14:56:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.228 ************************************ 00:02:59.228 START TEST unittest_build 
00:02:59.228 ************************************ 00:02:59.228 14:56:54 unittest_build -- common/autotest_common.sh@1123 -- $ _unittest_build 00:02:59.228 14:56:54 unittest_build -- common/autobuild_common.sh@414 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared 00:02:59.228 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:59.488 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.488 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:59.488 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:59.747 Using 'verbs' RDMA provider 00:03:15.655 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:30.574 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:30.574 Creating mk/config.mk...done. 00:03:30.574 Creating mk/cc.flags.mk...done. 00:03:30.574 Type 'make' to build. 00:03:30.574 14:57:24 unittest_build -- common/autobuild_common.sh@415 -- $ make -j10 00:03:30.574 make[1]: Nothing to be done for 'all'. 00:03:48.653 CC lib/ut_mock/mock.o 00:03:48.653 CC lib/log/log.o 00:03:48.653 CC lib/log/log_deprecated.o 00:03:48.653 CC lib/log/log_flags.o 00:03:48.653 CC lib/ut/ut.o 00:03:48.653 LIB libspdk_log.a 00:03:48.653 LIB libspdk_ut_mock.a 00:03:48.653 LIB libspdk_ut.a 00:03:48.653 CC lib/ioat/ioat.o 00:03:48.653 CC lib/util/base64.o 00:03:48.653 CC lib/util/bit_array.o 00:03:48.653 CC lib/util/cpuset.o 00:03:48.653 CC lib/dma/dma.o 00:03:48.653 CC lib/util/crc16.o 00:03:48.653 CC lib/util/crc32.o 00:03:48.653 CC lib/util/crc32c.o 00:03:48.653 CXX lib/trace_parser/trace.o 00:03:48.653 CC lib/vfio_user/host/vfio_user_pci.o 00:03:48.653 CC lib/util/crc32_ieee.o 00:03:48.653 CC lib/util/crc64.o 00:03:48.653 CC lib/util/dif.o 00:03:48.653 CC lib/util/fd.o 00:03:48.653 LIB libspdk_dma.a 00:03:48.653 CC lib/util/fd_group.o 00:03:48.653 CC lib/util/file.o 00:03:48.653 CC lib/util/hexlify.o 00:03:48.653 CC lib/util/iov.o 00:03:48.653 CC lib/util/math.o 00:03:48.653 LIB libspdk_ioat.a 00:03:48.653 CC lib/util/net.o 00:03:48.653 CC lib/vfio_user/host/vfio_user.o 00:03:48.653 CC lib/util/pipe.o 00:03:48.653 CC lib/util/strerror_tls.o 00:03:48.653 CC lib/util/string.o 00:03:48.653 CC lib/util/uuid.o 00:03:48.653 CC lib/util/xor.o 00:03:48.653 CC lib/util/zipf.o 00:03:48.653 LIB libspdk_vfio_user.a 00:03:48.653 LIB libspdk_util.a 00:03:48.653 CC lib/conf/conf.o 00:03:48.653 CC lib/rdma_provider/common.o 00:03:48.653 CC lib/idxd/idxd.o 00:03:48.653 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:48.653 CC lib/idxd/idxd_user.o 00:03:48.653 CC lib/rdma_utils/rdma_utils.o 00:03:48.653 CC lib/env_dpdk/env.o 00:03:48.653 CC lib/vmd/vmd.o 00:03:48.653 CC lib/json/json_parse.o 00:03:48.653 CC lib/json/json_util.o 00:03:48.653 LIB libspdk_trace_parser.a 00:03:48.653 LIB libspdk_rdma_provider.a 00:03:48.653 CC lib/vmd/led.o 00:03:48.653 CC lib/idxd/idxd_kernel.o 00:03:48.653 LIB libspdk_conf.a 00:03:48.653 CC lib/env_dpdk/memory.o 00:03:48.653 CC lib/env_dpdk/pci.o 00:03:48.653 CC lib/env_dpdk/init.o 00:03:48.653 LIB libspdk_rdma_utils.a 00:03:48.653 CC lib/env_dpdk/threads.o 00:03:48.653 CC lib/env_dpdk/pci_ioat.o 00:03:48.653 CC lib/env_dpdk/pci_virtio.o 00:03:48.653 CC lib/json/json_write.o 00:03:48.653 CC 
lib/env_dpdk/pci_vmd.o 00:03:48.653 CC lib/env_dpdk/pci_idxd.o 00:03:48.653 CC lib/env_dpdk/pci_event.o 00:03:48.653 CC lib/env_dpdk/sigbus_handler.o 00:03:48.912 LIB libspdk_idxd.a 00:03:48.912 CC lib/env_dpdk/pci_dpdk.o 00:03:48.912 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:48.912 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:48.912 LIB libspdk_vmd.a 00:03:48.912 LIB libspdk_json.a 00:03:49.171 CC lib/jsonrpc/jsonrpc_server.o 00:03:49.171 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:49.171 CC lib/jsonrpc/jsonrpc_client.o 00:03:49.171 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:49.430 LIB libspdk_jsonrpc.a 00:03:49.688 LIB libspdk_env_dpdk.a 00:03:49.688 CC lib/rpc/rpc.o 00:03:50.255 LIB libspdk_rpc.a 00:03:50.255 CC lib/keyring/keyring_rpc.o 00:03:50.255 CC lib/keyring/keyring.o 00:03:50.255 CC lib/notify/notify.o 00:03:50.255 CC lib/trace/trace_flags.o 00:03:50.255 CC lib/notify/notify_rpc.o 00:03:50.255 CC lib/trace/trace.o 00:03:50.255 CC lib/trace/trace_rpc.o 00:03:50.514 LIB libspdk_notify.a 00:03:50.773 LIB libspdk_trace.a 00:03:50.773 LIB libspdk_keyring.a 00:03:51.032 CC lib/sock/sock_rpc.o 00:03:51.032 CC lib/sock/sock.o 00:03:51.032 CC lib/thread/iobuf.o 00:03:51.032 CC lib/thread/thread.o 00:03:51.600 LIB libspdk_sock.a 00:03:51.869 CC lib/nvme/nvme_ctrlr.o 00:03:51.869 CC lib/nvme/nvme_ns_cmd.o 00:03:51.869 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:51.869 CC lib/nvme/nvme_fabric.o 00:03:51.869 CC lib/nvme/nvme_ns.o 00:03:51.869 CC lib/nvme/nvme_pcie.o 00:03:51.869 CC lib/nvme/nvme_pcie_common.o 00:03:51.869 CC lib/nvme/nvme.o 00:03:51.869 CC lib/nvme/nvme_qpair.o 00:03:52.820 CC lib/nvme/nvme_quirks.o 00:03:52.820 CC lib/nvme/nvme_transport.o 00:03:52.820 CC lib/nvme/nvme_discovery.o 00:03:52.820 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:52.820 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:52.820 LIB libspdk_thread.a 00:03:52.820 CC lib/nvme/nvme_tcp.o 00:03:53.079 CC lib/nvme/nvme_opal.o 00:03:53.079 CC lib/nvme/nvme_io_msg.o 00:03:53.079 CC lib/nvme/nvme_poll_group.o 00:03:53.338 CC lib/nvme/nvme_zns.o 00:03:53.338 CC lib/nvme/nvme_stubs.o 00:03:53.338 CC lib/nvme/nvme_auth.o 00:03:53.338 CC lib/nvme/nvme_cuse.o 00:03:53.597 CC lib/nvme/nvme_rdma.o 00:03:53.856 CC lib/accel/accel.o 00:03:53.856 CC lib/accel/accel_rpc.o 00:03:53.856 CC lib/blob/blobstore.o 00:03:53.856 CC lib/init/json_config.o 00:03:54.115 CC lib/virtio/virtio.o 00:03:54.115 CC lib/init/subsystem.o 00:03:54.374 CC lib/init/subsystem_rpc.o 00:03:54.374 CC lib/virtio/virtio_vhost_user.o 00:03:54.374 CC lib/virtio/virtio_vfio_user.o 00:03:54.374 CC lib/accel/accel_sw.o 00:03:54.374 CC lib/init/rpc.o 00:03:54.374 CC lib/blob/request.o 00:03:54.374 CC lib/blob/zeroes.o 00:03:54.633 LIB libspdk_init.a 00:03:54.633 CC lib/blob/blob_bs_dev.o 00:03:54.633 CC lib/virtio/virtio_pci.o 00:03:54.891 CC lib/event/log_rpc.o 00:03:54.891 CC lib/event/app.o 00:03:54.891 CC lib/event/reactor.o 00:03:54.891 CC lib/event/scheduler_static.o 00:03:54.891 CC lib/event/app_rpc.o 00:03:54.891 LIB libspdk_virtio.a 00:03:55.150 LIB libspdk_nvme.a 00:03:55.150 LIB libspdk_accel.a 00:03:55.408 LIB libspdk_event.a 00:03:55.408 CC lib/bdev/bdev_rpc.o 00:03:55.408 CC lib/bdev/bdev_zone.o 00:03:55.408 CC lib/bdev/bdev.o 00:03:55.408 CC lib/bdev/part.o 00:03:55.408 CC lib/bdev/scsi_nvme.o 00:03:57.942 LIB libspdk_blob.a 00:03:58.508 CC lib/blobfs/blobfs.o 00:03:58.508 CC lib/blobfs/tree.o 00:03:58.508 CC lib/lvol/lvol.o 00:03:58.508 LIB libspdk_bdev.a 00:03:58.766 CC lib/ublk/ublk.o 00:03:58.766 CC lib/ublk/ublk_rpc.o 00:03:58.766 CC lib/nbd/nbd.o 00:03:58.766 CC 
lib/nbd/nbd_rpc.o 00:03:58.766 CC lib/nvmf/ctrlr.o 00:03:58.766 CC lib/scsi/dev.o 00:03:58.766 CC lib/ftl/ftl_core.o 00:03:58.766 CC lib/nvmf/ctrlr_discovery.o 00:03:59.025 CC lib/scsi/lun.o 00:03:59.025 CC lib/ftl/ftl_init.o 00:03:59.025 CC lib/ftl/ftl_layout.o 00:03:59.283 CC lib/ftl/ftl_debug.o 00:03:59.283 CC lib/ftl/ftl_io.o 00:03:59.539 LIB libspdk_blobfs.a 00:03:59.539 LIB libspdk_nbd.a 00:03:59.539 CC lib/scsi/port.o 00:03:59.539 CC lib/scsi/scsi.o 00:03:59.539 CC lib/ftl/ftl_sb.o 00:03:59.539 CC lib/ftl/ftl_l2p.o 00:03:59.539 CC lib/nvmf/ctrlr_bdev.o 00:03:59.539 LIB libspdk_lvol.a 00:03:59.539 CC lib/nvmf/subsystem.o 00:03:59.539 LIB libspdk_ublk.a 00:03:59.539 CC lib/scsi/scsi_bdev.o 00:03:59.539 CC lib/scsi/scsi_pr.o 00:03:59.539 CC lib/nvmf/nvmf.o 00:03:59.539 CC lib/ftl/ftl_l2p_flat.o 00:03:59.796 CC lib/scsi/scsi_rpc.o 00:03:59.796 CC lib/scsi/task.o 00:03:59.796 CC lib/ftl/ftl_nv_cache.o 00:03:59.796 CC lib/nvmf/nvmf_rpc.o 00:03:59.796 CC lib/ftl/ftl_band.o 00:04:00.053 CC lib/ftl/ftl_band_ops.o 00:04:00.053 CC lib/ftl/ftl_writer.o 00:04:00.309 LIB libspdk_scsi.a 00:04:00.309 CC lib/ftl/ftl_rq.o 00:04:00.309 CC lib/ftl/ftl_reloc.o 00:04:00.566 CC lib/ftl/ftl_l2p_cache.o 00:04:00.566 CC lib/nvmf/transport.o 00:04:00.566 CC lib/iscsi/conn.o 00:04:00.566 CC lib/iscsi/init_grp.o 00:04:00.824 CC lib/iscsi/iscsi.o 00:04:00.824 CC lib/iscsi/md5.o 00:04:00.824 CC lib/iscsi/param.o 00:04:00.824 CC lib/nvmf/tcp.o 00:04:01.082 CC lib/iscsi/portal_grp.o 00:04:01.083 CC lib/iscsi/tgt_node.o 00:04:01.083 CC lib/ftl/ftl_p2l.o 00:04:01.341 CC lib/nvmf/stubs.o 00:04:01.341 CC lib/iscsi/iscsi_subsystem.o 00:04:01.341 CC lib/nvmf/mdns_server.o 00:04:01.341 CC lib/nvmf/rdma.o 00:04:01.341 CC lib/iscsi/iscsi_rpc.o 00:04:01.600 CC lib/vhost/vhost.o 00:04:01.600 CC lib/ftl/mngt/ftl_mngt.o 00:04:01.600 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:01.858 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:01.858 CC lib/iscsi/task.o 00:04:01.858 CC lib/vhost/vhost_rpc.o 00:04:01.858 CC lib/vhost/vhost_scsi.o 00:04:01.858 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:01.858 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:01.858 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:02.117 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:02.117 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:02.117 CC lib/vhost/vhost_blk.o 00:04:02.375 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:02.375 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:02.375 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:02.375 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:02.634 CC lib/vhost/rte_vhost_user.o 00:04:02.634 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:02.635 CC lib/ftl/utils/ftl_conf.o 00:04:02.635 LIB libspdk_iscsi.a 00:04:02.635 CC lib/ftl/utils/ftl_md.o 00:04:02.635 CC lib/ftl/utils/ftl_mempool.o 00:04:02.955 CC lib/ftl/utils/ftl_bitmap.o 00:04:02.955 CC lib/ftl/utils/ftl_property.o 00:04:02.955 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:02.955 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:02.955 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:02.955 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:02.955 CC lib/nvmf/auth.o 00:04:03.240 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:03.240 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:03.240 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:03.240 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:03.240 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:03.240 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:03.240 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:03.240 CC lib/ftl/base/ftl_base_dev.o 00:04:03.240 CC lib/ftl/base/ftl_base_bdev.o 00:04:03.498 CC lib/ftl/ftl_trace.o 00:04:03.755 LIB libspdk_ftl.a 00:04:03.755 LIB 
libspdk_vhost.a 00:04:04.014 LIB libspdk_nvmf.a 00:04:04.581 CC module/env_dpdk/env_dpdk_rpc.o 00:04:04.581 CC module/sock/posix/posix.o 00:04:04.581 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:04.581 CC module/scheduler/gscheduler/gscheduler.o 00:04:04.581 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:04.581 CC module/accel/ioat/accel_ioat.o 00:04:04.581 CC module/accel/error/accel_error.o 00:04:04.581 CC module/keyring/file/keyring.o 00:04:04.581 CC module/keyring/linux/keyring.o 00:04:04.581 CC module/blob/bdev/blob_bdev.o 00:04:04.581 LIB libspdk_env_dpdk_rpc.a 00:04:04.839 CC module/keyring/linux/keyring_rpc.o 00:04:04.839 LIB libspdk_scheduler_gscheduler.a 00:04:04.839 LIB libspdk_scheduler_dpdk_governor.a 00:04:04.839 CC module/accel/error/accel_error_rpc.o 00:04:04.839 CC module/keyring/file/keyring_rpc.o 00:04:04.839 CC module/accel/ioat/accel_ioat_rpc.o 00:04:04.839 LIB libspdk_scheduler_dynamic.a 00:04:04.840 LIB libspdk_keyring_linux.a 00:04:05.099 LIB libspdk_accel_error.a 00:04:05.099 LIB libspdk_blob_bdev.a 00:04:05.099 LIB libspdk_accel_ioat.a 00:04:05.099 LIB libspdk_keyring_file.a 00:04:05.099 CC module/accel/dsa/accel_dsa.o 00:04:05.099 CC module/accel/dsa/accel_dsa_rpc.o 00:04:05.099 CC module/accel/iaa/accel_iaa.o 00:04:05.099 CC module/accel/iaa/accel_iaa_rpc.o 00:04:05.358 CC module/bdev/gpt/gpt.o 00:04:05.358 CC module/bdev/delay/vbdev_delay.o 00:04:05.358 CC module/bdev/error/vbdev_error.o 00:04:05.358 CC module/bdev/lvol/vbdev_lvol.o 00:04:05.358 LIB libspdk_accel_iaa.a 00:04:05.358 CC module/blobfs/bdev/blobfs_bdev.o 00:04:05.358 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:05.358 LIB libspdk_accel_dsa.a 00:04:05.358 CC module/bdev/malloc/bdev_malloc.o 00:04:05.358 CC module/bdev/null/bdev_null.o 00:04:05.358 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:05.358 CC module/bdev/null/bdev_null_rpc.o 00:04:05.618 LIB libspdk_blobfs_bdev.a 00:04:05.618 CC module/bdev/gpt/vbdev_gpt.o 00:04:05.619 LIB libspdk_sock_posix.a 00:04:05.619 CC module/bdev/error/vbdev_error_rpc.o 00:04:05.619 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:05.619 LIB libspdk_bdev_null.a 00:04:05.619 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:05.619 CC module/bdev/passthru/vbdev_passthru.o 00:04:05.619 CC module/bdev/nvme/bdev_nvme.o 00:04:05.878 LIB libspdk_bdev_malloc.a 00:04:05.878 CC module/bdev/raid/bdev_raid.o 00:04:05.878 LIB libspdk_bdev_error.a 00:04:05.878 CC module/bdev/raid/bdev_raid_rpc.o 00:04:05.878 CC module/bdev/raid/bdev_raid_sb.o 00:04:05.878 LIB libspdk_bdev_gpt.a 00:04:05.878 CC module/bdev/raid/raid0.o 00:04:05.878 CC module/bdev/split/vbdev_split.o 00:04:05.878 LIB libspdk_bdev_delay.a 00:04:05.878 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:06.137 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:06.137 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:06.137 CC module/bdev/raid/raid1.o 00:04:06.137 LIB libspdk_bdev_lvol.a 00:04:06.137 CC module/bdev/split/vbdev_split_rpc.o 00:04:06.137 CC module/bdev/raid/concat.o 00:04:06.137 CC module/bdev/raid/raid5f.o 00:04:06.137 LIB libspdk_bdev_passthru.a 00:04:06.396 CC module/bdev/aio/bdev_aio.o 00:04:06.396 LIB libspdk_bdev_split.a 00:04:06.396 CC module/bdev/ftl/bdev_ftl.o 00:04:06.396 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:06.396 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:06.396 CC module/bdev/iscsi/bdev_iscsi.o 00:04:06.657 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:06.657 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:06.657 LIB libspdk_bdev_zone_block.a 00:04:06.657 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:06.657 LIB libspdk_bdev_ftl.a 00:04:06.657 CC module/bdev/aio/bdev_aio_rpc.o 00:04:06.657 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:06.657 CC module/bdev/nvme/nvme_rpc.o 00:04:06.916 CC module/bdev/nvme/bdev_mdns_client.o 00:04:06.916 CC module/bdev/nvme/vbdev_opal.o 00:04:06.916 LIB libspdk_bdev_aio.a 00:04:06.916 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:06.916 LIB libspdk_bdev_iscsi.a 00:04:06.916 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:06.916 LIB libspdk_bdev_raid.a 00:04:07.175 LIB libspdk_bdev_virtio.a 00:04:08.553 LIB libspdk_bdev_nvme.a 00:04:08.813 CC module/event/subsystems/iobuf/iobuf.o 00:04:08.813 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:08.813 CC module/event/subsystems/sock/sock.o 00:04:08.813 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:08.813 CC module/event/subsystems/scheduler/scheduler.o 00:04:08.813 CC module/event/subsystems/keyring/keyring.o 00:04:08.813 CC module/event/subsystems/vmd/vmd.o 00:04:08.813 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:09.072 LIB libspdk_event_keyring.a 00:04:09.072 LIB libspdk_event_vhost_blk.a 00:04:09.072 LIB libspdk_event_iobuf.a 00:04:09.072 LIB libspdk_event_sock.a 00:04:09.072 LIB libspdk_event_scheduler.a 00:04:09.072 LIB libspdk_event_vmd.a 00:04:09.331 CC module/event/subsystems/accel/accel.o 00:04:09.590 LIB libspdk_event_accel.a 00:04:09.849 CC module/event/subsystems/bdev/bdev.o 00:04:10.109 LIB libspdk_event_bdev.a 00:04:10.368 CC module/event/subsystems/nbd/nbd.o 00:04:10.368 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:10.368 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:10.368 CC module/event/subsystems/scsi/scsi.o 00:04:10.368 CC module/event/subsystems/ublk/ublk.o 00:04:10.627 LIB libspdk_event_nbd.a 00:04:10.627 LIB libspdk_event_ublk.a 00:04:10.627 LIB libspdk_event_scsi.a 00:04:10.627 LIB libspdk_event_nvmf.a 00:04:10.886 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:10.886 CC module/event/subsystems/iscsi/iscsi.o 00:04:11.146 LIB libspdk_event_vhost_scsi.a 00:04:11.146 LIB libspdk_event_iscsi.a 00:04:11.405 CC app/trace_record/trace_record.o 00:04:11.405 CXX app/trace/trace.o 00:04:11.405 CC app/spdk_lspci/spdk_lspci.o 00:04:11.405 CC app/spdk_nvme_identify/identify.o 00:04:11.405 CC app/spdk_nvme_perf/perf.o 00:04:11.405 CC app/iscsi_tgt/iscsi_tgt.o 00:04:11.405 CC app/nvmf_tgt/nvmf_main.o 00:04:11.405 CC app/spdk_tgt/spdk_tgt.o 00:04:11.405 CC examples/util/zipf/zipf.o 00:04:11.405 CC test/thread/poller_perf/poller_perf.o 00:04:11.665 LINK spdk_lspci 00:04:11.665 LINK nvmf_tgt 00:04:11.665 LINK zipf 00:04:11.665 LINK iscsi_tgt 00:04:11.665 LINK spdk_trace_record 00:04:11.665 LINK poller_perf 00:04:11.665 LINK spdk_tgt 00:04:11.925 LINK spdk_trace 00:04:12.184 CC test/thread/lock/spdk_lock.o 00:04:12.184 CC examples/ioat/perf/perf.o 00:04:12.184 CC examples/vmd/lsvmd/lsvmd.o 00:04:12.442 CC examples/vmd/led/led.o 00:04:12.442 LINK spdk_nvme_perf 00:04:12.442 LINK lsvmd 00:04:12.442 LINK ioat_perf 00:04:12.442 LINK led 00:04:12.700 LINK spdk_nvme_identify 00:04:12.700 CC app/spdk_nvme_discover/discovery_aer.o 00:04:12.958 LINK spdk_nvme_discover 00:04:12.958 CC examples/ioat/verify/verify.o 00:04:13.216 LINK verify 00:04:13.474 CC app/spdk_top/spdk_top.o 00:04:13.474 CC app/vhost/vhost.o 00:04:13.770 CC examples/idxd/perf/perf.o 00:04:13.770 CC test/dma/test_dma/test_dma.o 00:04:13.770 LINK vhost 00:04:13.770 CC app/spdk_dd/spdk_dd.o 00:04:13.770 CC app/fio/nvme/fio_plugin.o 00:04:13.770 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:04:14.039 CC test/app/bdev_svc/bdev_svc.o 00:04:14.039 LINK idxd_perf 00:04:14.039 LINK interrupt_tgt 00:04:14.039 LINK test_dma 00:04:14.298 LINK bdev_svc 00:04:14.298 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:14.298 LINK spdk_lock 00:04:14.298 LINK spdk_dd 00:04:14.298 LINK spdk_top 00:04:14.556 LINK spdk_nvme 00:04:14.556 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:14.814 LINK nvme_fuzz 00:04:14.814 CC app/fio/bdev/fio_plugin.o 00:04:15.072 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:15.072 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:15.331 CC examples/thread/thread/thread_ex.o 00:04:15.589 LINK spdk_bdev 00:04:15.589 CC examples/sock/hello_world/hello_sock.o 00:04:15.589 LINK vhost_fuzz 00:04:15.589 LINK thread 00:04:15.589 CC test/app/histogram_perf/histogram_perf.o 00:04:15.846 CC test/app/jsoncat/jsoncat.o 00:04:15.846 LINK histogram_perf 00:04:15.846 LINK jsoncat 00:04:15.846 LINK hello_sock 00:04:16.105 TEST_HEADER include/spdk/accel.h 00:04:16.105 TEST_HEADER include/spdk/accel_module.h 00:04:16.105 TEST_HEADER include/spdk/assert.h 00:04:16.105 TEST_HEADER include/spdk/barrier.h 00:04:16.105 TEST_HEADER include/spdk/base64.h 00:04:16.105 TEST_HEADER include/spdk/bdev.h 00:04:16.105 TEST_HEADER include/spdk/bdev_module.h 00:04:16.105 TEST_HEADER include/spdk/bdev_zone.h 00:04:16.105 TEST_HEADER include/spdk/bit_array.h 00:04:16.105 TEST_HEADER include/spdk/bit_pool.h 00:04:16.105 TEST_HEADER include/spdk/blob.h 00:04:16.105 TEST_HEADER include/spdk/blob_bdev.h 00:04:16.105 TEST_HEADER include/spdk/blobfs.h 00:04:16.105 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:16.105 TEST_HEADER include/spdk/conf.h 00:04:16.105 TEST_HEADER include/spdk/config.h 00:04:16.105 TEST_HEADER include/spdk/cpuset.h 00:04:16.105 TEST_HEADER include/spdk/crc32.h 00:04:16.105 TEST_HEADER include/spdk/crc16.h 00:04:16.105 TEST_HEADER include/spdk/crc64.h 00:04:16.105 TEST_HEADER include/spdk/dif.h 00:04:16.105 TEST_HEADER include/spdk/dma.h 00:04:16.105 TEST_HEADER include/spdk/endian.h 00:04:16.105 TEST_HEADER include/spdk/env.h 00:04:16.105 TEST_HEADER include/spdk/env_dpdk.h 00:04:16.105 TEST_HEADER include/spdk/event.h 00:04:16.105 TEST_HEADER include/spdk/fd.h 00:04:16.105 TEST_HEADER include/spdk/fd_group.h 00:04:16.105 TEST_HEADER include/spdk/file.h 00:04:16.105 TEST_HEADER include/spdk/ftl.h 00:04:16.105 TEST_HEADER include/spdk/gpt_spec.h 00:04:16.105 TEST_HEADER include/spdk/hexlify.h 00:04:16.105 TEST_HEADER include/spdk/histogram_data.h 00:04:16.105 TEST_HEADER include/spdk/idxd.h 00:04:16.105 TEST_HEADER include/spdk/idxd_spec.h 00:04:16.105 TEST_HEADER include/spdk/init.h 00:04:16.105 TEST_HEADER include/spdk/ioat.h 00:04:16.105 TEST_HEADER include/spdk/ioat_spec.h 00:04:16.105 TEST_HEADER include/spdk/iscsi_spec.h 00:04:16.105 TEST_HEADER include/spdk/json.h 00:04:16.105 TEST_HEADER include/spdk/jsonrpc.h 00:04:16.105 TEST_HEADER include/spdk/keyring.h 00:04:16.105 TEST_HEADER include/spdk/keyring_module.h 00:04:16.105 TEST_HEADER include/spdk/likely.h 00:04:16.105 TEST_HEADER include/spdk/log.h 00:04:16.105 TEST_HEADER include/spdk/lvol.h 00:04:16.105 TEST_HEADER include/spdk/memory.h 00:04:16.105 TEST_HEADER include/spdk/mmio.h 00:04:16.105 TEST_HEADER include/spdk/nbd.h 00:04:16.105 TEST_HEADER include/spdk/net.h 00:04:16.105 TEST_HEADER include/spdk/notify.h 00:04:16.105 TEST_HEADER include/spdk/nvme.h 00:04:16.105 TEST_HEADER include/spdk/nvme_intel.h 00:04:16.105 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:16.105 TEST_HEADER 
include/spdk/nvme_ocssd_spec.h 00:04:16.105 TEST_HEADER include/spdk/nvme_spec.h 00:04:16.105 TEST_HEADER include/spdk/nvme_zns.h 00:04:16.105 TEST_HEADER include/spdk/nvmf.h 00:04:16.105 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:16.105 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:16.105 TEST_HEADER include/spdk/nvmf_spec.h 00:04:16.105 TEST_HEADER include/spdk/nvmf_transport.h 00:04:16.105 TEST_HEADER include/spdk/opal.h 00:04:16.105 TEST_HEADER include/spdk/opal_spec.h 00:04:16.105 TEST_HEADER include/spdk/pci_ids.h 00:04:16.105 TEST_HEADER include/spdk/pipe.h 00:04:16.105 TEST_HEADER include/spdk/queue.h 00:04:16.105 TEST_HEADER include/spdk/reduce.h 00:04:16.105 TEST_HEADER include/spdk/rpc.h 00:04:16.105 TEST_HEADER include/spdk/scheduler.h 00:04:16.105 TEST_HEADER include/spdk/scsi.h 00:04:16.105 TEST_HEADER include/spdk/scsi_spec.h 00:04:16.105 TEST_HEADER include/spdk/sock.h 00:04:16.105 TEST_HEADER include/spdk/stdinc.h 00:04:16.105 TEST_HEADER include/spdk/string.h 00:04:16.363 TEST_HEADER include/spdk/thread.h 00:04:16.363 TEST_HEADER include/spdk/trace.h 00:04:16.363 TEST_HEADER include/spdk/trace_parser.h 00:04:16.363 TEST_HEADER include/spdk/tree.h 00:04:16.363 TEST_HEADER include/spdk/ublk.h 00:04:16.363 TEST_HEADER include/spdk/util.h 00:04:16.363 TEST_HEADER include/spdk/uuid.h 00:04:16.363 TEST_HEADER include/spdk/version.h 00:04:16.363 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:16.363 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:16.363 TEST_HEADER include/spdk/vhost.h 00:04:16.363 TEST_HEADER include/spdk/vmd.h 00:04:16.363 TEST_HEADER include/spdk/xor.h 00:04:16.363 TEST_HEADER include/spdk/zipf.h 00:04:16.363 CXX test/cpp_headers/accel.o 00:04:16.363 CC test/env/mem_callbacks/mem_callbacks.o 00:04:16.363 CC test/env/vtophys/vtophys.o 00:04:16.363 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:16.620 CC test/env/memory/memory_ut.o 00:04:16.620 CC test/event/event_perf/event_perf.o 00:04:16.620 CXX test/cpp_headers/accel_module.o 00:04:16.620 LINK mem_callbacks 00:04:16.620 CC test/event/reactor/reactor.o 00:04:16.620 LINK vtophys 00:04:16.620 LINK env_dpdk_post_init 00:04:16.878 LINK reactor 00:04:16.878 LINK iscsi_fuzz 00:04:17.136 CXX test/cpp_headers/assert.o 00:04:17.136 LINK event_perf 00:04:17.136 CC test/env/pci/pci_ut.o 00:04:17.393 CXX test/cpp_headers/barrier.o 00:04:17.651 CC test/event/reactor_perf/reactor_perf.o 00:04:17.651 CXX test/cpp_headers/base64.o 00:04:17.651 LINK memory_ut 00:04:17.651 CC test/event/app_repeat/app_repeat.o 00:04:17.651 LINK pci_ut 00:04:17.651 CXX test/cpp_headers/bdev.o 00:04:17.651 LINK reactor_perf 00:04:17.651 CXX test/cpp_headers/bdev_module.o 00:04:17.651 CC examples/nvme/hello_world/hello_world.o 00:04:17.910 LINK app_repeat 00:04:17.910 CXX test/cpp_headers/bdev_zone.o 00:04:17.910 CXX test/cpp_headers/bit_array.o 00:04:17.910 CC test/app/stub/stub.o 00:04:17.910 CC test/nvme/aer/aer.o 00:04:17.910 CC test/event/scheduler/scheduler.o 00:04:17.910 CXX test/cpp_headers/bit_pool.o 00:04:18.168 CXX test/cpp_headers/blob.o 00:04:18.168 LINK stub 00:04:18.168 LINK hello_world 00:04:18.168 CC test/nvme/reset/reset.o 00:04:18.168 CXX test/cpp_headers/blob_bdev.o 00:04:18.168 CC examples/nvme/reconnect/reconnect.o 00:04:18.168 LINK scheduler 00:04:18.168 CXX test/cpp_headers/blobfs.o 00:04:18.426 CXX test/cpp_headers/blobfs_bdev.o 00:04:18.426 CXX test/cpp_headers/conf.o 00:04:18.426 LINK aer 00:04:18.426 LINK reset 00:04:18.426 CXX test/cpp_headers/config.o 00:04:18.426 CXX test/cpp_headers/cpuset.o 00:04:18.684 
LINK reconnect 00:04:18.684 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:18.684 CC test/rpc_client/rpc_client_test.o 00:04:18.684 CXX test/cpp_headers/crc16.o 00:04:18.684 CXX test/cpp_headers/crc32.o 00:04:18.970 LINK rpc_client_test 00:04:18.970 CXX test/cpp_headers/crc64.o 00:04:18.970 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:04:18.970 CC examples/nvme/arbitration/arbitration.o 00:04:19.257 CC test/accel/dif/dif.o 00:04:19.257 CXX test/cpp_headers/dif.o 00:04:19.257 LINK nvme_manage 00:04:19.257 CXX test/cpp_headers/dma.o 00:04:19.257 LINK histogram_ut 00:04:19.257 CXX test/cpp_headers/endian.o 00:04:19.515 CC test/nvme/sgl/sgl.o 00:04:19.515 CXX test/cpp_headers/env.o 00:04:19.515 LINK arbitration 00:04:19.515 CXX test/cpp_headers/env_dpdk.o 00:04:19.515 CC test/blobfs/mkfs/mkfs.o 00:04:19.515 CC test/unit/lib/log/log.c/log_ut.o 00:04:19.515 CXX test/cpp_headers/event.o 00:04:19.775 CC test/lvol/esnap/esnap.o 00:04:19.775 CXX test/cpp_headers/fd.o 00:04:19.775 LINK dif 00:04:19.775 LINK sgl 00:04:19.775 LINK mkfs 00:04:19.775 CXX test/cpp_headers/fd_group.o 00:04:20.034 LINK log_ut 00:04:20.034 CC test/unit/lib/rdma/common.c/common_ut.o 00:04:20.034 CXX test/cpp_headers/file.o 00:04:20.034 CXX test/cpp_headers/ftl.o 00:04:20.034 CXX test/cpp_headers/gpt_spec.o 00:04:20.034 CC test/nvme/e2edp/nvme_dp.o 00:04:20.293 CXX test/cpp_headers/hexlify.o 00:04:20.293 CXX test/cpp_headers/histogram_data.o 00:04:20.293 CC test/unit/lib/util/base64.c/base64_ut.o 00:04:20.293 CC examples/nvme/hotplug/hotplug.o 00:04:20.552 LINK nvme_dp 00:04:20.552 CXX test/cpp_headers/idxd.o 00:04:20.552 LINK base64_ut 00:04:20.552 LINK hotplug 00:04:20.811 CXX test/cpp_headers/idxd_spec.o 00:04:20.811 CC test/unit/lib/dma/dma.c/dma_ut.o 00:04:20.811 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:04:20.811 LINK common_ut 00:04:20.811 CXX test/cpp_headers/init.o 00:04:20.811 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:04:21.070 CXX test/cpp_headers/ioat.o 00:04:21.070 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:04:21.328 CXX test/cpp_headers/ioat_spec.o 00:04:21.328 LINK cpuset_ut 00:04:21.328 CC test/nvme/overhead/overhead.o 00:04:21.328 LINK ioat_ut 00:04:21.328 CXX test/cpp_headers/iscsi_spec.o 00:04:21.586 LINK dma_ut 00:04:21.586 LINK bit_array_ut 00:04:21.586 CXX test/cpp_headers/json.o 00:04:21.586 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:04:21.586 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:04:21.586 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:21.845 LINK overhead 00:04:21.845 LINK crc16_ut 00:04:21.845 LINK crc32_ieee_ut 00:04:21.845 CC test/nvme/err_injection/err_injection.o 00:04:21.845 CXX test/cpp_headers/jsonrpc.o 00:04:21.845 CC test/bdev/bdevio/bdevio.o 00:04:21.845 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:04:21.845 CC examples/nvme/abort/abort.o 00:04:21.845 LINK cmb_copy 00:04:22.104 LINK err_injection 00:04:22.104 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:04:22.104 CXX test/cpp_headers/keyring.o 00:04:22.104 LINK crc32c_ut 00:04:22.104 CC test/unit/lib/util/dif.c/dif_ut.o 00:04:22.104 LINK crc64_ut 00:04:22.104 CXX test/cpp_headers/keyring_module.o 00:04:22.363 LINK bdevio 00:04:22.364 CC test/unit/lib/util/file.c/file_ut.o 00:04:22.364 LINK abort 00:04:22.364 CXX test/cpp_headers/likely.o 00:04:22.364 CC test/unit/lib/util/iov.c/iov_ut.o 00:04:22.622 LINK file_ut 00:04:22.622 CXX test/cpp_headers/log.o 00:04:22.622 LINK iov_ut 00:04:22.881 CC test/unit/lib/util/math.c/math_ut.o 00:04:22.881 CXX test/cpp_headers/lvol.o 00:04:22.881 CC 
test/unit/lib/util/net.c/net_ut.o 00:04:22.881 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:22.881 CC test/nvme/startup/startup.o 00:04:22.881 LINK math_ut 00:04:22.881 CXX test/cpp_headers/memory.o 00:04:22.881 LINK net_ut 00:04:22.881 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:04:23.139 LINK pmr_persistence 00:04:23.139 LINK startup 00:04:23.139 CXX test/cpp_headers/mmio.o 00:04:23.139 CXX test/cpp_headers/nbd.o 00:04:23.139 CC test/unit/lib/util/string.c/string_ut.o 00:04:23.139 CC test/nvme/reserve/reserve.o 00:04:23.398 CXX test/cpp_headers/net.o 00:04:23.709 LINK reserve 00:04:23.709 LINK string_ut 00:04:23.709 CC test/nvme/simple_copy/simple_copy.o 00:04:23.709 LINK dif_ut 00:04:23.709 CXX test/cpp_headers/notify.o 00:04:23.709 LINK pipe_ut 00:04:23.968 CC test/unit/lib/util/xor.c/xor_ut.o 00:04:23.968 LINK simple_copy 00:04:23.968 CXX test/cpp_headers/nvme.o 00:04:23.968 CXX test/cpp_headers/nvme_intel.o 00:04:24.226 CC examples/accel/perf/accel_perf.o 00:04:24.226 CC examples/blob/hello_world/hello_blob.o 00:04:24.226 CC test/nvme/connect_stress/connect_stress.o 00:04:24.226 CXX test/cpp_headers/nvme_ocssd.o 00:04:24.226 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:24.226 CC examples/blob/cli/blobcli.o 00:04:24.485 LINK connect_stress 00:04:24.485 CXX test/cpp_headers/nvme_spec.o 00:04:24.485 LINK hello_blob 00:04:24.485 CC test/nvme/boot_partition/boot_partition.o 00:04:24.485 LINK xor_ut 00:04:24.485 CC test/nvme/compliance/nvme_compliance.o 00:04:24.743 CXX test/cpp_headers/nvme_zns.o 00:04:24.743 LINK accel_perf 00:04:24.743 LINK boot_partition 00:04:25.001 CXX test/cpp_headers/nvmf.o 00:04:25.001 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:04:25.001 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:04:25.001 LINK blobcli 00:04:25.001 LINK nvme_compliance 00:04:25.259 CXX test/cpp_headers/nvmf_cmd.o 00:04:25.259 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:25.517 CXX test/cpp_headers/nvmf_spec.o 00:04:25.517 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:04:25.776 LINK json_util_ut 00:04:25.776 CC test/nvme/fused_ordering/fused_ordering.o 00:04:25.776 CXX test/cpp_headers/nvmf_transport.o 00:04:25.776 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:26.034 CXX test/cpp_headers/opal.o 00:04:26.034 LINK fused_ordering 00:04:26.034 CC test/nvme/fdp/fdp.o 00:04:26.034 LINK doorbell_aers 00:04:26.034 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:04:26.034 LINK pci_event_ut 00:04:26.034 CXX test/cpp_headers/opal_spec.o 00:04:26.292 CXX test/cpp_headers/pci_ids.o 00:04:26.550 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:04:26.550 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:04:26.550 LINK esnap 00:04:26.808 LINK fdp 00:04:26.808 CXX test/cpp_headers/pipe.o 00:04:26.808 CXX test/cpp_headers/queue.o 00:04:26.808 LINK idxd_user_ut 00:04:27.066 CXX test/cpp_headers/reduce.o 00:04:27.066 CC test/nvme/cuse/cuse.o 00:04:27.066 CXX test/cpp_headers/rpc.o 00:04:27.066 CXX test/cpp_headers/scheduler.o 00:04:27.066 CC examples/bdev/hello_world/hello_bdev.o 00:04:27.324 CXX test/cpp_headers/scsi.o 00:04:27.324 CC examples/bdev/bdevperf/bdevperf.o 00:04:27.324 CXX test/cpp_headers/scsi_spec.o 00:04:27.324 CXX test/cpp_headers/sock.o 00:04:27.324 LINK json_write_ut 00:04:27.583 CXX test/cpp_headers/stdinc.o 00:04:27.583 LINK hello_bdev 00:04:27.583 CXX test/cpp_headers/string.o 00:04:27.583 CXX test/cpp_headers/thread.o 00:04:27.583 CXX test/cpp_headers/trace.o 00:04:27.583 LINK idxd_ut 00:04:27.841 CXX test/cpp_headers/trace_parser.o 00:04:27.841 CXX 
test/cpp_headers/tree.o 00:04:27.841 CXX test/cpp_headers/ublk.o 00:04:27.841 CXX test/cpp_headers/util.o 00:04:27.841 CXX test/cpp_headers/uuid.o 00:04:27.841 CXX test/cpp_headers/version.o 00:04:27.841 CXX test/cpp_headers/vfio_user_pci.o 00:04:27.841 CXX test/cpp_headers/vfio_user_spec.o 00:04:27.841 CXX test/cpp_headers/vhost.o 00:04:27.841 LINK json_parse_ut 00:04:28.097 CXX test/cpp_headers/vmd.o 00:04:28.097 CXX test/cpp_headers/xor.o 00:04:28.097 CXX test/cpp_headers/zipf.o 00:04:28.391 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:04:28.671 LINK cuse 00:04:28.671 LINK bdevperf 00:04:28.930 LINK jsonrpc_server_ut 00:04:29.865 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:04:30.432 LINK rpc_ut 00:04:31.001 CC examples/nvmf/nvmf/nvmf.o 00:04:31.001 CC test/unit/lib/sock/sock.c/sock_ut.o 00:04:31.001 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:04:31.001 CC test/unit/lib/sock/posix.c/posix_ut.o 00:04:31.001 CC test/unit/lib/thread/thread.c/thread_ut.o 00:04:31.001 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:04:31.001 CC test/unit/lib/notify/notify.c/notify_ut.o 00:04:31.259 LINK nvmf 00:04:31.826 LINK keyring_ut 00:04:31.826 LINK notify_ut 00:04:32.394 LINK iobuf_ut 00:04:32.653 LINK posix_ut 00:04:33.221 LINK sock_ut 00:04:33.486 LINK thread_ut 00:04:33.744 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:04:33.744 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:04:33.744 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:04:33.744 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:04:33.744 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:04:33.744 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:04:33.744 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:04:33.744 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:04:33.744 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:04:34.003 CC test/unit/lib/accel/accel.c/accel_ut.o 00:04:34.939 LINK nvme_ns_ut 00:04:34.939 LINK nvme_poll_group_ut 00:04:34.939 LINK nvme_ctrlr_ocssd_cmd_ut 00:04:35.196 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:04:35.196 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:04:35.196 LINK nvme_ctrlr_cmd_ut 00:04:35.454 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:04:35.454 LINK nvme_ut 00:04:35.454 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:04:35.711 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:04:35.711 LINK nvme_quirks_ut 00:04:35.970 LINK nvme_ns_ocssd_cmd_ut 00:04:35.970 LINK nvme_ns_cmd_ut 00:04:35.970 LINK nvme_pcie_ut 00:04:35.970 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:04:36.228 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:04:36.228 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:04:36.486 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:04:36.745 LINK nvme_qpair_ut 00:04:36.745 LINK nvme_transport_ut 00:04:37.003 LINK nvme_io_msg_ut 00:04:37.003 LINK accel_ut 00:04:37.003 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:04:37.261 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:04:37.261 LINK nvme_opal_ut 00:04:37.261 CC test/unit/lib/blob/blob.c/blob_ut.o 00:04:37.520 LINK blob_bdev_ut 00:04:37.520 LINK nvme_fabric_ut 00:04:37.520 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:04:37.520 LINK nvme_ctrlr_ut 00:04:37.520 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:04:38.087 LINK nvme_pcie_common_ut 00:04:38.087 LINK rpc_ut 00:04:38.346 LINK subsystem_ut 00:04:38.624 LINK nvme_tcp_ut 00:04:38.912 CC 
test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:04:38.912 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:04:38.912 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:04:38.912 CC test/unit/lib/bdev/part.c/part_ut.o 00:04:38.912 CC test/unit/lib/event/app.c/app_ut.o 00:04:38.912 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:04:39.171 LINK scsi_nvme_ut 00:04:39.171 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:04:39.171 LINK nvme_cuse_ut 00:04:39.430 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:04:39.430 LINK gpt_ut 00:04:39.689 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:04:39.689 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:04:39.947 LINK nvme_rdma_ut 00:04:39.947 LINK app_ut 00:04:40.207 LINK bdev_zone_ut 00:04:40.207 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:04:40.207 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:04:40.466 LINK reactor_ut 00:04:40.466 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:04:40.466 LINK vbdev_lvol_ut 00:04:40.726 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:04:40.985 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:04:41.244 LINK bdev_raid_sb_ut 00:04:41.244 LINK vbdev_zone_block_ut 00:04:41.504 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:04:41.504 LINK concat_ut 00:04:41.763 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:04:42.022 LINK raid1_ut 00:04:42.022 LINK bdev_raid_ut 00:04:42.590 LINK raid0_ut 00:04:43.159 LINK raid5f_ut 00:04:43.159 LINK part_ut 00:04:44.098 LINK bdev_ut 00:04:45.035 LINK bdev_ut 00:04:45.972 LINK blob_ut 00:04:45.972 LINK bdev_nvme_ut 00:04:46.541 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:04:46.541 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:04:46.541 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:04:46.541 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:04:46.541 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:04:46.541 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:04:46.541 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:04:46.541 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:04:46.541 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:04:46.541 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:04:46.800 LINK blobfs_bdev_ut 00:04:46.800 LINK tree_ut 00:04:47.058 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:04:47.058 LINK dev_ut 00:04:47.058 LINK scsi_ut 00:04:47.058 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:04:47.315 LINK ftl_l2p_ut 00:04:47.315 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:04:47.573 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:04:47.573 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:04:47.831 LINK lun_ut 00:04:47.831 LINK scsi_pr_ut 00:04:48.406 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:04:48.406 CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o 00:04:48.406 LINK blobfs_sync_ut 00:04:48.406 LINK blobfs_async_ut 00:04:48.973 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:04:48.973 LINK scsi_bdev_ut 00:04:48.973 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:04:49.232 LINK ftl_bitmap_ut 00:04:49.232 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:04:49.232 LINK lvol_ut 00:04:49.491 LINK ftl_io_ut 00:04:49.491 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:04:49.792 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:04:49.792 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:04:50.051 LINK ftl_mempool_ut 00:04:50.051 LINK ftl_band_ut 00:04:50.051 LINK ftl_p2l_ut 00:04:50.309 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:04:50.309 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 
00:04:50.309 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:04:50.309 LINK ftl_mngt_ut 00:04:50.567 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:04:51.133 LINK subsystem_ut 00:04:51.391 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:04:51.650 LINK ctrlr_bdev_ut 00:04:51.650 LINK ftl_sb_ut 00:04:51.650 LINK ftl_layout_upgrade_ut 00:04:51.650 LINK ctrlr_ut 00:04:51.907 LINK ctrlr_discovery_ut 00:04:51.907 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:04:51.907 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:04:52.165 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:04:52.165 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:04:52.423 CC test/unit/lib/iscsi/param.c/param_ut.o 00:04:52.423 LINK nvmf_ut 00:04:52.681 LINK init_grp_ut 00:04:52.681 LINK auth_ut 00:04:52.940 LINK tcp_ut 00:04:52.940 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:04:52.940 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:04:53.198 LINK param_ut 00:04:54.131 LINK conn_ut 00:04:54.131 LINK portal_grp_ut 00:04:54.696 LINK tgt_node_ut 00:04:55.262 LINK rdma_ut 00:04:55.529 LINK iscsi_ut 00:04:55.529 LINK vhost_ut 00:04:55.529 LINK transport_ut 00:04:56.110 ************************************ 00:04:56.110 END TEST unittest_build 00:04:56.110 ************************************ 00:04:56.110 00:04:56.110 real 1m56.738s 00:04:56.110 user 8m19.469s 00:04:56.110 sys 2m25.895s 00:04:56.110 14:58:51 unittest_build -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:56.110 14:58:51 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:04:56.110 14:58:51 -- common/autotest_common.sh@1142 -- $ return 0 00:04:56.110 14:58:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:56.110 14:58:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:56.110 14:58:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:56.110 14:58:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.110 14:58:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:56.110 14:58:51 -- pm/common@44 -- $ pid=2586 00:04:56.110 14:58:51 -- pm/common@50 -- $ kill -TERM 2586 00:04:56.110 14:58:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.110 14:58:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:56.110 14:58:51 -- pm/common@44 -- $ pid=2588 00:04:56.110 14:58:51 -- pm/common@50 -- $ kill -TERM 2588 00:04:56.110 14:58:51 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:56.110 14:58:51 -- nvmf/common.sh@7 -- # uname -s 00:04:56.110 14:58:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.110 14:58:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.110 14:58:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.110 14:58:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.110 14:58:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.110 14:58:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.110 14:58:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.110 14:58:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.110 14:58:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.110 14:58:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.110 14:58:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:db4a2233-2afc-4dde-b9ec-9e18d94548e8 00:04:56.110 14:58:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=db4a2233-2afc-4dde-b9ec-9e18d94548e8 
00:04:56.110 14:58:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.110 14:58:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.110 14:58:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.110 14:58:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.110 14:58:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.110 14:58:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.110 14:58:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.110 14:58:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.110 14:58:51 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:56.110 14:58:51 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:56.111 14:58:51 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:56.111 14:58:51 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:56.111 14:58:51 -- paths/export.sh@6 -- # export PATH 00:04:56.111 14:58:51 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:56.111 14:58:51 -- nvmf/common.sh@47 -- # : 0 00:04:56.111 14:58:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:56.111 14:58:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:56.111 14:58:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.111 14:58:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.111 14:58:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.111 14:58:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:56.111 14:58:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:56.111 14:58:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:56.111 14:58:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:56.111 14:58:51 -- spdk/autotest.sh@32 -- # uname -s 00:04:56.111 14:58:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:56.111 14:58:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:04:56.111 14:58:51 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:56.111 14:58:51 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:56.111 14:58:51 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:56.111 14:58:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:56.111 14:58:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:56.111 14:58:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:04:56.111 14:58:51 -- spdk/autotest.sh@48 -- # udevadm_pid=70494 00:04:56.111 14:58:51 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:04:56.111 14:58:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:56.111 14:58:51 -- pm/common@17 -- # local monitor 00:04:56.111 14:58:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.111 14:58:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.111 14:58:51 -- pm/common@25 -- # sleep 1 00:04:56.369 14:58:51 -- pm/common@21 -- # date +%s 00:04:56.369 14:58:51 -- pm/common@21 -- # date +%s 00:04:56.369 14:58:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721746731 00:04:56.369 14:58:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721746731 00:04:56.369 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721746731_collect-vmstat.pm.log 00:04:56.369 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721746731_collect-cpu-load.pm.log 00:04:57.303 14:58:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:57.303 14:58:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:57.303 14:58:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.303 14:58:52 -- common/autotest_common.sh@10 -- # set +x 00:04:57.303 14:58:52 -- spdk/autotest.sh@59 -- # create_test_list 00:04:57.303 14:58:52 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:57.303 14:58:52 -- common/autotest_common.sh@10 -- # set +x 00:04:57.304 14:58:52 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:57.304 14:58:52 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:57.304 14:58:52 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:57.304 14:58:52 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:57.304 14:58:52 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:57.304 14:58:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:57.304 14:58:52 -- common/autotest_common.sh@1455 -- # uname 00:04:57.304 14:58:52 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:57.304 14:58:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:57.304 14:58:52 -- common/autotest_common.sh@1475 -- # uname 00:04:57.304 14:58:52 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:57.304 14:58:52 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:57.304 14:58:52 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:57.304 14:58:52 -- spdk/autotest.sh@72 -- # hash lcov 00:04:57.304 14:58:52 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:57.304 14:58:52 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:57.304 --rc lcov_branch_coverage=1 
00:04:57.304 --rc lcov_function_coverage=1 00:04:57.304 --rc genhtml_branch_coverage=1 00:04:57.304 --rc genhtml_function_coverage=1 00:04:57.304 --rc genhtml_legend=1 00:04:57.304 --rc geninfo_all_blocks=1 00:04:57.304 ' 00:04:57.304 14:58:52 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:57.304 --rc lcov_branch_coverage=1 00:04:57.304 --rc lcov_function_coverage=1 00:04:57.304 --rc genhtml_branch_coverage=1 00:04:57.304 --rc genhtml_function_coverage=1 00:04:57.304 --rc genhtml_legend=1 00:04:57.304 --rc geninfo_all_blocks=1 00:04:57.304 ' 00:04:57.304 14:58:52 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:57.304 --rc lcov_branch_coverage=1 00:04:57.304 --rc lcov_function_coverage=1 00:04:57.304 --rc genhtml_branch_coverage=1 00:04:57.304 --rc genhtml_function_coverage=1 00:04:57.304 --rc genhtml_legend=1 00:04:57.304 --rc geninfo_all_blocks=1 00:04:57.304 --no-external' 00:04:57.304 14:58:52 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:57.304 --rc lcov_branch_coverage=1 00:04:57.304 --rc lcov_function_coverage=1 00:04:57.304 --rc genhtml_branch_coverage=1 00:04:57.304 --rc genhtml_function_coverage=1 00:04:57.304 --rc genhtml_legend=1 00:04:57.304 --rc geninfo_all_blocks=1 00:04:57.304 --no-external' 00:04:57.304 14:58:52 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:57.304 lcov: LCOV version 1.15 00:04:57.304 14:58:52 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:03.878 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:03.878 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:50.572 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:50.572 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:50.572 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:50.572 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:50.572 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:50.572 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:50.572 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:50.572 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:50.572 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:50.572 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:50.572 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:50.572 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:50.572 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:50.572 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 
00:05:50.572 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:50.572 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:50.572 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:50.572 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:50.572 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:50.572 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:50.572 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:50.572 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 
00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:50.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:50.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:50.574 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:50.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:50.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:57.150 14:59:51 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:57.150 14:59:51 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:05:57.150 14:59:51 -- common/autotest_common.sh@10 -- # set +x 00:05:57.150 14:59:51 -- spdk/autotest.sh@91 -- # rm -f 00:05:57.150 14:59:51 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:57.150 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:05:57.150 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:57.150 14:59:52 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:57.150 14:59:52 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:57.150 14:59:52 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:57.150 14:59:52 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:57.150 14:59:52 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:57.150 14:59:52 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:57.150 14:59:52 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:57.150 14:59:52 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:57.150 14:59:52 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:57.150 14:59:52 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:57.150 14:59:52 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:57.150 14:59:52 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:57.150 14:59:52 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:57.150 14:59:52 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:57.150 14:59:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:57.150 No valid GPT data, bailing 00:05:57.150 14:59:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:57.150 14:59:52 -- scripts/common.sh@391 -- # pt= 00:05:57.150 14:59:52 -- scripts/common.sh@392 -- # return 1 00:05:57.150 14:59:52 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:57.150 1+0 records in 00:05:57.150 1+0 records out 00:05:57.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442226 s, 237 MB/s 00:05:57.150 14:59:52 -- spdk/autotest.sh@118 -- # sync 00:05:57.150 14:59:52 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:57.150 14:59:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:57.150 14:59:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:59.095 14:59:53 -- spdk/autotest.sh@124 -- # uname -s 00:05:59.095 14:59:54 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:59.095 14:59:54 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:59.095 14:59:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.095 14:59:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.095 14:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:59.095 ************************************ 00:05:59.095 START TEST setup.sh 00:05:59.095 ************************************ 00:05:59.095 14:59:54 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:59.095 * Looking for test storage... 
00:05:59.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:59.095 14:59:54 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:59.095 14:59:54 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:59.095 14:59:54 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:59.095 14:59:54 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.095 14:59:54 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.095 14:59:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:59.095 ************************************ 00:05:59.095 START TEST acl 00:05:59.095 ************************************ 00:05:59.095 14:59:54 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:59.095 * Looking for test storage... 00:05:59.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:59.095 14:59:54 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:59.095 14:59:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:59.095 14:59:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:59.095 14:59:54 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:59.095 14:59:54 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:59.095 14:59:54 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:59.095 14:59:54 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:59.095 14:59:54 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:59.095 14:59:54 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:59.095 14:59:54 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:59.095 14:59:54 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:59.095 14:59:54 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:59.095 14:59:54 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:59.095 14:59:54 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:59.095 14:59:54 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:59.095 14:59:54 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:59.353 14:59:54 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:59.353 14:59:54 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:59.353 14:59:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:59.353 14:59:54 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:59.353 14:59:54 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:59.353 14:59:54 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:59.920 14:59:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:59.920 14:59:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:59.920 14:59:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:59.920 Hugepages 00:05:59.920 node hugesize free / total 00:05:59.920 14:59:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:59.920 14:59:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:59.920 14:59:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:59.920 00:05:59.920 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:59.920 14:59:55 setup.sh.acl 
-- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:59.920 14:59:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:59.920 14:59:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:59.920 14:59:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:59.920 14:59:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:59.920 14:59:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:59.920 14:59:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:00.180 14:59:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:06:00.180 14:59:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:00.180 14:59:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:00.180 14:59:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:00.180 14:59:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:00.180 14:59:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:00.180 14:59:55 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:06:00.180 14:59:55 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:06:00.180 14:59:55 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.180 14:59:55 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.180 14:59:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:00.180 ************************************ 00:06:00.180 START TEST denied 00:06:00.180 ************************************ 00:06:00.180 14:59:55 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:06:00.180 14:59:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:06:00.180 14:59:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:06:00.180 14:59:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:06:00.180 14:59:55 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:00.180 14:59:55 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:06:01.559 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:06:01.559 14:59:56 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:06:01.559 14:59:56 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:06:01.559 14:59:56 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:06:01.559 14:59:56 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:06:01.559 14:59:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:06:01.559 14:59:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:01.559 14:59:56 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:01.559 14:59:56 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:06:01.559 14:59:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:01.559 14:59:56 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:01.818 00:06:01.818 real 0m1.748s 00:06:01.818 user 0m0.405s 00:06:01.818 sys 0m1.415s 00:06:01.818 14:59:57 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.818 14:59:57 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:06:01.818 ************************************ 00:06:01.818 
END TEST denied 00:06:01.818 ************************************ 00:06:01.818 14:59:57 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:06:01.818 14:59:57 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:06:01.818 14:59:57 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.818 14:59:57 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.818 14:59:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:01.818 ************************************ 00:06:01.818 START TEST allowed 00:06:01.818 ************************************ 00:06:01.818 14:59:57 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:06:01.818 14:59:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:06:01.818 14:59:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:06:01.818 14:59:57 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:06:01.818 14:59:57 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:06:01.818 14:59:57 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:03.195 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:03.195 14:59:58 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:06:03.195 14:59:58 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:06:03.195 14:59:58 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:06:03.195 14:59:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:03.195 14:59:58 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:03.761 00:06:03.761 real 0m1.917s 00:06:03.761 user 0m0.381s 00:06:03.761 sys 0m1.610s 00:06:03.761 14:59:59 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.761 ************************************ 00:06:03.761 14:59:59 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:06:03.761 END TEST allowed 00:06:03.761 ************************************ 00:06:03.761 14:59:59 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:06:03.761 00:06:03.761 real 0m5.066s 00:06:03.761 user 0m1.307s 00:06:03.761 sys 0m3.992s 00:06:03.761 14:59:59 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.761 14:59:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:03.761 ************************************ 00:06:03.761 END TEST acl 00:06:03.761 ************************************ 00:06:04.021 14:59:59 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:04.021 14:59:59 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:04.021 14:59:59 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.021 14:59:59 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.021 14:59:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:04.021 ************************************ 00:06:04.021 START TEST hugepages 00:06:04.021 ************************************ 00:06:04.021 14:59:59 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:04.021 * Looking for test storage... 
00:06:04.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:04.021 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:04.021 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:04.021 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 1795752 kB' 'MemAvailable: 7352480 kB' 'Buffers: 39984 kB' 'Cached: 5618968 kB' 'SwapCached: 0 kB' 'Active: 402764 kB' 'Inactive: 5356036 kB' 'Active(anon): 111196 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356036 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 128760 kB' 'Mapped: 58008 kB' 'Shmem: 2600 kB' 'KReclaimable: 230876 kB' 'Slab: 316764 kB' 'SReclaimable: 230876 kB' 'SUnreclaim: 85888 kB' 'KernelStack: 5056 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4026012 kB' 'Committed_AS: 361396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:06:04.022 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.023 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 
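The long run of "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" / "continue" entries above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time until it reaches Hugepagesize; setup/hugepages.sh then records the default page size (2048 kB on this runner) and the per-size and global nr_hugepages knobs it will drive. A minimal bash sketch of that pattern follows, assuming a standard Linux /proc/meminfo layout; the name get_meminfo_sketch and the simplified read loop are illustrative stand-ins, not the repo's exact implementation (the real helper also handles per-node meminfo files and the xtrace-visible mapfile bookkeeping).

  # Simplified stand-in for the scan shown in the trace above.
  get_meminfo_sketch() {
    local get=$1
    local var val _
    # IFS=': ' splits "Hugepagesize:       2048 kB" into var=Hugepagesize, val=2048.
    while IFS=': ' read -r var val _; do
      if [[ $var == "$get" ]]; then
        echo "$val"
        return 0
      fi
    done </proc/meminfo
    return 1
  }

  # Defaulting step that follows the scan (hugepages.sh@16-18 in the trace above).
  default_hugepages=$(get_meminfo_sketch Hugepagesize)   # in kB; 2048 on this runner
  default_huge_nr=/sys/kernel/mm/hugepages/hugepages-${default_hugepages}kB/nr_hugepages
  global_huge_nr=/proc/sys/vm/nr_hugepages
  echo "default hugepage size: ${default_hugepages} kB -> ${default_huge_nr}"

With a 2048 kB default this yields /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages and /proc/sys/vm/nr_hugepages, matching the paths echoed at hugepages.sh@17-18 in the trace; the subsequent get_nodes/clear_hp steps then zero those counters per NUMA node before the default_setup test re-allocates 1024 pages.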
00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:04.024 14:59:59 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:06:04.024 14:59:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.024 14:59:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.024 14:59:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:04.024 ************************************ 00:06:04.024 START TEST default_setup 00:06:04.024 ************************************ 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:06:04.024 14:59:59 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:04.591 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:06:04.591 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3871248 kB' 'MemAvailable: 9427916 kB' 'Buffers: 39984 kB' 'Cached: 5618976 kB' 'SwapCached: 0 kB' 'Active: 418824 kB' 'Inactive: 5356044 kB' 'Active(anon): 127256 
kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356044 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 144772 kB' 'Mapped: 58012 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316656 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85848 kB' 'KernelStack: 5008 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 378344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.161 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.162 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 
-- # echo 0 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3871000 kB' 'MemAvailable: 9427668 kB' 'Buffers: 39984 kB' 'Cached: 5618976 kB' 'SwapCached: 0 kB' 'Active: 418612 kB' 'Inactive: 5356044 kB' 'Active(anon): 127044 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356044 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 144552 kB' 'Mapped: 58008 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316656 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85848 kB' 'KernelStack: 4960 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 378344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.163 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.164 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.165 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3873100 kB' 'MemAvailable: 9429768 kB' 'Buffers: 39984 kB' 'Cached: 
5618976 kB' 'SwapCached: 0 kB' 'Active: 418596 kB' 'Inactive: 5356044 kB' 'Active(anon): 127028 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356044 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 144572 kB' 'Mapped: 58008 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316656 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85848 kB' 'KernelStack: 4992 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 378344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.166 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.167 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@33 -- # return 0 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:05.168 nr_hugepages=1024 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:05.168 resv_hugepages=0 00:06:05.168 surplus_hugepages=0 00:06:05.168 anon_hugepages=0 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3873100 kB' 'MemAvailable: 9429768 kB' 'Buffers: 39984 kB' 'Cached: 5618976 kB' 'SwapCached: 0 kB' 'Active: 418892 kB' 'Inactive: 5356044 kB' 'Active(anon): 127324 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356044 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 144860 kB' 'Mapped: 58008 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316652 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85844 kB' 'KernelStack: 4992 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 378344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 
15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.168 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 
15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
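The [[ ... ]] / continue pairs traced above are setup/common.sh's get_meminfo helper scanning /proc/meminfo one "Key: value" field at a time, here looking for HugePages_Total. A minimal stand-alone sketch of that parsing loop, reconstructed from this xtrace output rather than copied from the SPDK source (anything the trace does not show is an assumption), would be:

shopt -s extglob  # needed for the "Node +([0-9]) " prefix strip below

# Approximation of setup/common.sh:get_meminfo as seen in the trace above;
# details not visible in the trace are guesses, not the verbatim source.
get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # Node-local counters live under /sys/devices/system/node/node<N>/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node <N> " prefix

    # Scan "Key: value [unit]" pairs; print the value of the requested key
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total    # prints 1024 on this runner
get_meminfo HugePages_Surp 0   # prints 0, read from node0's meminfo

Each non-matching field produces exactly one IFS=': ' / read -r var val _ / continue triple in the xtrace, which is why a single scan of the ~50-field meminfo dump accounts for most of the lines in this part of the log.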
00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:05.169 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
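At this point the helper has returned 1024 for HugePages_Total, and hugepages.sh@110 only has to confirm the bookkeeping: 1024 requested pages plus 0 surplus plus 0 reserved, after which the same expectation is replayed per NUMA node (no_nodes=1 here, so node0 must hold all 1024 pages, which is the "node0=1024 expecting 1024" line printed further down). A compressed sketch of that verification step, reusing the get_meminfo sketch above; the variable names and the error message are assumptions, not the exact hugepages.sh code:

nr_hugepages=1024                     # requested by the default_setup test
surp=$(get_meminfo HugePages_Surp)    # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run

# System-wide check, i.e. the "(( 1024 == nr_hugepages + surp + resv ))" above
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) ||
    echo 'unexpected system-wide hugepage count' >&2

# Per-node check: each online node must account for its share of the pool
for node in /sys/devices/system/node/node[0-9]*; do
    id=${node##*node}
    echo "node$id=$(get_meminfo HugePages_Total "$id") expecting $nr_hugepages"
done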
00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3873496 kB' 'MemUsed: 8372832 kB' 'SwapCached: 0 kB' 'Active: 418780 kB' 'Inactive: 5356044 kB' 'Active(anon): 127212 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356044 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 5658960 kB' 'Mapped: 58008 kB' 'AnonPages: 144512 kB' 'Shmem: 2592 kB' 'KernelStack: 4976 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 230808 kB' 'Slab: 316632 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.170 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:05.171 node0=1024 expecting 1024 00:06:05.171 ************************************ 00:06:05.171 END TEST default_setup 00:06:05.171 ************************************ 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:05.171 00:06:05.171 real 0m1.168s 00:06:05.171 user 0m0.312s 00:06:05.171 sys 0m0.833s 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.171 15:00:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:06:05.430 15:00:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:05.430 15:00:00 setup.sh.hugepages -- 
setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:06:05.430 15:00:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.430 15:00:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.430 15:00:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:05.430 ************************************ 00:06:05.430 START TEST per_node_1G_alloc 00:06:05.430 ************************************ 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:05.430 15:00:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:05.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:06:05.716 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:05.979 15:00:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:06:05.979 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:06:05.979 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:05.979 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:05.979 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:05.979 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:05.979 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:05.979 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:05.980 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:05.980 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:05.980 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:05.980 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:05.980 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.980 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.980 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.980 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.980 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.980 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.980 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.980 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.980 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.980 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 4920032 kB' 'MemAvailable: 10476708 kB' 'Buffers: 39984 kB' 'Cached: 5618976 kB' 'SwapCached: 0 kB' 'Active: 418840 kB' 'Inactive: 5356052 kB' 'Active(anon): 127272 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356052 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 144804 kB' 'Mapped: 58008 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316628 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85820 kB' 'KernelStack: 5008 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598876 kB' 'Committed_AS: 378344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:05.980 
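The printf block above is the snapshot that get_meminfo scans: /proc/meminfo is read into an array with mapfile, any 'Node N ' prefix is stripped so per-node meminfo files can be handled the same way, and each entry is split on ': ' while the helper looks for the requested field (AnonHugePages here). A condensed, standalone sketch of what the traced helper appears to do (setup/common.sh itself carries more bookkeeping):

    shopt -s extglob   # needed for the 'Node +([0-9]) ' prefix strip below
    # Condensed sketch of the get_meminfo behaviour visible in this trace.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # with a node argument, prefer that node's meminfo file if present
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the 'Node N ' prefix on per-node files
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Against the snapshot above, get_meminfo_sketch HugePages_Total would print 512 and get_meminfo_sketch AnonHugePages would print 0, which is the anon=0 result the trace reaches shortly below.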
15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:05.981 15:00:01
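With anon=0 recorded (no transparent hugepages are backing the test), the same scan is repeated for HugePages_Surp. Surplus pages are pages obtained beyond nr_hugepages through the kernel's overcommit mechanism, so 0 is the expected answer right after setup.sh reserved exactly 512 pages. The same counters can be cross-checked in sysfs; the paths below are standard kernel files, with the hugepages-2048kB directory matching this host's Hugepagesize:

    # Pool-wide counters for the 2 MiB hugepage size
    grep -H . /sys/kernel/mm/hugepages/hugepages-2048kB/{nr_hugepages,free_hugepages,surplus_hugepages,resv_hugepages}
    # Per-node view, relevant to the HUGENODE=0 case this test exercises
    grep -H . /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages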
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 4920032 kB' 'MemAvailable: 10476708 kB' 'Buffers: 39984 kB' 'Cached: 5618976 kB' 'SwapCached: 0 kB' 'Active: 418752 kB' 'Inactive: 5356052 kB' 'Active(anon): 127184 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356052 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 144816 kB' 'Mapped: 58008 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316620 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85812 kB' 'KernelStack: 5008 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598876 kB' 'Committed_AS: 377956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.981 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.981 15:00:01 
setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- 
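surp=0 is recorded here and get_meminfo is called a third time for HugePages_Rsvd, the count of pages already promised to a mapping but not yet faulted in. The excerpt does not show the final comparison that verify_nr_hugepages builds from these values, so the snippet below is only an illustration of the kind of check the collected numbers feed into:

    # Illustrative only; the exact comparison in setup/hugepages.sh is not shown in this excerpt.
    expected=512
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    echo "total=${total} surp=${surp} rsvd=${rsvd} expecting ${expected}"
    (( total - surp == expected )) || exit 1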
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 4920284 kB' 'MemAvailable: 10476960 kB' 'Buffers: 39984 kB' 'Cached: 5618976 kB' 'SwapCached: 0 kB' 'Active: 418556 kB' 'Inactive: 5356052 kB' 'Active(anon): 126988 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356052 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 144792 kB' 'Mapped: 58008 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316600 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85792 kB' 'KernelStack: 4976 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598876 kB' 'Committed_AS: 378344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.982 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.983 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:06:05.984 nr_hugepages=512 00:06:05.984 resv_hugepages=0 00:06:05.984 surplus_hugepages=0 00:06:05.984 anon_hugepages=0 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 4920284 kB' 'MemAvailable: 10476960 kB' 'Buffers: 39984 kB' 'Cached: 5618976 kB' 'SwapCached: 0 kB' 'Active: 418596 kB' 'Inactive: 5356052 kB' 'Active(anon): 127028 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356052 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 144836 kB' 'Mapped: 58008 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316600 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85792 kB' 'KernelStack: 4992 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598876 kB' 'Committed_AS: 378344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.984 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.985 15:00:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 4920284 kB' 'MemUsed: 7326044 kB' 'SwapCached: 0 kB' 'Active: 418544 kB' 'Inactive: 5356052 kB' 'Active(anon): 126976 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356052 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 5658960 kB' 'Mapped: 58008 kB' 'AnonPages: 144780 kB' 'Shmem: 2592 kB' 'KernelStack: 4976 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 230808 kB' 'Slab: 316600 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.985 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.986 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.986 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.986 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.986 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.986 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.986 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.986 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:05.986 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.243 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:06.244 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.244 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.244 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.244 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:06.244 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:06.244 node0=512 expecting 512 00:06:06.244 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:06.244 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:06.244 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:06.244 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:06.244 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:06.244 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:06.244 00:06:06.244 real 0m0.786s 00:06:06.244 user 0m0.272s 00:06:06.244 sys 0m0.536s 00:06:06.244 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.244 ************************************ 00:06:06.244 END TEST per_node_1G_alloc 00:06:06.244 ************************************ 00:06:06.244 15:00:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:06.244 15:00:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:06.244 15:00:01 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:06.244 15:00:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.244 15:00:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.244 15:00:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:06.244 ************************************ 00:06:06.244 START TEST even_2G_alloc 00:06:06.244 ************************************ 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:06.244 15:00:01 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:06.244 15:00:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:06.501 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:06:06.501 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:07.071 15:00:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3872248 kB' 'MemAvailable: 9428928 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 419144 kB' 'Inactive: 5356056 kB' 'Active(anon): 127576 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 144856 kB' 'Mapped: 58016 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316628 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85820 kB' 'KernelStack: 5008 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 378520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.071 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 
15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.072 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.073 
15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3872500 kB' 'MemAvailable: 9429176 kB' 'Buffers: 39984 kB' 'Cached: 5618976 kB' 'SwapCached: 0 kB' 'Active: 418728 kB' 'Inactive: 5356052 kB' 'Active(anon): 127160 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356052 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 144708 kB' 'Mapped: 58024 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316628 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85820 kB' 'KernelStack: 4992 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 
'Committed_AS: 378344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.073 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 
15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.074 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.074 
15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:07.075 15:00:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3872500 kB' 'MemAvailable: 9429180 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 418648 kB' 'Inactive: 5356056 kB' 'Active(anon): 127080 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 144924 kB' 'Mapped: 58012 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316620 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85812 kB' 'KernelStack: 4992 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 378344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
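Each of the verify_nr_hugepages queries in this run (AnonHugePages and HugePages_Surp above, HugePages_Rsvd here, HugePages_Total below) goes through the same get_meminfo helper: snapshot the meminfo file into an array, strip any leading 'Node <n> ' prefix, then walk the lines with IFS=': ' until the requested field is found and echo its value. A condensed reconstruction from the trace, not the verbatim setup/common.sh source:

    # sketch: get_meminfo <field> [node] -- reconstructed from the traced commands
    get_meminfo() {
        local get=$1 node=${2:-} var val _ mem
        local mem_f=/proc/meminfo             # or the per-node file, selected as sketched earlier
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # strip "Node N " prefixes; needs shopt -s extglob
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"                   # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

The long runs of 'continue' above are simply this loop skipping every field that does not match the requested one.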
00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 
15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.075 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
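For orientation, the /proc/meminfo snapshots being parsed here already reflect the requested pool: HugePages_Total: 1024 with Hugepagesize: 2048 kB works out to 1024 x 2048 kB = 2,097,152 kB = 2 GiB, exactly the Hugetlb: 2097152 kB figure and the 2G target in the test name. HugePages_Free: 1024 shows the pool is entirely unused at this point, and HugePages_Rsvd and HugePages_Surp are both 0, which is what the checks below end up asserting.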
00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.076 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:07.077 nr_hugepages=1024 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:07.077 resv_hugepages=0 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:07.077 surplus_hugepages=0 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:07.077 anon_hugepages=0 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3872500 
kB' 'MemAvailable: 9429180 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 418528 kB' 'Inactive: 5356056 kB' 'Active(anon): 126960 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 144604 kB' 'Mapped: 58012 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316612 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85804 kB' 'KernelStack: 4992 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 378344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
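(The xtrace records above and below are the per-key scan done by get_meminfo in setup/common.sh: it loads /proc/meminfo, or a node's meminfo file when a node index is given, strips any "Node N " prefix, then reads key/value pairs with IFS=': ' and skips every key until the requested one matches, echoing its value. The following is only a condensed sketch reconstructed from this trace, not the script verbatim; names follow the trace, the loop shape is an assumption.)

    # Sketch of the get_meminfo logic visible in the trace above (reconstruction, not the original file).
    shopt -s extglob                      # needed for the +([0-9]) prefix-strip pattern
    get_meminfo() {                       # usage: get_meminfo HugePages_Total [node]
        local get=$1 node=$2 var val line
        local mem_f=/proc/meminfo mem
        # Prefer the per-node meminfo file when a node index is given and the file exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix present in per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }   # matched key -> print value
        done
        return 1
    }

(Per the trace, get_meminfo HugePages_Rsvd returns the 0 echoed at setup/common.sh@33 above, and get_meminfo HugePages_Total returns the 1024 that hugepages.sh@110 compares against nr_hugepages + surp + resv.)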
00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.077 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.078 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3872500 kB' 'MemUsed: 8373828 kB' 'SwapCached: 0 kB' 'Active: 418552 kB' 'Inactive: 5356056 kB' 'Active(anon): 126984 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 5658964 kB' 'Mapped: 58012 kB' 'AnonPages: 144624 kB' 'Shmem: 2592 kB' 'KernelStack: 4992 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 230808 kB' 'Slab: 316612 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.079 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:07.080 node0=1024 expecting 1024 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:07.080 00:06:07.080 real 0m0.940s 00:06:07.080 user 0m0.247s 00:06:07.080 sys 0m0.738s 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.080 15:00:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:07.080 ************************************ 00:06:07.080 END TEST even_2G_alloc 00:06:07.080 ************************************ 00:06:07.080 15:00:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:07.080 15:00:02 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:06:07.080 15:00:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.080 15:00:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.080 15:00:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:07.080 ************************************ 00:06:07.080 START TEST odd_alloc 00:06:07.080 ************************************ 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:07.080 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:07.081 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:07.081 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:06:07.081 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:07.081 15:00:02 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:07.081 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:07.081 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:06:07.081 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:06:07.081 15:00:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:06:07.081 15:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:07.081 15:00:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:07.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:06:07.647 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3870280 kB' 'MemAvailable: 9426960 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 419136 kB' 'Inactive: 5356056 kB' 'Active(anon): 127568 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 144840 kB' 'Mapped: 58024 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316652 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85844 kB' 'KernelStack: 5008 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 5073564 kB' 'Committed_AS: 378344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.906 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.907 15:00:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.907 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3870532 kB' 'MemAvailable: 9427212 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 418716 kB' 'Inactive: 5356056 kB' 'Active(anon): 127148 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 145024 kB' 'Mapped: 58012 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316652 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85844 kB' 'KernelStack: 5008 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073564 kB' 'Committed_AS: 380680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
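The trace here shows setup/common.sh's get_meminfo() scanning /proc/meminfo entry by entry (mapfile into mem, strip any "Node <n> " prefix, then IFS=': ' read of var/val) until it reaches the requested field, echoing its value and returning. Below is a minimal Bash sketch of that helper reconstructed from the traced commands; the names (get, node, mem_f, mem, var, val) come from the trace, but the control flow is an approximation, not the verbatim setup/common.sh source.

    #!/usr/bin/env bash
    # Hedged reconstruction of the get_meminfo() helper whose execution is traced above.
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node <n> " prefixes

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem

        # With a node argument, prefer that node's meminfo if the sysfs file exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines start with "Node <n> "; strip that prefix for uniform parsing.
        mem=("${mem[@]#Node +([0-9]) }")

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip until the requested field
            echo "${val:-0}"
            return 0
        done
        return 1
    }

    # Usage mirroring the traced calls in this run:
    #   get_meminfo HugePages_Surp    -> 0
    #   get_meminfo HugePages_Total   -> 1025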
00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.170 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 
15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.171 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@33 -- # return 0 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3870532 kB' 'MemAvailable: 9427212 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 418740 kB' 'Inactive: 5356056 kB' 'Active(anon): 127172 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 144972 kB' 'Mapped: 58012 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316632 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85824 kB' 'KernelStack: 4976 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073564 kB' 'Committed_AS: 378344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 
15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.172 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
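With anon, surp and resv all read back as 0, the test next checks that HugePages_Total reported by /proc/meminfo equals nr_hugepages + surp + resv (1025 in this run), as the hugepages.sh@107/@109 lines that follow show. A minimal standalone sketch of that arithmetic, using hypothetical variable names and the get_meminfo() sketch above (the real logic lives in setup/hugepages.sh), would be:

    # Hypothetical standalone form of the odd_alloc accounting check traced below.
    anon=0 surp=0 resv=0 nr_hugepages=1025
    total=$(get_meminfo HugePages_Total)    # reports 1025 in this run
    if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
        echo "odd_alloc accounting consistent: HugePages_Total=$total"
    fi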
00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:08.173 nr_hugepages=1025 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:06:08.173 resv_hugepages=0 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:08.173 surplus_hugepages=0 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:08.173 anon_hugepages=0 00:06:08.173 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3870532 kB' 'MemAvailable: 9427212 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 418624 kB' 'Inactive: 5356056 kB' 'Active(anon): 127056 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 144628 kB' 'Mapped: 58012 kB' 'Shmem: 2592 kB' 'KReclaimable: 230808 kB' 'Slab: 316624 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85816 kB' 'KernelStack: 4992 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 5073564 kB' 'Committed_AS: 378344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.174 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:06:08.175 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- 
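The long run of "[[ key == HugePages_Total ]] / continue" lines above is the xtrace of a simple field lookup over a meminfo snapshot: each line is split on ': ', keys are skipped until the requested one matches, and the value is echoed back (1025 here). The helper below is a minimal stand-alone sketch of that pattern, not the setup/common.sh code itself; the function name meminfo_value and its arguments are illustrative only.

#!/usr/bin/env bash
# Minimal sketch of the lookup the trace above performs: split each meminfo
# line on ': ', skip keys until the requested one matches, print its value.
meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node snapshots prefix every line with "Node N "; strip that so both
    # layouts parse the same way.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

meminfo_value HugePages_Total     # prints 1025 on the VM traced above
meminfo_value HugePages_Surp 0    # surplus huge pages on node 0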
setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3870532 kB' 'MemUsed: 8375796 kB' 'SwapCached: 0 kB' 'Active: 418624 kB' 'Inactive: 5356056 kB' 'Active(anon): 127056 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'FilePages: 5658964 kB' 'Mapped: 58012 kB' 'AnonPages: 144624 kB' 'Shmem: 2592 kB' 'KernelStack: 4992 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 230808 kB' 'Slab: 316624 kB' 'SReclaimable: 230808 kB' 'SUnreclaim: 85816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.176 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:08.177 node0=1025 expecting 1025 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:06:08.177 00:06:08.177 real 0m0.982s 00:06:08.177 user 0m0.302s 00:06:08.177 sys 0m0.726s 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.177 15:00:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:08.177 ************************************ 00:06:08.177 END TEST odd_alloc 00:06:08.177 ************************************ 00:06:08.177 15:00:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:08.177 15:00:03 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:06:08.177 15:00:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.177 15:00:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.177 15:00:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:08.177 ************************************ 00:06:08.177 START TEST custom_alloc 00:06:08.177 ************************************ 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- 
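For reference, the PASS condition odd_alloc reports here ("node0=1025 expecting 1025") can also be reproduced outside the test with the kernel's per-node hugepage counters. The sketch below is an equivalent sysfs check, not the script's own meminfo-based one, and the hugepages-2048kB directory name is an assumption based on the 2048 kB Hugepagesize shown in the trace.

#!/usr/bin/env bash
# Equivalent check via sysfs: every NUMA node should hold the odd page count
# the test asked for (1025 here).
expected=1025
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    actual=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node$node=$actual expecting $expected"
    [[ $actual -eq $expected ]] || exit 1
done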
setup/hugepages.sh@170 -- # nodes_hp=() 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:08.177 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:08.178 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:08.178 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:08.178 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:08.178 15:00:03 
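The arithmetic behind the values above: custom_alloc requests 1048576 kB (1 GiB) of huge pages, and with this VM's default 2048 kB hugepage size that resolves to 512 pages, which the per-node loop then pins to node 0 through HUGENODE. The condensed sketch below reuses the variable names visible in the trace but is not the script itself.

#!/usr/bin/env bash
# 1048576 kB requested / 2048 kB per huge page = 512 pages on this VM.
size_kb=1048576
default_hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
nr_hugepages=$(( size_kb / default_hugepage_kb ))                        # 512

# Distribute the pages per node and join the assignments the way the
# HUGENODE+=(...) lines above do.
declare -a nodes_hp HUGENODE
nodes_hp[0]=$nr_hugepages
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
done
(IFS=,; echo "HUGENODE=${HUGENODE[*]}")    # HUGENODE=nodes_hp[0]=512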
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:08.178 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:06:08.178 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:08.178 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:08.178 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:06:08.178 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:06:08.178 15:00:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:06:08.178 15:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:08.178 15:00:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:08.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:06:08.694 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 4919680 kB' 'MemAvailable: 10476356 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 418132 kB' 'Inactive: 5356056 kB' 'Active(anon): 126564 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 
kB' 'Zswapped: 0 kB' 'Dirty: 68 kB' 'Writeback: 0 kB' 'AnonPages: 143860 kB' 'Mapped: 57132 kB' 'Shmem: 2592 kB' 'KReclaimable: 230804 kB' 'Slab: 316628 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85824 kB' 'KernelStack: 4960 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598876 kB' 'Committed_AS: 369356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.694 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.957 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- 
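What the anon=0 above captures: verify_nr_hugepages only needs to account for transparent huge pages when THP is not disabled, so it first checks /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never") and then reads AnonHugePages, which comes back 0 on this run. The snippet below is an independent sketch of that observable behaviour, not the script's code.

#!/usr/bin/env bash
# Only consult AnonHugePages when transparent huge pages are not disabled;
# with THP in madvise mode and no THP-backed mappings, anon stays 0.
anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # value in kB
fi
echo "anon=$anon"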
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:08.958 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 4919680 kB' 'MemAvailable: 10476356 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 418032 kB' 'Inactive: 5356056 kB' 'Active(anon): 126464 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 68 kB' 'Writeback: 0 kB' 'AnonPages: 143964 kB' 'Mapped: 57132 kB' 'Shmem: 2592 kB' 'KReclaimable: 230804 kB' 'Slab: 316628 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85824 kB' 'KernelStack: 4928 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598876 kB' 'Committed_AS: 369356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.959 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
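The xtrace records above and below are setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key with IFS=': ', hitting "continue" for every field that is not the one requested and echoing the value once the requested field is reached. A minimal sketch of that loop, reconstructed from this trace alone (so the exact wiring and names are an approximation, not the repository source), looks roughly like:

    shopt -s extglob                     # needed for the +([0-9]) pattern below
    # Sketch of the loop traced here (setup/common.sh, get_meminfo); reconstructed
    # from the xtrace output, not copied from the actual script.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # A per-node query (e.g. get_meminfo HugePages_Surp 0) reads that node's file.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan field by field: every non-matching key is the "continue" seen in the
        # trace; the matching key's value is echoed and the helper returns.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

In this run the scans return 0 for AnonHugePages, HugePages_Surp and HugePages_Rsvd, and 512 for HugePages_Total, which is what the "echo 0" / "echo 512" and "return 0" records in the trace show.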
00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.960 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 4919680 kB' 'MemAvailable: 10476356 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 417916 kB' 'Inactive: 5356056 kB' 'Active(anon): 126348 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 68 kB' 'Writeback: 0 kB' 'AnonPages: 143876 kB' 'Mapped: 57132 kB' 'Shmem: 2592 kB' 'KReclaimable: 230804 kB' 'Slab: 316628 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85824 
kB' 'KernelStack: 4928 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598876 kB' 'Committed_AS: 369356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 
15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.961 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:08.962 nr_hugepages=512 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:06:08.962 resv_hugepages=0 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:08.962 surplus_hugepages=0 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:08.962 anon_hugepages=0 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:08.962 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 4919680 kB' 'MemAvailable: 10476356 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 417624 kB' 'Inactive: 5356056 kB' 'Active(anon): 126056 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 68 kB' 'Writeback: 0 kB' 'AnonPages: 143892 kB' 'Mapped: 57132 kB' 'Shmem: 2592 kB' 'KReclaimable: 230804 kB' 'Slab: 316628 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85824 kB' 'KernelStack: 4944 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598876 kB' 'Committed_AS: 369356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 
15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.963 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
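The surrounding setup/hugepages.sh steps (hugepages.sh@97 through @110 in this trace) fold those lookups into a consistency check: the pool reported by the kernel must equal the requested custom allocation plus surplus and reserved pages. A compressed sketch of that check, with values as observed in this run and the orchestration inferred from the trace rather than copied from setup/hugepages.sh:

    nr_hugepages=512                          # requested custom allocation
    anon=$(get_meminfo AnonHugePages)         # -> 0 in this run
    surp=$(get_meminfo HugePages_Surp)        # -> 0
    resv=$(get_meminfo HugePages_Rsvd)        # -> 0
    total=$(get_meminfo HugePages_Total)      # -> 512
    # 512 == 512 + 0 + 0, i.e. the whole pool is the explicit request:
    # 512 pages * 2048 kB (Hugepagesize) = 1048576 kB, matching "Hugetlb" above.
    (( total == nr_hugepages + surp + resv ))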
00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.964 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 4919680 kB' 'MemUsed: 7326648 kB' 'SwapCached: 0 kB' 'Active: 417896 kB' 'Inactive: 5356056 kB' 'Active(anon): 126328 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 68 kB' 'Writeback: 0 kB' 'FilePages: 5658964 kB' 'Mapped: 57132 kB' 'AnonPages: 143892 kB' 'Shmem: 2592 kB' 'KernelStack: 4944 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 230804 kB' 'Slab: 
316628 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
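(Editor's sketch.) The per-node lookup traced above first picks its data source: /proc/meminfo by default, or /sys/devices/system/node/node0/meminfo when a node id is supplied and that file exists, then strips the "Node 0 " prefix that the sysfs file adds to every line before the field scan runs. The sketch below mirrors the mapfile and prefix-strip steps visible in the trace; the function name is hypothetical and extglob is enabled explicitly for the +([0-9]) pattern.

    pick_node_meminfo() {
        # Prefer the per-node sysfs file when a node id is given, else /proc/meminfo.
        local node=$1 mem_f=/proc/meminfo mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        shopt -s extglob                       # required for the +([0-9]) pattern below
        mapfile -t mem < "$mem_f"              # one array element per line
        mem=("${mem[@]#Node +([0-9]) }")       # sysfs lines start with "Node 0 "; drop that prefix
        printf '%s\n' "${mem[@]}"
    }

For example, "pick_node_meminfo 0 | grep HugePages_Surp" would yield "HugePages_Surp: 0", consistent with the node0 dump printed in the trace.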
00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.965 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.966 15:00:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:08.966 node0=512 expecting 512 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:08.966 00:06:08.966 real 0m0.743s 00:06:08.966 user 0m0.258s 00:06:08.966 sys 0m0.532s 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.966 ************************************ 00:06:08.966 END TEST custom_alloc 00:06:08.966 ************************************ 00:06:08.966 15:00:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:08.966 15:00:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:08.966 15:00:04 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:08.966 15:00:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.966 15:00:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.966 15:00:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:08.966 ************************************ 00:06:08.966 START TEST no_shrink_alloc 00:06:08.966 ************************************ 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:08.966 15:00:04 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:08.966 15:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:09.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:06:09.532 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3870964 kB' 'MemAvailable: 9427640 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 417872 kB' 'Inactive: 5356056 kB' 'Active(anon): 126304 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 76 kB' 'Writeback: 0 kB' 'AnonPages: 143888 kB' 'Mapped: 57132 kB' 'Shmem: 2592 kB' 'KReclaimable: 230804 kB' 'Slab: 316652 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85848 kB' 'KernelStack: 4960 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 369484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.794 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:09.795 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3870964 kB' 'MemAvailable: 9427640 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 417872 kB' 'Inactive: 5356056 kB' 'Active(anon): 126304 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 76 kB' 'Writeback: 0 kB' 'AnonPages: 143888 kB' 'Mapped: 57132 kB' 'Shmem: 2592 kB' 'KReclaimable: 230804 kB' 'Slab: 316648 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85844 kB' 'KernelStack: 4944 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 369484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.796 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
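(Editor's sketch.) The verification pass running here combines three readings before comparing totals: the transparent-hugepage setting (the trace compares the string "always [madvise] never" against *[never]*), AnonHugePages when THP is not globally disabled, and HugePages_Surp system-wide and per node. A condensed, standalone illustration of those checks follows; the real script's helpers differ, and the assumption that the THP string comes from the standard sysfs knob is mine.

    # Values shown in comments match what the trace reports for this runner.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)       # e.g. "always [madvise] never"
    if [[ $thp != *'[never]'* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)  # 0 kB in this run
    else
        anon=0
    fi
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0 in this run
    echo "anon=${anon} surp=${surp}"

With surp and resv both 0, the earlier consistency check "(( 512 == nr_hugepages + surp + resv ))" reduces to the plain per-node comparison the custom_alloc test printed as "node0=512 expecting 512".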
00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 
15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- 
# return 0 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3870964 kB' 'MemAvailable: 9427640 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 417608 kB' 'Inactive: 5356056 kB' 'Active(anon): 126040 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 76 kB' 'Writeback: 0 kB' 'AnonPages: 143900 kB' 'Mapped: 57132 kB' 'Shmem: 2592 kB' 'KReclaimable: 230804 kB' 'Slab: 316648 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85844 kB' 'KernelStack: 4928 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 369484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.798 15:00:05 
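The long [[ key == pattern ]] / continue runs above are get_meminfo (setup/common.sh) walking a snapshot of /proc/meminfo key by key with IFS=': ' until the requested field matches, then echoing its value; here HugePages_Surp resolves to 0 and lands in surp. A minimal stand-alone sketch of that lookup, assuming plain /proc/meminfo input and a hypothetical helper name (meminfo_value) rather than the exact setup/common.sh code:

#!/usr/bin/env bash
# Sketch only: echo the value of one meminfo field, e.g. HugePages_Surp -> 0.
meminfo_value() {
    local key=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # per-node files prefix each line with "Node <n> "; the real helper strips that
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done <"$file"
    return 1   # field not present
}

surp=$(meminfo_value HugePages_Surp)   # corresponds to the surp=0 assignment above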
00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:09.797 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo [... no node argument, so the system-wide /proc/meminfo is snapshotted with mapfile and scanned ...]
00:06:09.798 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3870964 kB' 'MemAvailable: 9427640 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 417608 kB' 'Inactive: 5356056 kB' 'Active(anon): 126040 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 76 kB' 'Writeback: 0 kB' 'AnonPages: 143900 kB' 'Mapped: 57132 kB' 'Shmem: 2592 kB' 'KReclaimable: 230804 kB' 'Slab: 316648 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85844 kB' 'KernelStack: 4928 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 369484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB'
[... get_meminfo HugePages_Rsvd: the keys MemTotal through FileHugePages are checked against HugePages_Rsvd with the same IFS=': ' / read -r / continue pattern and skipped ...]
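One figure in the snapshot above can be cross-checked by simple arithmetic: the kernel's Hugetlb line is just the page count times the page size (1024 pages of 2048 kB). A one-line bash check of that relationship:

# 1024 (HugePages_Total) x 2048 kB (Hugepagesize) = the 'Hugetlb: 2097152 kB' line above
echo "$((1024 * 2048)) kB"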
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.799 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.799 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.799 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.799 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.799 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.799 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.799 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.799 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.799 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.799 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:09.800 nr_hugepages=1024 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:09.800 resv_hugepages=0 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:09.800 surplus_hugepages=0 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:09.800 anon_hugepages=0 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3871588 kB' 'MemAvailable: 9428264 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 417680 kB' 'Inactive: 5356056 kB' 'Active(anon): 126112 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 76 kB' 'Writeback: 0 kB' 'AnonPages: 143668 kB' 'Mapped: 57132 kB' 'Shmem: 2592 kB' 'KReclaimable: 230804 kB' 'Slab: 316648 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85844 kB' 'KernelStack: 4944 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 369484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.800 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ [... get_meminfo HugePages_Total: the keys Cached through Unaccepted are checked against HugePages_Total with the same IFS=': ' / read -r / continue pattern and skipped ...] 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
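The trace that follows re-reads HugePages_Total (1024), repeats the (( 1024 == nr_hugepages + surp + resv )) consistency check seen earlier, and then walks /sys/devices/system/node/node0/meminfo to attribute the pages per NUMA node. Condensed into a stand-alone sketch (hypothetical script, not the setup/hugepages.sh source), the bookkeeping the no_shrink_alloc test verifies amounts to:

#!/usr/bin/env bash
# Sketch only: confirm the hugepage pool did not shrink behind the test's back.
nr_hugepages=1024                                              # what the test configured earlier
get() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }   # local helper for this sketch

surp=$(get HugePages_Surp)
resv=$(get HugePages_Rsvd)
total=$(get HugePages_Total)
if (( total != nr_hugepages + surp + resv )); then
    echo "hugepage pool changed: $total != $nr_hugepages + $surp + $resv" >&2
    exit 1
fi

# Per-NUMA-node view, mirroring the /sys/devices/system/node/nodeN/meminfo reads below
# (per-node lines look like "Node 0 HugePages_Surp: 0", hence fields $3 and $4).
for f in /sys/devices/system/node/node[0-9]*/meminfo; do
    node=${f%/meminfo}; node=${node##*node}
    echo "node$node HugePages_Surp: $(awk '$3 == "HugePages_Surp:" {print $4}' "$f")"
done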
setup/common.sh@31 -- # read -r var val _ 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.060 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3871588 kB' 'MemUsed: 8374740 kB' 'SwapCached: 0 kB' 'Active: 417676 kB' 'Inactive: 5356056 kB' 'Active(anon): 126108 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 76 kB' 'Writeback: 0 kB' 'FilePages: 5658964 kB' 'Mapped: 57132 kB' 'AnonPages: 143668 kB' 'Shmem: 2592 kB' 'KernelStack: 4928 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 230804 kB' 'Slab: 316648 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 
15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.061 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.062 15:00:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:10.062 node0=1024 expecting 1024 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:10.062 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:10.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:06:10.323 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:10.323 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:10.323 15:00:05 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.323 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3868332 kB' 'MemAvailable: 9425008 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 418136 kB' 'Inactive: 5356056 kB' 'Active(anon): 126568 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 84 kB' 'Writeback: 0 kB' 'AnonPages: 144384 kB' 'Mapped: 57176 kB' 'Shmem: 2592 kB' 'KReclaimable: 230804 kB' 'Slab: 316652 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85848 kB' 'KernelStack: 4976 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 369484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3868592 kB' 'MemAvailable: 9425268 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 418180 kB' 'Inactive: 5356056 kB' 'Active(anon): 126612 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 84 kB' 'Writeback: 0 kB' 'AnonPages: 144188 kB' 'Mapped: 57172 kB' 'Shmem: 2592 kB' 'KReclaimable: 230804 kB' 'Slab: 316652 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85848 kB' 'KernelStack: 4992 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 369484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:10.324 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 
15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 
15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3868732 kB' 'MemAvailable: 9425408 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 417812 kB' 'Inactive: 5356056 kB' 'Active(anon): 126244 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 84 kB' 'Writeback: 0 kB' 'AnonPages: 143812 kB' 'Mapped: 57132 kB' 'Shmem: 2592 kB' 'KReclaimable: 230804 kB' 'Slab: 316652 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85848 kB' 'KernelStack: 4928 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 369484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 
4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.325 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 
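The /proc/meminfo snapshot printed just above already contains the hugepage state this test cares about: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0 and Hugepagesize: 2048 kB, so the Hugetlb: 2097152 kB figure is simply 1024 * 2048 kB. A small stand-alone check of that relation (not part of the SPDK scripts, and only meaningful while a single hugepage size is in use, as here) could look like:

awk '
  /^HugePages_Total:/ { n  = $2 }   # number of pages of the default size
  /^Hugepagesize:/    { sz = $2 }   # default hugepage size in kB
  /^Hugetlb:/         { tl = $2 }   # total memory consumed by hugepages, all sizes
  END {
      if (n * sz == tl)
          print "Hugetlb matches HugePages_Total * Hugepagesize: " tl " kB"
      else
          print "mismatch - another hugepage size may be in use"
  }' /proc/meminfo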
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.326 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.327 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.327 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.327 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:10.588 nr_hugepages=1024 00:06:10.588 resv_hugepages=0 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:10.588 surplus_hugepages=0 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:10.588 anon_hugepages=0 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:10.588 15:00:05 
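At this point both lookups have been answered: HugePages_Surp and HugePages_Rsvd are 0, so the script reports nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0. The long runs of "continue" above are just get_meminfo walking every /proc/meminfo field until the requested key matches. A condensed sketch of that lookup, reading the file directly instead of going through the mapfile the traced script uses:

# Minimal stand-in for the lookup traced above; assumes the usual
# "Key:   value kB" layout of /proc/meminfo and the same IFS=': ' split.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching field produces one of the "continue" lines seen in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0 in this run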
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3868732 kB' 'MemAvailable: 9425408 kB' 'Buffers: 39984 kB' 'Cached: 5618980 kB' 'SwapCached: 0 kB' 'Active: 417672 kB' 'Inactive: 5356056 kB' 'Active(anon): 126104 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 84 kB' 'Writeback: 0 kB' 'AnonPages: 143664 kB' 'Mapped: 57132 kB' 'Shmem: 2592 kB' 'KReclaimable: 230804 kB' 'Slab: 316640 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85836 kB' 'KernelStack: 4944 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074588 kB' 'Committed_AS: 369484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 4034560 kB' 'DirectMap1G: 10485760 kB' 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.588 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 
15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.589 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 
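With HugePages_Total read back as 1024, the trace moves on to the consistency check (( 1024 == nr_hugepages + surp + resv )) and to get_nodes, which builds one entry per /sys/devices/system/node/nodeN directory (a single node on this VM, so no_nodes=1) before folding the reserved count into the expected per-node totals. A stand-alone sketch of that accounting, with hypothetical variable names rather than the script's own nodes_sys/nodes_test arrays, and a plain glob instead of the extglob pattern seen in the trace:

nr_hugepages=1024                                          # what the test requested
surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo) # 0 above
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo) # 0 above
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
else
    echo "unexpected hugepage state" >&2
fi

# One bucket per NUMA node; this VM has a single node, so only node0 shows up.
shopt -s nullglob
declare -a expected_per_node
for node in /sys/devices/system/node/node[0-9]*; do
    expected_per_node[${node##*node}]=$(( total + resv ))
done
echo "no_nodes=${#expected_per_node[@]}"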
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246328 kB' 'MemFree: 3869148 kB' 'MemUsed: 8377180 kB' 'SwapCached: 0 kB' 'Active: 417636 kB' 'Inactive: 5356056 kB' 'Active(anon): 126068 kB' 'Inactive(anon): 0 kB' 'Active(file): 291568 kB' 'Inactive(file): 5356056 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 84 kB' 'Writeback: 0 kB' 'FilePages: 5658964 kB' 'Mapped: 57132 kB' 'AnonPages: 143828 kB' 'Shmem: 2592 kB' 'KernelStack: 4928 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 230804 kB' 'Slab: 316640 kB' 'SReclaimable: 230804 kB' 'SUnreclaim: 85836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.590 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 
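The per-node lookup that has just started uses /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix and a slightly different field set (MemUsed, FilePages) than /proc/meminfo, which is why the trace strips that prefix before rerunning the same key scan. A minimal sketch of the node-level read, using the fixed node number instead of the extglob pattern the traced script applies:

node=0
mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node $node }")        # drop the "Node 0 " prefix from every line

for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == HugePages_Surp ]] || continue
    echo "node$node HugePages_Surp: $val"   # expected to be 0 in this run
    break
done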
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.591 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:10.592 node0=1024 expecting 1024 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:10.592 00:06:10.592 real 0m1.495s 00:06:10.592 user 0m0.519s 00:06:10.592 sys 0m1.075s 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.592 15:00:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:10.592 ************************************ 00:06:10.592 END TEST no_shrink_alloc 00:06:10.592 ************************************ 00:06:10.592 15:00:05 setup.sh.hugepages -- 
common/autotest_common.sh@1142 -- # return 0 00:06:10.592 15:00:05 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:06:10.592 15:00:05 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:10.592 15:00:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:10.592 15:00:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:10.592 15:00:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:10.592 15:00:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:10.592 15:00:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:10.592 15:00:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:10.592 15:00:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:10.592 00:06:10.592 real 0m6.629s 00:06:10.592 user 0m2.077s 00:06:10.592 sys 0m4.806s 00:06:10.592 15:00:05 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.592 15:00:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:10.592 ************************************ 00:06:10.592 END TEST hugepages 00:06:10.592 ************************************ 00:06:10.592 15:00:05 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:10.592 15:00:05 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:10.592 15:00:05 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.592 15:00:05 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.592 15:00:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:10.592 ************************************ 00:06:10.592 START TEST driver 00:06:10.592 ************************************ 00:06:10.592 15:00:05 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:10.861 * Looking for test storage... 
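The long hugepages trace above is setup/common.sh walking a `read -r var val _` loop over every meminfo key until it reaches HugePages_Surp (0 surplus pages on this node). A minimal sketch of that lookup pattern, with an illustrative helper name and plain /proc/meminfo as the input (the real script also handles the per-node meminfo files):

get_meminfo_value() {
  # Illustrative re-implementation of the pattern traced above, not the
  # script's own helper: print the value of one /proc/meminfo key.
  local target=$1 file=${2:-/proc/meminfo}
  local var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$target" ]] || continue   # skip every non-matching key, as in the trace
    echo "$val"
    return 0
  done < "$file"
  return 1
}
# e.g. get_meminfo_value HugePages_Surp   -> 0 on this run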
00:06:10.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:10.861 15:00:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:06:10.861 15:00:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:10.861 15:00:06 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:11.426 15:00:06 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:11.426 15:00:06 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.426 15:00:06 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.426 15:00:06 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:11.426 ************************************ 00:06:11.426 START TEST guess_driver 00:06:11.426 ************************************ 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.0-36-generic/kernel/drivers/uio/uio.ko.zst 00:06:11.426 insmod /lib/modules/6.8.0-36-generic/kernel/drivers/uio/uio_pci_generic.ko.zst == *\.\k\o* ]] 00:06:11.426 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:06:11.427 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:06:11.427 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:11.427 Looking for driver=uio_pci_generic 00:06:11.427 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:06:11.427 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:11.427 15:00:06 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
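The driver pick traced above boils down to: use vfio when IOMMU groups exist (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic if modprobe can resolve its module files. A condensed, illustrative sketch of that decision (the function name and the vfio-pci string are assumptions of this sketch, not lifted from driver.sh):

pick_driver_sketch() {
  # Count IOMMU groups without relying on nullglob.
  local groups unsafe=""
  groups=$(compgen -G '/sys/kernel/iommu_groups/*' | wc -l)
  if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
    unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
  fi
  if ((groups > 0)) || [[ $unsafe == Y ]]; then
    echo vfio-pci
  elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
    # modprobe resolved real .ko(.zst) files, so the module is loadable.
    echo uio_pci_generic
  else
    echo 'No valid driver found'
  fi
}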
00:06:11.427 15:00:06 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:11.427 15:00:06 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:11.684 15:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:06:11.685 15:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:06:11.685 15:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:11.943 15:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:11.943 15:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:11.943 15:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:12.879 15:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:12.879 15:00:07 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:12.879 15:00:07 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:12.879 15:00:07 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:13.136 00:06:13.136 real 0m1.932s 00:06:13.136 user 0m0.370s 00:06:13.136 sys 0m1.635s 00:06:13.136 15:00:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.136 15:00:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:13.136 ************************************ 00:06:13.136 END TEST guess_driver 00:06:13.136 ************************************ 00:06:13.393 15:00:08 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:06:13.393 00:06:13.393 real 0m2.651s 00:06:13.393 user 0m0.612s 00:06:13.393 sys 0m2.187s 00:06:13.393 15:00:08 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.393 15:00:08 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:13.393 ************************************ 00:06:13.393 END TEST driver 00:06:13.393 ************************************ 00:06:13.393 15:00:08 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:13.393 15:00:08 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:13.393 15:00:08 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.393 15:00:08 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.393 15:00:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:13.393 ************************************ 00:06:13.393 START TEST devices 00:06:13.393 ************************************ 00:06:13.393 15:00:08 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:13.393 * Looking for test storage... 
00:06:13.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:13.393 15:00:08 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:13.393 15:00:08 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:13.393 15:00:08 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:13.393 15:00:08 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:13.959 15:00:09 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:13.959 15:00:09 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:13.959 15:00:09 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:13.959 15:00:09 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:13.959 15:00:09 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:13.959 15:00:09 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:13.959 15:00:09 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:13.959 15:00:09 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:13.959 15:00:09 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:13.959 15:00:09 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:13.959 No valid GPT data, bailing 00:06:13.959 15:00:09 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:13.959 15:00:09 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:13.959 15:00:09 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:13.959 15:00:09 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:13.959 15:00:09 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:13.959 15:00:09 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:13.959 15:00:09 setup.sh.devices -- 
setup/devices.sh@209 -- # (( 1 > 0 )) 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:13.959 15:00:09 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:13.959 15:00:09 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.959 15:00:09 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.959 15:00:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:13.959 ************************************ 00:06:13.959 START TEST nvme_mount 00:06:13.959 ************************************ 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:13.959 15:00:09 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:15.334 Creating new GPT entries in memory. 00:06:15.334 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:15.334 other utilities. 00:06:15.334 15:00:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:15.334 15:00:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:15.334 15:00:10 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:15.334 15:00:10 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:15.334 15:00:10 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:16.267 Creating new GPT entries in memory. 
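For reference, the wipe-and-repartition step being traced here reduces to two sgdisk calls (device name and sector range copied from this run):

disk=/dev/nvme0n1                                  # test disk used in this run
sgdisk "$disk" --zap-all                           # destroy any existing GPT/MBR
# The harness waits for the resulting partition uevent via
# scripts/sync_dev_uevents.sh; flock serializes the table write against
# anything else touching the disk.
flock "$disk" sgdisk "$disk" --new=1:2048:264191   # partition 1, sectors 2048-264191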
00:06:16.267 The operation has completed successfully. 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 74617 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:16.267 15:00:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:16.529 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:16.529 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:16.529 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:16.529 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.529 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:16.529 15:00:11 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.529 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:16.529 15:00:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.468 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:17.468 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:17.468 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:17.468 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:17.468 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:17.468 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:17.468 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:17.468 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:17.468 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:17.468 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:17.468 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:17.468 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:17.468 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:17.727 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:17.727 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:17.727 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:17.727 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:17.727 15:00:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:17.984 15:00:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:17.984 15:00:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:17.984 15:00:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:17.984 15:00:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.984 15:00:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:17.984 15:00:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.984 15:00:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:17.984 15:00:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 
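The mkfs/mount helper exercised above does roughly the following (paths taken from this run; error handling simplified):

dev=/dev/nvme0n1p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
mkdir -p "$mnt"
[[ -e $dev ]] || exit 1          # bail out if the partition never appeared
mkfs.ext4 -qF "$dev"             # quiet + force: this is a scratch partition
mount "$dev" "$mnt"
touch "$mnt/test_nvme"           # dummy file the later verify step looks for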
00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:18.924 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:18.925 15:00:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:18.925 15:00:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:19.182 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:19.183 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:19.183 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:19.183 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.183 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:19.183 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.183 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:19.183 15:00:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.122 15:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:20.122 15:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:20.122 15:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:20.122 15:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:20.122 15:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:20.122 15:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:20.122 15:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:20.122 15:00:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:20.122 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:20.122 00:06:20.122 real 0m6.085s 00:06:20.122 user 0m0.532s 00:06:20.122 sys 0m3.375s 00:06:20.122 15:00:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.122 15:00:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:20.122 ************************************ 00:06:20.122 END TEST nvme_mount 00:06:20.122 ************************************ 00:06:20.122 15:00:15 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:06:20.122 15:00:15 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:20.122 15:00:15 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.122 15:00:15 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.122 15:00:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:20.122 ************************************ 00:06:20.122 START TEST dm_mount 00:06:20.122 
************************************ 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:20.122 15:00:15 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:21.515 Creating new GPT entries in memory. 00:06:21.515 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:21.515 other utilities. 00:06:21.515 15:00:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:21.515 15:00:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:21.515 15:00:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:21.515 15:00:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:21.515 15:00:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:22.449 Creating new GPT entries in memory. 00:06:22.449 The operation has completed successfully. 00:06:22.449 15:00:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:22.449 15:00:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:22.449 15:00:17 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:06:22.449 15:00:17 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:22.449 15:00:17 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:23.384 The operation has completed successfully. 00:06:23.384 15:00:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:23.384 15:00:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:23.384 15:00:18 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 75048 00:06:23.384 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:23.384 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:23.384 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:23.384 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:23.384 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:23.384 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:23.385 15:00:18 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:23.644 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:23.644 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:23.644 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:23.644 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.644 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:23.644 15:00:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.644 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:23.644 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:24.580 
15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:24.580 15:00:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:24.838 15:00:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:24.838 15:00:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:24.838 15:00:20 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:24.838 15:00:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.839 15:00:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:24.839 15:00:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.839 15:00:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:24.839 15:00:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.774 15:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:25.774 15:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:25.774 15:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:25.774 15:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:25.774 15:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:25.774 15:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:25.774 15:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:25.774 15:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:25.774 15:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:25.774 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:25.774 15:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:25.774 15:00:21 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:25.774 00:06:25.774 real 0m5.590s 00:06:25.774 user 0m0.351s 00:06:25.774 sys 0m2.235s 00:06:25.774 15:00:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.774 ************************************ 00:06:25.774 END TEST dm_mount 00:06:25.774 ************************************ 00:06:25.774 15:00:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:25.774 15:00:21 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:06:25.774 15:00:21 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:25.774 15:00:21 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:25.774 15:00:21 setup.sh.devices -- 
setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:25.774 15:00:21 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:25.774 15:00:21 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:25.774 15:00:21 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:25.774 15:00:21 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:26.033 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:26.033 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:26.033 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:26.033 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:26.033 15:00:21 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:26.033 15:00:21 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:26.033 15:00:21 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:26.033 15:00:21 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:26.033 15:00:21 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:26.033 15:00:21 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:26.033 15:00:21 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:26.033 ************************************ 00:06:26.033 END TEST devices 00:06:26.033 ************************************ 00:06:26.033 00:06:26.033 real 0m12.793s 00:06:26.033 user 0m1.222s 00:06:26.033 sys 0m6.171s 00:06:26.033 15:00:21 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.033 15:00:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:26.301 15:00:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:26.301 ************************************ 00:06:26.301 END TEST setup.sh 00:06:26.301 ************************************ 00:06:26.301 00:06:26.301 real 0m27.448s 00:06:26.301 user 0m5.325s 00:06:26.301 sys 0m17.369s 00:06:26.301 15:00:21 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.301 15:00:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:26.301 15:00:21 -- common/autotest_common.sh@1142 -- # return 0 00:06:26.301 15:00:21 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:26.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:06:26.588 Hugepages 00:06:26.588 node hugesize free / total 00:06:26.588 node0 1048576kB 0 / 0 00:06:26.588 node0 2048kB 2048 / 2048 00:06:26.588 00:06:26.588 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:26.846 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:26.846 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:26.846 15:00:22 -- spdk/autotest.sh@130 -- # uname -s 00:06:26.846 15:00:22 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:26.846 15:00:22 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:26.846 15:00:22 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:27.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:06:27.423 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:28.362 15:00:23 -- 
common/autotest_common.sh@1532 -- # sleep 1 00:06:29.301 15:00:24 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:29.301 15:00:24 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:29.301 15:00:24 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:29.301 15:00:24 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:29.301 15:00:24 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:29.301 15:00:24 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:29.301 15:00:24 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:29.301 15:00:24 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:29.301 15:00:24 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:29.301 15:00:24 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:29.301 15:00:24 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:06:29.301 15:00:24 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:29.559 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:06:29.559 Waiting for block devices as requested 00:06:29.818 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:29.818 15:00:25 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:29.818 15:00:25 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:29.818 15:00:25 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:29.818 15:00:25 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:06:29.818 15:00:25 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:06:29.818 15:00:25 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:06:29.818 15:00:25 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:06:29.818 15:00:25 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:29.818 15:00:25 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:29.818 15:00:25 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:29.818 15:00:25 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:29.818 15:00:25 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:29.818 15:00:25 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:29.818 15:00:25 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:29.818 15:00:25 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:29.818 15:00:25 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:29.818 15:00:25 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:29.818 15:00:25 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:29.818 15:00:25 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:29.818 15:00:25 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:29.818 15:00:25 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:29.818 15:00:25 -- common/autotest_common.sh@1557 -- # continue 00:06:29.818 15:00:25 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:29.818 15:00:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:29.818 15:00:25 -- common/autotest_common.sh@10 -- # set +x 00:06:29.818 15:00:25 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:29.818 15:00:25 -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:29.818 15:00:25 -- common/autotest_common.sh@10 -- # set +x 00:06:29.818 15:00:25 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:30.385 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:06:30.385 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:31.324 15:00:26 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:31.324 15:00:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.324 15:00:26 -- common/autotest_common.sh@10 -- # set +x 00:06:31.324 15:00:26 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:31.324 15:00:26 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:31.324 15:00:26 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:31.324 15:00:26 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:31.324 15:00:26 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:31.324 15:00:26 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:31.324 15:00:26 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:31.324 15:00:26 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:31.324 15:00:26 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:31.324 15:00:26 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:31.324 15:00:26 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:31.324 15:00:26 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:31.324 15:00:26 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:06:31.324 15:00:26 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:31.324 15:00:26 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:31.324 15:00:26 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:31.324 15:00:26 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:31.324 15:00:26 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:31.324 15:00:26 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:31.324 15:00:26 -- common/autotest_common.sh@1593 -- # return 0 00:06:31.324 15:00:26 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:06:31.324 15:00:26 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:31.324 15:00:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.324 15:00:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.324 15:00:26 -- common/autotest_common.sh@10 -- # set +x 00:06:31.324 ************************************ 00:06:31.324 START TEST unittest 00:06:31.324 ************************************ 00:06:31.324 15:00:26 unittest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:31.324 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:31.324 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:06:31.324 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:06:31.324 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:31.324 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
00:06:31.324 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:31.324 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:31.324 ++ rpc_py=rpc_cmd 00:06:31.324 ++ set -e 00:06:31.324 ++ shopt -s nullglob 00:06:31.324 ++ shopt -s extglob 00:06:31.324 ++ shopt -s inherit_errexit 00:06:31.324 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:31.324 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:31.324 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:31.324 +++ CONFIG_WPDK_DIR= 00:06:31.324 +++ CONFIG_ASAN=y 00:06:31.324 +++ CONFIG_VBDEV_COMPRESS=n 00:06:31.324 +++ CONFIG_HAVE_EXECINFO_H=y 00:06:31.324 +++ CONFIG_USDT=n 00:06:31.324 +++ CONFIG_CUSTOMOCF=n 00:06:31.324 +++ CONFIG_PREFIX=/usr/local 00:06:31.324 +++ CONFIG_RBD=n 00:06:31.324 +++ CONFIG_LIBDIR= 00:06:31.324 +++ CONFIG_IDXD=y 00:06:31.324 +++ CONFIG_NVME_CUSE=y 00:06:31.324 +++ CONFIG_SMA=n 00:06:31.324 +++ CONFIG_VTUNE=n 00:06:31.324 +++ CONFIG_TSAN=n 00:06:31.324 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:31.324 +++ CONFIG_VFIO_USER_DIR= 00:06:31.324 +++ CONFIG_PGO_CAPTURE=n 00:06:31.324 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:31.324 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:31.324 +++ CONFIG_LTO=n 00:06:31.324 +++ CONFIG_ISCSI_INITIATOR=y 00:06:31.324 +++ CONFIG_CET=n 00:06:31.324 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:31.324 +++ CONFIG_OCF_PATH= 00:06:31.324 +++ CONFIG_RDMA_SET_TOS=y 00:06:31.324 +++ CONFIG_HAVE_ARC4RANDOM=y 00:06:31.324 +++ CONFIG_HAVE_LIBARCHIVE=n 00:06:31.324 +++ CONFIG_UBLK=y 00:06:31.324 +++ CONFIG_ISAL_CRYPTO=y 00:06:31.324 +++ CONFIG_OPENSSL_PATH= 00:06:31.324 +++ CONFIG_OCF=n 00:06:31.324 +++ CONFIG_FUSE=n 00:06:31.324 +++ CONFIG_VTUNE_DIR= 00:06:31.324 +++ CONFIG_FUZZER_LIB= 00:06:31.324 +++ CONFIG_FUZZER=n 00:06:31.324 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:06:31.324 +++ CONFIG_CRYPTO=n 00:06:31.324 +++ CONFIG_PGO_USE=n 00:06:31.324 +++ CONFIG_VHOST=y 00:06:31.324 +++ CONFIG_DAOS=n 00:06:31.324 +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:06:31.324 +++ CONFIG_DAOS_DIR= 00:06:31.324 +++ CONFIG_UNIT_TESTS=y 00:06:31.324 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:31.324 +++ CONFIG_VIRTIO=y 00:06:31.324 +++ CONFIG_DPDK_UADK=n 00:06:31.324 +++ CONFIG_COVERAGE=y 00:06:31.324 +++ CONFIG_RDMA=y 00:06:31.324 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:31.324 +++ CONFIG_URING_PATH= 00:06:31.324 +++ CONFIG_XNVME=n 00:06:31.324 +++ CONFIG_VFIO_USER=n 00:06:31.324 +++ CONFIG_ARCH=native 00:06:31.324 +++ CONFIG_HAVE_EVP_MAC=y 00:06:31.324 +++ CONFIG_URING_ZNS=n 00:06:31.324 +++ CONFIG_WERROR=y 00:06:31.324 +++ CONFIG_HAVE_LIBBSD=n 00:06:31.324 +++ CONFIG_UBSAN=y 00:06:31.324 +++ CONFIG_IPSEC_MB_DIR= 00:06:31.324 +++ CONFIG_GOLANG=n 00:06:31.324 +++ CONFIG_ISAL=y 00:06:31.324 +++ CONFIG_IDXD_KERNEL=y 00:06:31.324 +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:31.324 +++ CONFIG_RDMA_PROV=verbs 00:06:31.324 +++ CONFIG_APPS=y 00:06:31.324 +++ CONFIG_SHARED=n 00:06:31.324 +++ CONFIG_HAVE_KEYUTILS=y 00:06:31.324 +++ CONFIG_FC_PATH= 00:06:31.324 +++ CONFIG_DPDK_PKG_CONFIG=n 00:06:31.324 +++ CONFIG_FC=n 00:06:31.324 +++ CONFIG_AVAHI=n 00:06:31.324 +++ CONFIG_FIO_PLUGIN=y 00:06:31.324 +++ CONFIG_RAID5F=y 00:06:31.324 +++ CONFIG_EXAMPLES=y 00:06:31.324 +++ CONFIG_TESTS=y 00:06:31.324 +++ CONFIG_CRYPTO_MLX5=n 00:06:31.324 +++ CONFIG_MAX_LCORES=128 00:06:31.324 +++ CONFIG_IPSEC_MB=n 00:06:31.324 +++ CONFIG_PGO_DIR= 00:06:31.324 +++ CONFIG_DEBUG=y 00:06:31.324 +++ 
CONFIG_DPDK_COMPRESSDEV=n 00:06:31.324 +++ CONFIG_CROSS_PREFIX= 00:06:31.324 +++ CONFIG_URING=n 00:06:31.324 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:31.324 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:31.324 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:31.324 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:31.324 +++ _root=/home/vagrant/spdk_repo/spdk 00:06:31.324 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:31.324 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:31.324 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:31.324 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:31.324 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:31.324 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:31.324 +++ VHOST_APP=("$_app_dir/vhost") 00:06:31.324 +++ DD_APP=("$_app_dir/spdk_dd") 00:06:31.324 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:06:31.325 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:31.325 +++ [[ #ifndef SPDK_CONFIG_H 00:06:31.325 #define SPDK_CONFIG_H 00:06:31.325 #define SPDK_CONFIG_APPS 1 00:06:31.325 #define SPDK_CONFIG_ARCH native 00:06:31.325 #define SPDK_CONFIG_ASAN 1 00:06:31.325 #undef SPDK_CONFIG_AVAHI 00:06:31.325 #undef SPDK_CONFIG_CET 00:06:31.325 #define SPDK_CONFIG_COVERAGE 1 00:06:31.325 #define SPDK_CONFIG_CROSS_PREFIX 00:06:31.325 #undef SPDK_CONFIG_CRYPTO 00:06:31.325 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:31.325 #undef SPDK_CONFIG_CUSTOMOCF 00:06:31.325 #undef SPDK_CONFIG_DAOS 00:06:31.325 #define SPDK_CONFIG_DAOS_DIR 00:06:31.325 #define SPDK_CONFIG_DEBUG 1 00:06:31.325 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:31.325 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:06:31.325 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:06:31.325 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:06:31.325 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:31.325 #undef SPDK_CONFIG_DPDK_UADK 00:06:31.325 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:31.325 #define SPDK_CONFIG_EXAMPLES 1 00:06:31.325 #undef SPDK_CONFIG_FC 00:06:31.325 #define SPDK_CONFIG_FC_PATH 00:06:31.325 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:31.325 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:31.325 #undef SPDK_CONFIG_FUSE 00:06:31.325 #undef SPDK_CONFIG_FUZZER 00:06:31.325 #define SPDK_CONFIG_FUZZER_LIB 00:06:31.325 #undef SPDK_CONFIG_GOLANG 00:06:31.325 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:31.325 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:31.325 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:31.325 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:31.325 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:31.325 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:31.325 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:31.325 #define SPDK_CONFIG_IDXD 1 00:06:31.325 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:31.325 #undef SPDK_CONFIG_IPSEC_MB 00:06:31.325 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:31.325 #define SPDK_CONFIG_ISAL 1 00:06:31.325 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:31.325 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:31.325 #define SPDK_CONFIG_LIBDIR 00:06:31.325 #undef SPDK_CONFIG_LTO 00:06:31.325 #define SPDK_CONFIG_MAX_LCORES 128 00:06:31.325 #define SPDK_CONFIG_NVME_CUSE 1 00:06:31.325 #undef SPDK_CONFIG_OCF 00:06:31.325 #define SPDK_CONFIG_OCF_PATH 00:06:31.325 #define SPDK_CONFIG_OPENSSL_PATH 00:06:31.325 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:31.325 #define 
SPDK_CONFIG_PGO_DIR 00:06:31.325 #undef SPDK_CONFIG_PGO_USE 00:06:31.325 #define SPDK_CONFIG_PREFIX /usr/local 00:06:31.325 #define SPDK_CONFIG_RAID5F 1 00:06:31.325 #undef SPDK_CONFIG_RBD 00:06:31.325 #define SPDK_CONFIG_RDMA 1 00:06:31.325 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:31.325 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:31.325 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:31.325 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:31.325 #undef SPDK_CONFIG_SHARED 00:06:31.325 #undef SPDK_CONFIG_SMA 00:06:31.325 #define SPDK_CONFIG_TESTS 1 00:06:31.325 #undef SPDK_CONFIG_TSAN 00:06:31.325 #define SPDK_CONFIG_UBLK 1 00:06:31.325 #define SPDK_CONFIG_UBSAN 1 00:06:31.325 #define SPDK_CONFIG_UNIT_TESTS 1 00:06:31.325 #undef SPDK_CONFIG_URING 00:06:31.325 #define SPDK_CONFIG_URING_PATH 00:06:31.325 #undef SPDK_CONFIG_URING_ZNS 00:06:31.325 #undef SPDK_CONFIG_USDT 00:06:31.325 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:31.325 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:31.325 #undef SPDK_CONFIG_VFIO_USER 00:06:31.325 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:31.325 #define SPDK_CONFIG_VHOST 1 00:06:31.325 #define SPDK_CONFIG_VIRTIO 1 00:06:31.325 #undef SPDK_CONFIG_VTUNE 00:06:31.325 #define SPDK_CONFIG_VTUNE_DIR 00:06:31.325 #define SPDK_CONFIG_WERROR 1 00:06:31.325 #define SPDK_CONFIG_WPDK_DIR 00:06:31.325 #undef SPDK_CONFIG_XNVME 00:06:31.325 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:31.325 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:31.325 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:31.325 +++ [[ -e /bin/wpdk_common.sh ]] 00:06:31.325 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.325 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.325 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:31.325 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:31.325 ++++ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:31.325 ++++ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:31.325 ++++ export PATH 00:06:31.325 ++++ echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:31.325 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:31.325 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:31.325 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:31.325 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:31.325 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:31.325 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:31.325 +++ TEST_TAG=N/A 00:06:31.325 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:31.325 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:31.325 ++++ uname -s 00:06:31.325 +++ PM_OS=Linux 00:06:31.325 +++ MONITOR_RESOURCES_SUDO=() 00:06:31.325 +++ declare -A MONITOR_RESOURCES_SUDO 00:06:31.325 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:31.325 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:31.325 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:31.325 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:31.325 +++ SUDO[0]= 00:06:31.325 +++ SUDO[1]='sudo -E' 00:06:31.325 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:31.325 +++ [[ Linux == FreeBSD ]] 00:06:31.325 +++ [[ Linux == Linux ]] 00:06:31.325 +++ [[ QEMU != QEMU ]] 00:06:31.325 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:06:31.325 ++ : 1 00:06:31.325 ++ export RUN_NIGHTLY 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_RUN_VALGRIND 00:06:31.325 ++ : 1 00:06:31.325 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:06:31.325 ++ : 1 00:06:31.325 ++ export SPDK_TEST_UNITTEST 00:06:31.325 ++ : 00:06:31.325 ++ export SPDK_TEST_AUTOBUILD 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_RELEASE_BUILD 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_ISAL 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_ISCSI 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_ISCSI_INITIATOR 00:06:31.325 ++ : 1 00:06:31.325 ++ export SPDK_TEST_NVME 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_NVME_PMR 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_NVME_BP 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_NVME_CLI 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_NVME_CUSE 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_NVME_FDP 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_NVMF 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_VFIOUSER 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_VFIOUSER_QEMU 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_FUZZER 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_FUZZER_SHORT 00:06:31.325 ++ : rdma 00:06:31.325 ++ export SPDK_TEST_NVMF_TRANSPORT 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_RBD 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_VHOST 00:06:31.325 ++ : 1 00:06:31.325 ++ export SPDK_TEST_BLOCKDEV 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_IOAT 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_BLOBFS 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_VHOST_INIT 00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_LVOL 
00:06:31.325 ++ : 0 00:06:31.325 ++ export SPDK_TEST_VBDEV_COMPRESS 00:06:31.325 ++ : 1 00:06:31.326 ++ export SPDK_RUN_ASAN 00:06:31.326 ++ : 1 00:06:31.326 ++ export SPDK_RUN_UBSAN 00:06:31.326 ++ : /home/vagrant/spdk_repo/dpdk/build 00:06:31.326 ++ export SPDK_RUN_EXTERNAL_DPDK 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_RUN_NON_ROOT 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_CRYPTO 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_FTL 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_OCF 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_VMD 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_OPAL 00:06:31.326 ++ : v22.11.4 00:06:31.326 ++ export SPDK_TEST_NATIVE_DPDK 00:06:31.326 ++ : true 00:06:31.326 ++ export SPDK_AUTOTEST_X 00:06:31.326 ++ : 1 00:06:31.326 ++ export SPDK_TEST_RAID5 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_URING 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_USDT 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_USE_IGB_UIO 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_SCHEDULER 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_SCANBUILD 00:06:31.326 ++ : 00:06:31.326 ++ export SPDK_TEST_NVMF_NICS 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_SMA 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_DAOS 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_XNVME 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_ACCEL_DSA 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_ACCEL_IAA 00:06:31.326 ++ : 00:06:31.326 ++ export SPDK_TEST_FUZZER_TARGET 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_TEST_NVMF_MDNS 00:06:31.326 ++ : 0 00:06:31.326 ++ export SPDK_JSONRPC_GO_CLIENT 00:06:31.326 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:31.326 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:31.326 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:31.326 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:31.326 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:31.326 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:31.326 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:31.326 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:31.326 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:31.326 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:06:31.326 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:31.326 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:31.326 ++ export PYTHONDONTWRITEBYTECODE=1 00:06:31.326 ++ 
PYTHONDONTWRITEBYTECODE=1 00:06:31.326 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:31.326 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:31.326 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:31.326 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:31.326 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:06:31.326 ++ rm -rf /var/tmp/asan_suppression_file 00:06:31.598 ++ cat 00:06:31.598 ++ echo leak:libfuse3.so 00:06:31.598 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:31.598 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:31.598 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:31.598 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:31.598 ++ '[' -z /var/spdk/dependencies ']' 00:06:31.598 ++ export DEPENDENCY_DIR 00:06:31.598 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:31.598 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:31.598 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:31.598 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:31.598 ++ export QEMU_BIN= 00:06:31.598 ++ QEMU_BIN= 00:06:31.598 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:31.598 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:31.598 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:31.598 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:31.598 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:31.598 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:31.598 ++ '[' 0 -eq 0 ']' 00:06:31.598 ++ export valgrind= 00:06:31.598 ++ valgrind= 00:06:31.598 +++ uname -s 00:06:31.598 ++ '[' Linux = Linux ']' 00:06:31.598 ++ HUGEMEM=4096 00:06:31.598 ++ export CLEAR_HUGE=yes 00:06:31.598 ++ CLEAR_HUGE=yes 00:06:31.598 ++ [[ 0 -eq 1 ]] 00:06:31.598 ++ [[ 0 -eq 1 ]] 00:06:31.598 ++ MAKE=make 00:06:31.598 +++ nproc 00:06:31.598 ++ MAKEFLAGS=-j10 00:06:31.598 ++ export HUGEMEM=4096 00:06:31.598 ++ HUGEMEM=4096 00:06:31.598 ++ NO_HUGE=() 00:06:31.598 ++ TEST_MODE= 00:06:31.598 ++ [[ -z '' ]] 00:06:31.598 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:31.598 ++ exec 00:06:31.598 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:31.598 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:06:31.598 ++ set_test_storage 2147483648 00:06:31.598 ++ [[ -v testdir ]] 00:06:31.598 ++ local requested_size=2147483648 00:06:31.598 ++ local mount target_dir 00:06:31.598 ++ local -A mounts fss sizes avails uses 00:06:31.598 ++ local source fs size avail mount use 00:06:31.598 ++ local storage_fallback storage_candidates 00:06:31.598 +++ mktemp -udt spdk.XXXXXX 00:06:31.598 ++ storage_fallback=/tmp/spdk.KIhiA6 00:06:31.598 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:31.598 ++ [[ -n '' ]] 00:06:31.598 ++ [[ -n '' ]] 00:06:31.598 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.KIhiA6/tests/unit /tmp/spdk.KIhiA6 00:06:31.598 ++ requested_size=2214592512 00:06:31.598 ++ read -r source fs size use avail _ mount 00:06:31.598 +++ df -T 00:06:31.598 +++ grep -v 
Filesystem 00:06:31.598 ++ mounts["$mount"]=tmpfs 00:06:31.598 ++ fss["$mount"]=tmpfs 00:06:31.598 ++ avails["$mount"]=1252958208 00:06:31.598 ++ sizes["$mount"]=1254027264 00:06:31.598 ++ uses["$mount"]=1069056 00:06:31.598 ++ read -r source fs size use avail _ mount 00:06:31.598 ++ mounts["$mount"]=/dev/vda1 00:06:31.598 ++ fss["$mount"]=ext4 00:06:31.598 ++ avails["$mount"]=9135882240 00:06:31.598 ++ sizes["$mount"]=19681529856 00:06:31.598 ++ uses["$mount"]=10528870400 00:06:31.598 ++ read -r source fs size use avail _ mount 00:06:31.598 ++ mounts["$mount"]=tmpfs 00:06:31.598 ++ fss["$mount"]=tmpfs 00:06:31.598 ++ avails["$mount"]=6270119936 00:06:31.598 ++ sizes["$mount"]=6270119936 00:06:31.598 ++ uses["$mount"]=0 00:06:31.598 ++ read -r source fs size use avail _ mount 00:06:31.598 ++ mounts["$mount"]=tmpfs 00:06:31.598 ++ fss["$mount"]=tmpfs 00:06:31.598 ++ avails["$mount"]=5242880 00:06:31.598 ++ sizes["$mount"]=5242880 00:06:31.598 ++ uses["$mount"]=0 00:06:31.598 ++ read -r source fs size use avail _ mount 00:06:31.598 ++ mounts["$mount"]=/dev/vda16 00:06:31.598 ++ fss["$mount"]=ext4 00:06:31.598 ++ avails["$mount"]=777306112 00:06:31.598 ++ sizes["$mount"]=923156480 00:06:31.598 ++ uses["$mount"]=81207296 00:06:31.598 ++ read -r source fs size use avail _ mount 00:06:31.598 ++ mounts["$mount"]=/dev/vda15 00:06:31.598 ++ fss["$mount"]=vfat 00:06:31.598 ++ avails["$mount"]=103000064 00:06:31.598 ++ sizes["$mount"]=109395968 00:06:31.598 ++ uses["$mount"]=6395904 00:06:31.598 ++ read -r source fs size use avail _ mount 00:06:31.598 ++ mounts["$mount"]=tmpfs 00:06:31.598 ++ fss["$mount"]=tmpfs 00:06:31.598 ++ avails["$mount"]=1254010880 00:06:31.598 ++ sizes["$mount"]=1254023168 00:06:31.598 ++ uses["$mount"]=12288 00:06:31.598 ++ read -r source fs size use avail _ mount 00:06:31.598 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:06:31.598 ++ fss["$mount"]=fuse.sshfs 00:06:31.598 ++ avails["$mount"]=92730109952 00:06:31.598 ++ sizes["$mount"]=105088212992 00:06:31.598 ++ uses["$mount"]=6972669952 00:06:31.598 ++ read -r source fs size use avail _ mount 00:06:31.598 ++ printf '* Looking for test storage...\n' 00:06:31.598 * Looking for test storage... 
00:06:31.598 ++ local target_space new_size 00:06:31.598 ++ for target_dir in "${storage_candidates[@]}" 00:06:31.598 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:06:31.598 +++ awk '$1 !~ /Filesystem/{print $6}' 00:06:31.598 ++ mount=/ 00:06:31.598 ++ target_space=9135882240 00:06:31.598 ++ (( target_space == 0 || target_space < requested_size )) 00:06:31.598 ++ (( target_space >= requested_size )) 00:06:31.598 ++ [[ ext4 == tmpfs ]] 00:06:31.598 ++ [[ ext4 == ramfs ]] 00:06:31.598 ++ [[ / == / ]] 00:06:31.598 ++ new_size=12743462912 00:06:31.598 ++ (( new_size * 100 / sizes[/] > 95 )) 00:06:31.598 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:31.598 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:31.598 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:06:31.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:06:31.598 ++ return 0 00:06:31.598 ++ set -o errtrace 00:06:31.598 ++ shopt -s extdebug 00:06:31.598 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:06:31.598 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:31.598 15:00:26 unittest -- common/autotest_common.sh@1687 -- # true 00:06:31.598 15:00:26 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:31.598 15:00:26 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:06:31.599 15:00:26 unittest -- common/autotest_common.sh@29 -- # exec 00:06:31.599 15:00:26 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:31.599 15:00:26 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:31.599 15:00:26 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:31.599 15:00:26 unittest -- common/autotest_common.sh@18 -- # set -x 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=gcc 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@181 -- # hash lcov 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@181 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@181 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@182 -- # cov_avail=yes 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@186 -- # '[' yes = yes ']' 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@188 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@191 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@193 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@201 -- # export 'LCOV_OPTS= 00:06:31.599 --rc lcov_branch_coverage=1 00:06:31.599 --rc lcov_function_coverage=1 00:06:31.599 --rc genhtml_branch_coverage=1 00:06:31.599 --rc genhtml_function_coverage=1 00:06:31.599 --rc genhtml_legend=1 00:06:31.599 --rc geninfo_all_blocks=1 00:06:31.599 ' 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@201 -- # 
LCOV_OPTS=' 00:06:31.599 --rc lcov_branch_coverage=1 00:06:31.599 --rc lcov_function_coverage=1 00:06:31.599 --rc genhtml_branch_coverage=1 00:06:31.599 --rc genhtml_function_coverage=1 00:06:31.599 --rc genhtml_legend=1 00:06:31.599 --rc geninfo_all_blocks=1 00:06:31.599 ' 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@202 -- # export 'LCOV=lcov 00:06:31.599 --rc lcov_branch_coverage=1 00:06:31.599 --rc lcov_function_coverage=1 00:06:31.599 --rc genhtml_branch_coverage=1 00:06:31.599 --rc genhtml_function_coverage=1 00:06:31.599 --rc genhtml_legend=1 00:06:31.599 --rc geninfo_all_blocks=1 00:06:31.599 --no-external' 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@202 -- # LCOV='lcov 00:06:31.599 --rc lcov_branch_coverage=1 00:06:31.599 --rc lcov_function_coverage=1 00:06:31.599 --rc genhtml_branch_coverage=1 00:06:31.599 --rc genhtml_function_coverage=1 00:06:31.599 --rc genhtml_legend=1 00:06:31.599 --rc geninfo_all_blocks=1 00:06:31.599 --no-external' 00:06:31.599 15:00:26 unittest -- unit/unittest.sh@204 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:06:38.155 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:38.155 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:24.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:07:24.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:07:24.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:07:24.870 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:07:24.870 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:07:24.871 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 
00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:07:24.871 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:07:24.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:07:24.871 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:07:24.872 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:07:24.872 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:07:24.872 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:07:34.852 15:01:29 unittest -- unit/unittest.sh@208 -- # uname -m 00:07:34.852 15:01:29 unittest -- unit/unittest.sh@208 -- # '[' x86_64 = aarch64 ']' 00:07:34.852 15:01:29 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:07:34.852 15:01:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.852 15:01:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.852 15:01:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:34.852 ************************************ 00:07:34.852 START TEST unittest_pci_event 00:07:34.852 
************************************ 00:07:34.852 15:01:29 unittest.unittest_pci_event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:07:34.852 00:07:34.852 00:07:34.852 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.852 http://cunit.sourceforge.net/ 00:07:34.852 00:07:34.852 00:07:34.852 Suite: pci_event 00:07:34.852 Test: test_pci_parse_event ...[2024-07-23 15:01:29.762159] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:07:34.852 [2024-07-23 15:01:29.762597] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:07:34.852 passed 00:07:34.852 00:07:34.852 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.852 suites 1 1 n/a 0 0 00:07:34.852 tests 1 1 1 0 0 00:07:34.852 asserts 15 15 15 0 n/a 00:07:34.852 00:07:34.852 Elapsed time = 0.001 seconds 00:07:34.852 00:07:34.852 real 0m0.040s 00:07:34.852 user 0m0.014s 00:07:34.852 sys 0m0.020s 00:07:34.852 ************************************ 00:07:34.852 END TEST unittest_pci_event 00:07:34.852 ************************************ 00:07:34.852 15:01:29 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.852 15:01:29 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:07:34.852 15:01:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:34.852 15:01:29 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:07:34.852 15:01:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.852 15:01:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.852 15:01:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:34.852 ************************************ 00:07:34.852 START TEST unittest_include 00:07:34.852 ************************************ 00:07:34.852 15:01:29 unittest.unittest_include -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:07:34.852 00:07:34.852 00:07:34.852 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.852 http://cunit.sourceforge.net/ 00:07:34.852 00:07:34.852 00:07:34.852 Suite: histogram 00:07:34.852 Test: histogram_test ...passed 00:07:34.852 Test: histogram_merge ...passed 00:07:34.852 00:07:34.852 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.852 suites 1 1 n/a 0 0 00:07:34.852 tests 2 2 2 0 0 00:07:34.852 asserts 50 50 50 0 n/a 00:07:34.852 00:07:34.852 Elapsed time = 0.005 seconds 00:07:34.852 00:07:34.852 real 0m0.043s 00:07:34.852 user 0m0.023s 00:07:34.852 sys 0m0.020s 00:07:34.852 15:01:29 unittest.unittest_include -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.852 ************************************ 00:07:34.852 15:01:29 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:07:34.852 END TEST unittest_include 00:07:34.852 ************************************ 00:07:34.852 15:01:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:34.852 15:01:29 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:07:34.852 15:01:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.852 15:01:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.852 
15:01:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:34.852 ************************************ 00:07:34.852 START TEST unittest_bdev 00:07:34.852 ************************************ 00:07:34.852 15:01:29 unittest.unittest_bdev -- common/autotest_common.sh@1123 -- # unittest_bdev 00:07:34.852 15:01:29 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:07:34.852 00:07:34.852 00:07:34.852 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.852 http://cunit.sourceforge.net/ 00:07:34.852 00:07:34.852 00:07:34.852 Suite: bdev 00:07:34.852 Test: bytes_to_blocks_test ...passed 00:07:34.852 Test: num_blocks_test ...passed 00:07:34.853 Test: io_valid_test ...passed 00:07:34.853 Test: open_write_test ...[2024-07-23 15:01:30.004444] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:07:34.853 [2024-07-23 15:01:30.004839] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:07:34.853 [2024-07-23 15:01:30.005000] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:07:34.853 passed 00:07:34.853 Test: claim_test ...passed 00:07:34.853 Test: alias_add_del_test ...[2024-07-23 15:01:30.087172] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:07:34.853 [2024-07-23 15:01:30.087283] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4663:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:07:34.853 [2024-07-23 15:01:30.087332] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:07:34.853 passed 00:07:34.853 Test: get_device_stat_test ...passed 00:07:34.853 Test: bdev_io_types_test ...passed 00:07:34.853 Test: bdev_io_wait_test ...passed 00:07:34.853 Test: bdev_io_spans_split_test ...passed 00:07:34.853 Test: bdev_io_boundary_split_test ...passed 00:07:34.853 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-23 15:01:30.248588] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3214:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:07:34.853 passed 00:07:35.111 Test: bdev_io_mix_split_test ...passed 00:07:35.111 Test: bdev_io_split_with_io_wait ...passed 00:07:35.111 Test: bdev_io_write_unit_split_test ...[2024-07-23 15:01:30.342595] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:07:35.111 [2024-07-23 15:01:30.342707] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:07:35.111 [2024-07-23 15:01:30.342733] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:07:35.111 [2024-07-23 15:01:30.342770] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:07:35.111 passed 00:07:35.111 Test: bdev_io_alignment_with_boundary ...passed 00:07:35.111 Test: bdev_io_alignment ...passed 00:07:35.111 Test: bdev_histograms ...passed 00:07:35.111 Test: bdev_write_zeroes ...passed 00:07:35.111 Test: bdev_compare_and_write ...passed 00:07:35.369 Test: bdev_compare ...passed 00:07:35.369 Test: 
bdev_compare_emulated ...passed 00:07:35.370 Test: bdev_zcopy_write ...passed 00:07:35.370 Test: bdev_zcopy_read ...passed 00:07:35.370 Test: bdev_open_while_hotremove ...passed 00:07:35.370 Test: bdev_close_while_hotremove ...passed 00:07:35.370 Test: bdev_open_ext_test ...passed 00:07:35.370 Test: bdev_open_ext_unregister ...passed 00:07:35.370 Test: bdev_set_io_timeout ...[2024-07-23 15:01:30.714053] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8217:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:07:35.370 [2024-07-23 15:01:30.714264] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8217:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:07:35.370 passed 00:07:35.370 Test: bdev_set_qd_sampling ...passed 00:07:35.370 Test: lba_range_overlap ...passed 00:07:35.628 Test: lock_lba_range_check_ranges ...passed 00:07:35.628 Test: lock_lba_range_with_io_outstanding ...passed 00:07:35.628 Test: lock_lba_range_overlapped ...passed 00:07:35.628 Test: bdev_quiesce ...[2024-07-23 15:01:30.870251] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10186:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:07:35.628 passed 00:07:35.628 Test: bdev_io_abort ...passed 00:07:35.628 Test: bdev_unmap ...passed 00:07:35.628 Test: bdev_write_zeroes_split_test ...passed 00:07:35.628 Test: bdev_set_options_test ...passed 00:07:35.628 Test: bdev_get_memory_domains ...passed 00:07:35.628 Test: bdev_io_ext ...[2024-07-23 15:01:30.982930] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:07:35.628 passed 00:07:35.628 Test: bdev_io_ext_no_opts ...passed 00:07:35.886 Test: bdev_io_ext_invalid_opts ...passed 00:07:35.886 Test: bdev_io_ext_split ...passed 00:07:35.886 Test: bdev_io_ext_bounce_buffer ...passed 00:07:35.886 Test: bdev_register_uuid_alias ...[2024-07-23 15:01:31.138508] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 56ec87d7-e846-4b75-8e10-a6f87560336b already exists 00:07:35.886 [2024-07-23 15:01:31.138588] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:56ec87d7-e846-4b75-8e10-a6f87560336b alias for bdev bdev0 00:07:35.886 passed 00:07:35.886 Test: bdev_unregister_by_name ...passed 00:07:35.886 Test: for_each_bdev_test ...[2024-07-23 15:01:31.163182] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8007:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:07:35.886 [2024-07-23 15:01:31.163250] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8015:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:07:35.886 passed 00:07:35.886 Test: bdev_seek_test ...passed 00:07:35.886 Test: bdev_copy ...passed 00:07:35.886 Test: bdev_copy_split_test ...passed 00:07:35.886 Test: examine_locks ...passed 00:07:35.886 Test: claim_v2_rwo ...[2024-07-23 15:01:31.256504] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:35.886 [2024-07-23 15:01:31.256572] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:35.886 [2024-07-23 15:01:31.256626] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:35.886 [2024-07-23 15:01:31.256650] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:35.886 [2024-07-23 15:01:31.256669] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:35.886 [2024-07-23 15:01:31.256710] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8736:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:07:35.886 passed 00:07:35.886 Test: claim_v2_rom ...[2024-07-23 15:01:31.256870] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:35.886 [2024-07-23 15:01:31.256895] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:35.886 [2024-07-23 15:01:31.256912] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:35.886 [2024-07-23 15:01:31.256925] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:35.886 passed 00:07:35.886 Test: claim_v2_rwm ...[2024-07-23 15:01:31.256966] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8779:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:07:35.887 [2024-07-23 15:01:31.256993] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:35.887 [2024-07-23 15:01:31.257134] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8809:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:07:35.887 [2024-07-23 15:01:31.257166] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:35.887 [2024-07-23 15:01:31.257191] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:35.887 [2024-07-23 15:01:31.257205] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:35.887 [2024-07-23 15:01:31.257221] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 
already claimed: type read_many_write_many by module bdev_ut 00:07:35.887 [2024-07-23 15:01:31.257239] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8829:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:07:35.887 [2024-07-23 15:01:31.257287] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8809:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:07:35.887 passed 00:07:35.887 Test: claim_v2_existing_writer ...passed 00:07:35.887 Test: claim_v2_existing_v1 ...[2024-07-23 15:01:31.257436] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:35.887 [2024-07-23 15:01:31.257457] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:35.887 [2024-07-23 15:01:31.257560] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:35.887 passed 00:07:35.887 Test: claim_v1_existing_v2 ...[2024-07-23 15:01:31.257597] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:35.887 [2024-07-23 15:01:31.257615] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:35.887 passed 00:07:35.887 Test: examine_claimed ...[2024-07-23 15:01:31.257727] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:35.887 [2024-07-23 15:01:31.257753] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:35.887 [2024-07-23 15:01:31.257813] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:35.887 [2024-07-23 15:01:31.258131] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:07:35.887 passed 00:07:35.887 00:07:35.887 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.887 suites 1 1 n/a 0 0 00:07:35.887 tests 59 59 59 0 0 00:07:35.887 asserts 4599 4599 4599 0 n/a 00:07:35.887 00:07:35.887 Elapsed time = 1.305 seconds 00:07:35.887 15:01:31 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:07:35.887 00:07:35.887 00:07:35.887 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.887 http://cunit.sourceforge.net/ 00:07:35.887 00:07:35.887 00:07:35.887 Suite: nvme 00:07:35.887 Test: test_create_ctrlr ...passed 00:07:35.887 Test: test_reset_ctrlr ...[2024-07-23 15:01:31.312200] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:07:35.887 passed 00:07:36.145 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:07:36.145 Test: test_failover_ctrlr ...passed 00:07:36.145 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-23 15:01:31.315856] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.145 [2024-07-23 15:01:31.316171] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.145 [2024-07-23 15:01:31.316449] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.145 passed 00:07:36.145 Test: test_pending_reset ...[2024-07-23 15:01:31.318664] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.145 [2024-07-23 15:01:31.319025] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.145 passed 00:07:36.145 Test: test_attach_ctrlr ...[2024-07-23 15:01:31.320524] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:07:36.145 passed 00:07:36.145 Test: test_aer_cb ...passed 00:07:36.145 Test: test_submit_nvme_cmd ...passed 00:07:36.145 Test: test_add_remove_trid ...passed 00:07:36.145 Test: test_abort ...[2024-07-23 15:01:31.325121] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7480:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:07:36.145 passed 00:07:36.145 Test: test_get_io_qpair ...passed 00:07:36.145 Test: test_bdev_unregister ...passed 00:07:36.145 Test: test_compare_ns ...passed 00:07:36.145 Test: test_init_ana_log_page ...passed 00:07:36.145 Test: test_get_memory_domains ...passed 00:07:36.145 Test: test_reconnect_qpair ...[2024-07-23 15:01:31.328747] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.145 passed 00:07:36.145 Test: test_create_bdev_ctrlr ...[2024-07-23 15:01:31.329504] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5407:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:07:36.145 passed 00:07:36.145 Test: test_add_multi_ns_to_bdev ...[2024-07-23 15:01:31.331273] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4574:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:07:36.145 passed 00:07:36.145 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:07:36.145 Test: test_admin_path ...passed 00:07:36.145 Test: test_reset_bdev_ctrlr ...passed 00:07:36.145 Test: test_find_io_path ...passed 00:07:36.145 Test: test_retry_io_if_ana_state_is_updating ...passed 00:07:36.145 Test: test_retry_io_for_io_path_error ...passed 00:07:36.145 Test: test_retry_io_count ...passed 00:07:36.145 Test: test_concurrent_read_ana_log_page ...passed 00:07:36.145 Test: test_retry_io_for_ana_error ...passed 00:07:36.145 Test: test_check_io_error_resiliency_params ...passed 00:07:36.145 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:07:36.146 Test: test_reconnect_ctrlr ...[2024-07-23 15:01:31.340516] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:07:36.146 [2024-07-23 15:01:31.340568] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6108:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:07:36.146 [2024-07-23 15:01:31.340592] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6117:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:07:36.146 [2024-07-23 15:01:31.340608] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6120:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:07:36.146 [2024-07-23 15:01:31.340649] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6132:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:07:36.146 [2024-07-23 15:01:31.340675] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6132:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:07:36.146 [2024-07-23 15:01:31.340695] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6112:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:07:36.146 [2024-07-23 15:01:31.340712] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6127:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:07:36.146 [2024-07-23 15:01:31.340732] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6124:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:07:36.146 [2024-07-23 15:01:31.341822] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.146 [2024-07-23 15:01:31.341994] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.146 [2024-07-23 15:01:31.342345] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.146 [2024-07-23 15:01:31.342497] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.146 [2024-07-23 15:01:31.342652] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.146 passed 00:07:36.146 Test: test_retry_failover_ctrlr ...[2024-07-23 15:01:31.343104] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.146 passed 00:07:36.146 Test: test_fail_path ...[2024-07-23 15:01:31.343831] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.146 [2024-07-23 15:01:31.344025] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.146 [2024-07-23 15:01:31.344188] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
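For context, the bdev_nvme_check_io_error_resiliency_params failures logged above state how the three recovery knobs must relate: ctrlr_loss_timeout_sec may not be below -1, reconnect_delay_sec must be non-zero whenever a loss timeout is set (and zero, together with fast_io_fail_timeout_sec, when the loss timeout is 0), and neither delay may exceed the timeouts above it. A standalone restatement of those rules in C (an illustrative sketch only, not SPDK's actual code; treating -1 as "retry forever" is an assumption):

#include <stdbool.h>
#include <stdint.h>

/* Returns true when the combination of parameters would be accepted,
 * mirroring the error cases printed by the test above. */
static bool
resiliency_params_valid(int32_t ctrlr_loss_timeout_sec,
                        uint32_t reconnect_delay_sec,
                        uint32_t fast_io_fail_timeout_sec)
{
    if (ctrlr_loss_timeout_sec < -1) {
        return false;                        /* can't be less than -1 */
    }
    if (ctrlr_loss_timeout_sec == 0) {
        /* both delays must be 0 if the loss timeout is 0 */
        return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
    }
    if (reconnect_delay_sec == 0) {
        return false;                        /* can't be 0 if a loss timeout is set */
    }
    if (ctrlr_loss_timeout_sec > 0 &&
        reconnect_delay_sec > (uint32_t)ctrlr_loss_timeout_sec) {
        return false;                        /* delay can't exceed the loss timeout */
    }
    if (fast_io_fail_timeout_sec != 0) {
        if (ctrlr_loss_timeout_sec > 0 &&
            fast_io_fail_timeout_sec > (uint32_t)ctrlr_loss_timeout_sec) {
            return false;                    /* fast-io-fail can't exceed the loss timeout */
        }
        if (reconnect_delay_sec > fast_io_fail_timeout_sec) {
            return false;                    /* delay can't exceed fast-io-fail */
        }
    }
    return true;
}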
00:07:36.146 [2024-07-23 15:01:31.344306] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.146 [2024-07-23 15:01:31.344437] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.146 passed 00:07:36.146 Test: test_nvme_ns_cmp ...passed 00:07:36.146 Test: test_ana_transition ...passed 00:07:36.146 Test: test_set_preferred_path ...passed 00:07:36.146 Test: test_find_next_io_path ...passed 00:07:36.146 Test: test_find_io_path_min_qd ...passed 00:07:36.146 Test: test_disable_auto_failback ...[2024-07-23 15:01:31.347066] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.146 passed 00:07:36.146 Test: test_set_multipath_policy ...passed 00:07:36.146 Test: test_uuid_generation ...passed 00:07:36.146 Test: test_retry_io_to_same_path ...passed 00:07:36.146 Test: test_race_between_reset_and_disconnected ...passed 00:07:36.146 Test: test_ctrlr_op_rpc ...passed 00:07:36.146 Test: test_bdev_ctrlr_op_rpc ...passed 00:07:36.146 Test: test_disable_enable_ctrlr ...[2024-07-23 15:01:31.352880] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.146 [2024-07-23 15:01:31.353254] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:36.146 passed 00:07:36.146 Test: test_delete_ctrlr_done ...passed 00:07:36.146 Test: test_ns_remove_during_reset ...passed 00:07:36.146 Test: test_io_path_is_current ...passed 00:07:36.146 00:07:36.146 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.146 suites 1 1 n/a 0 0 00:07:36.146 tests 49 49 49 0 0 00:07:36.146 asserts 3578 3578 3578 0 n/a 00:07:36.146 00:07:36.146 Elapsed time = 0.043 seconds 00:07:36.146 15:01:31 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:07:36.146 00:07:36.146 00:07:36.146 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.146 http://cunit.sourceforge.net/ 00:07:36.146 00:07:36.146 Test Options 00:07:36.146 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:07:36.146 00:07:36.146 Suite: raid 00:07:36.146 Test: test_create_raid ...passed 00:07:36.146 Test: test_create_raid_superblock ...passed 00:07:36.146 Test: test_delete_raid ...passed 00:07:36.146 Test: test_create_raid_invalid_args ...[2024-07-23 15:01:31.410179] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1507:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:07:36.146 [2024-07-23 15:01:31.410529] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1501:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:07:36.146 [2024-07-23 15:01:31.411141] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1491:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:07:36.146 [2024-07-23 15:01:31.411315] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3283:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:07:36.146 [2024-07-23 15:01:31.411347] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3461:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:07:36.146 [2024-07-23 
15:01:31.412222] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3283:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:07:36.146 [2024-07-23 15:01:31.412259] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3461:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:07:36.146 passed 00:07:36.146 Test: test_delete_raid_invalid_args ...passed 00:07:36.146 Test: test_io_channel ...passed 00:07:36.146 Test: test_reset_io ...passed 00:07:36.146 Test: test_multi_raid ...passed 00:07:36.146 Test: test_io_type_supported ...passed 00:07:36.146 Test: test_raid_json_dump_info ...passed 00:07:36.146 Test: test_context_size ...passed 00:07:36.146 Test: test_raid_level_conversions ...passed 00:07:36.146 Test: test_raid_io_split ...passed 00:07:36.146 Test: test_raid_process ...passed 00:07:36.146 Test: test_raid_process_with_qos ...passed 00:07:36.146 00:07:36.146 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.146 suites 1 1 n/a 0 0 00:07:36.146 tests 15 15 15 0 0 00:07:36.146 asserts 6602 6602 6602 0 n/a 00:07:36.146 00:07:36.146 Elapsed time = 0.025 seconds 00:07:36.146 15:01:31 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:07:36.146 00:07:36.146 00:07:36.146 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.146 http://cunit.sourceforge.net/ 00:07:36.146 00:07:36.146 00:07:36.146 Suite: raid_sb 00:07:36.146 Test: test_raid_bdev_write_superblock ...passed 00:07:36.146 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:07:36.146 Test: test_raid_bdev_parse_superblock ...[2024-07-23 15:01:31.478304] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev passed 00:07:36.146 Suite: raid_sb_md 00:07:36.146 Test: test_raid_bdev_write_superblock ...passed 00:07:36.146 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:07:36.146 Test: test_raid_bdev_parse_superblock ...passed 00:07:36.146 Suite: raid_sb_md_interleaved 00:07:36.146 Test: test_raid_bdev_write_superblock ...passed 00:07:36.146 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:07:36.146 Test: test_raid_bdev_parse_superblock ...passed 00:07:36.146 00:07:36.146 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.146 suites 3 3 n/a 0 0 00:07:36.146 tests 9 9 9 0 0 00:07:36.146 asserts 139 139 139 0 n/a 00:07:36.146 00:07:36.146 Elapsed time = 0.002 seconds 00:07:36.147 [2024-07-23 15:01:31.478973] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:07:36.147 [2024-07-23 15:01:31.479403] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:07:36.147 00:07:36.147 15:01:31 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:07:36.147 00:07:36.147 00:07:36.147 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.147 http://cunit.sourceforge.net/ 00:07:36.147 00:07:36.147 00:07:36.147 Suite: concat 00:07:36.147 Test: test_concat_start ...passed 00:07:36.147 Test: test_concat_rw ...passed 00:07:36.147 Test: test_concat_null_payload ...passed 00:07:36.147 00:07:36.147 Run Summary: Type Total Ran
Passed Failed Inactive 00:07:36.147 suites 1 1 n/a 0 0 00:07:36.147 tests 3 3 3 0 0 00:07:36.147 asserts 8460 8460 8460 0 n/a 00:07:36.147 00:07:36.147 Elapsed time = 0.014 seconds 00:07:36.147 15:01:31 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:07:36.405 00:07:36.405 00:07:36.405 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.405 http://cunit.sourceforge.net/ 00:07:36.405 00:07:36.405 00:07:36.405 Suite: raid0 00:07:36.405 Test: test_write_io ...passed 00:07:36.405 Test: test_read_io ...passed 00:07:36.405 Test: test_unmap_io ...passed 00:07:36.405 Test: test_io_failure ...passed 00:07:36.405 Suite: raid0_dif 00:07:36.405 Test: test_write_io ...passed 00:07:36.405 Test: test_read_io ...passed 00:07:36.405 Test: test_unmap_io ...passed 00:07:36.405 Test: test_io_failure ...passed 00:07:36.405 00:07:36.405 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.405 suites 2 2 n/a 0 0 00:07:36.405 tests 8 8 8 0 0 00:07:36.405 asserts 368291 368291 368291 0 n/a 00:07:36.405 00:07:36.405 Elapsed time = 0.145 seconds 00:07:36.405 15:01:31 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:07:36.405 00:07:36.405 00:07:36.405 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.405 http://cunit.sourceforge.net/ 00:07:36.405 00:07:36.405 00:07:36.405 Suite: raid1 00:07:36.405 Test: test_raid1_start ...passed 00:07:36.405 Test: test_raid1_read_balancing ...passed 00:07:36.405 Test: test_raid1_write_error ...passed 00:07:36.405 Test: test_raid1_read_error ...passed 00:07:36.405 00:07:36.405 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.405 suites 1 1 n/a 0 0 00:07:36.405 tests 4 4 4 0 0 00:07:36.405 asserts 4374 4374 4374 0 n/a 00:07:36.405 00:07:36.405 Elapsed time = 0.006 seconds 00:07:36.405 15:01:31 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:07:36.664 00:07:36.664 00:07:36.664 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.664 http://cunit.sourceforge.net/ 00:07:36.664 00:07:36.664 00:07:36.664 Suite: zone 00:07:36.664 Test: test_zone_get_operation ...passed 00:07:36.664 Test: test_bdev_zone_get_info ...passed 00:07:36.664 Test: test_bdev_zone_management ...passed 00:07:36.664 Test: test_bdev_zone_append ...passed 00:07:36.664 Test: test_bdev_zone_append_with_md ...passed 00:07:36.664 Test: test_bdev_zone_appendv ...passed 00:07:36.664 Test: test_bdev_zone_appendv_with_md ...passed 00:07:36.664 Test: test_bdev_io_get_append_location ...passed 00:07:36.664 00:07:36.664 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.664 suites 1 1 n/a 0 0 00:07:36.664 tests 8 8 8 0 0 00:07:36.664 asserts 94 94 94 0 n/a 00:07:36.664 00:07:36.664 Elapsed time = 0.001 seconds 00:07:36.664 15:01:31 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:07:36.664 00:07:36.664 00:07:36.664 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.664 http://cunit.sourceforge.net/ 00:07:36.664 00:07:36.664 00:07:36.664 Suite: gpt_parse 00:07:36.664 Test: test_parse_mbr_and_primary ...[2024-07-23 15:01:31.886365] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:36.664 [2024-07-23 15:01:31.886656] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 
259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:36.664 [2024-07-23 15:01:31.886755] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:07:36.664 [2024-07-23 15:01:31.886814] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:07:36.664 [2024-07-23 15:01:31.886869] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:07:36.664 [2024-07-23 15:01:31.886904] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:07:36.664 passed 00:07:36.664 Test: test_parse_secondary ...[2024-07-23 15:01:31.887747] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:07:36.664 [2024-07-23 15:01:31.887784] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:07:36.665 [2024-07-23 15:01:31.887845] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:07:36.665 [2024-07-23 15:01:31.887884] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:07:36.665 passed 00:07:36.665 Test: test_check_mbr ...passed 00:07:36.665 Test: test_read_header ...[2024-07-23 15:01:31.888643] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:36.665 [2024-07-23 15:01:31.888697] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:36.665 [2024-07-23 15:01:31.888907] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:07:36.665 passed 00:07:36.665 Test: test_read_partitions ...[2024-07-23 15:01:31.888962] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:07:36.665 [2024-07-23 15:01:31.889022] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:07:36.665 [2024-07-23 15:01:31.889078] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:07:36.665 [2024-07-23 15:01:31.889126] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:07:36.665 [2024-07-23 15:01:31.889160] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:07:36.665 [2024-07-23 15:01:31.889287] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:07:36.665 [2024-07-23 15:01:31.889340] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:07:36.665 [2024-07-23 15:01:31.889386] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:07:36.665 [2024-07-23 15:01:31.889420] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:07:36.665 
[2024-07-23 15:01:31.889816] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:07:36.665 passed 00:07:36.665 00:07:36.665 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.665 suites 1 1 n/a 0 0 00:07:36.665 tests 5 5 5 0 0 00:07:36.665 asserts 33 33 33 0 n/a 00:07:36.665 00:07:36.665 Elapsed time = 0.004 seconds 00:07:36.665 15:01:31 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:07:36.665 00:07:36.665 00:07:36.665 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.665 http://cunit.sourceforge.net/ 00:07:36.665 00:07:36.665 00:07:36.665 Suite: bdev_part 00:07:36.665 Test: part_test ...passed 00:07:36.665 Test: part_free_test ...[2024-07-23 15:01:31.942464] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 467591b3-b5cb-574d-819a-19b53219eef1 already exists 00:07:36.665 [2024-07-23 15:01:31.942772] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:467591b3-b5cb-574d-819a-19b53219eef1 alias for bdev test1 00:07:36.665 passed 00:07:36.665 Test: part_get_io_channel_test ...passed 00:07:36.665 Test: part_construct_ext ...passed 00:07:36.665 00:07:36.665 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.665 suites 1 1 n/a 0 0 00:07:36.665 tests 4 4 4 0 0 00:07:36.665 asserts 48 48 48 0 n/a 00:07:36.665 00:07:36.665 Elapsed time = 0.047 seconds 00:07:36.665 15:01:32 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:07:36.665 00:07:36.665 00:07:36.665 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.665 http://cunit.sourceforge.net/ 00:07:36.665 00:07:36.665 00:07:36.665 Suite: scsi_nvme_suite 00:07:36.665 Test: scsi_nvme_translate_test ...passed 00:07:36.665 00:07:36.665 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.665 suites 1 1 n/a 0 0 00:07:36.665 tests 1 1 1 0 0 00:07:36.665 asserts 104 104 104 0 n/a 00:07:36.665 00:07:36.665 Elapsed time = 0.000 seconds 00:07:36.665 15:01:32 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:07:36.665 00:07:36.665 00:07:36.665 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.665 http://cunit.sourceforge.net/ 00:07:36.665 00:07:36.665 00:07:36.665 Suite: lvol 00:07:36.665 Test: ut_lvs_init ...[2024-07-23 15:01:32.076615] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:07:36.665 passed 00:07:36.665 Test: ut_lvol_init ...[2024-07-23 15:01:32.077051] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:07:36.665 passed 00:07:36.665 Test: ut_lvol_snapshot ...passed 00:07:36.665 Test: ut_lvol_clone ...passed 00:07:36.665 Test: ut_lvs_destroy ...passed 00:07:36.665 Test: ut_lvs_unload ...passed 00:07:36.665 Test: ut_lvol_resize ...[2024-07-23 15:01:32.079077] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:07:36.665 passed 00:07:36.665 Test: ut_lvol_set_read_only ...passed 00:07:36.665 Test: ut_lvol_hotremove ...passed 00:07:36.665 Test: ut_vbdev_lvol_get_io_channel ...passed 00:07:36.665 Test: ut_vbdev_lvol_io_type_supported ...passed 00:07:36.665 Test: ut_lvol_read_write ...passed 
00:07:36.665 Test: ut_vbdev_lvol_submit_request ...passed 00:07:36.665 Test: ut_lvol_examine_config ...passed 00:07:36.665 Test: ut_lvol_examine_disk ...[2024-07-23 15:01:32.079760] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:07:36.665 passed 00:07:36.665 Test: ut_lvol_rename ...passed 00:07:36.665 Test: ut_bdev_finish ...[2024-07-23 15:01:32.081184] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:07:36.665 [2024-07-23 15:01:32.081260] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:07:36.665 passed 00:07:36.665 Test: ut_lvs_rename ...passed 00:07:36.665 Test: ut_lvol_seek ...passed 00:07:36.665 Test: ut_esnap_dev_create ...passed 00:07:36.665 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-23 15:01:32.082101] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:07:36.665 [2024-07-23 15:01:32.082154] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:07:36.665 [2024-07-23 15:01:32.082190] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:07:36.665 [2024-07-23 15:01:32.082348] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:07:36.665 [2024-07-23 15:01:32.082371] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:07:36.665 passed 00:07:36.665 Test: ut_lvol_shallow_copy ...[2024-07-23 15:01:32.082835] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:07:36.665 [2024-07-23 15:01:32.082871] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:07:36.665 passed 00:07:36.665 Test: ut_lvol_set_external_parent ...passed[2024-07-23 15:01:32.082986] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:07:36.665 00:07:36.665 00:07:36.665 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.665 suites 1 1 n/a 0 0 00:07:36.665 tests 23 23 23 0 0 00:07:36.665 asserts 770 770 770 0 n/a 00:07:36.665 00:07:36.665 Elapsed time = 0.006 seconds 00:07:36.924 15:01:32 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:07:36.924 00:07:36.924 00:07:36.924 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.924 http://cunit.sourceforge.net/ 00:07:36.924 00:07:36.924 00:07:36.924 Suite: zone_block 00:07:36.924 Test: test_zone_block_create ...passed 00:07:36.924 Test: test_zone_block_create_invalid ...[2024-07-23 15:01:32.172351] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:07:36.924 passed 00:07:36.924 Test: test_get_zone_info ...[2024-07-23 15:01:32.172841] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-23 15:01:32.173064] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:07:36.924 [2024-07-23 15:01:32.173139] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-23 15:01:32.173396] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 861:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:07:36.924 [2024-07-23 15:01:32.173435] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-23 15:01:32.173590] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 866:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:07:36.924 [2024-07-23 15:01:32.173618] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-23 15:01:32.174491] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.924 [2024-07-23 15:01:32.174629] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.924 [2024-07-23 15:01:32.174703] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.924 passed 00:07:36.924 Test: test_supported_io_types ...passed 00:07:36.924 Test: test_reset_zone ...[2024-07-23 15:01:32.176436] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.924 [2024-07-23 15:01:32.176666] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.924 passed 00:07:36.924 Test: test_open_zone ...[2024-07-23 15:01:32.177703] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 [2024-07-23 15:01:32.178678] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 [2024-07-23 15:01:32.178760] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 passed 00:07:36.925 Test: test_zone_write ...[2024-07-23 15:01:32.179667] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:07:36.925 [2024-07-23 15:01:32.179724] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:07:36.925 [2024-07-23 15:01:32.179801] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:07:36.925 [2024-07-23 15:01:32.179844] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 [2024-07-23 15:01:32.189827] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:07:36.925 [2024-07-23 15:01:32.189905] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 [2024-07-23 15:01:32.189990] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:07:36.925 [2024-07-23 15:01:32.190030] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 passed 00:07:36.925 Test: test_zone_read ...[2024-07-23 15:01:32.200031] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:07:36.925 [2024-07-23 15:01:32.200099] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 [2024-07-23 15:01:32.200664] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:07:36.925 [2024-07-23 15:01:32.200725] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 [2024-07-23 15:01:32.200968] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:07:36.925 [2024-07-23 15:01:32.201007] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 [2024-07-23 15:01:32.201639] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:07:36.925 [2024-07-23 15:01:32.201686] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 passed 00:07:36.925 Test: test_close_zone ...[2024-07-23 15:01:32.202184] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 [2024-07-23 15:01:32.202258] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 passed 00:07:36.925 Test: test_finish_zone ...[2024-07-23 15:01:32.202492] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 [2024-07-23 15:01:32.202544] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
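For reference, the zone_block_write rejections above encode the vbdev's write rules: a write must land inside an existing zone, must start exactly at that zone's current write pointer, and must not run past the zone capacity. A simplified standalone sketch of those checks (the struct and function names here are made up for illustration and are not SPDK's):

#include <stdbool.h>
#include <stdint.h>

struct zone_state {
    uint64_t start_lba;   /* first LBA of the zone */
    uint64_t write_ptr;   /* next LBA that may be written */
    uint64_t capacity;    /* writable blocks in the zone */
    uint64_t zone_size;   /* distance between zone starts */
};

/* True when a write of 'len' blocks at 'lba' would be accepted,
 * mirroring the three rejection messages seen above. */
static bool
zone_write_ok(const struct zone_state *z, uint64_t lba, uint64_t len)
{
    if (lba < z->start_lba || lba >= z->start_lba + z->zone_size) {
        return false;     /* "Trying to write to invalid zone" */
    }
    if (lba != z->write_ptr) {
        return false;     /* "invalid address (lba ..., wp ...)" */
    }
    if (lba + len > z->start_lba + z->capacity) {
        return false;     /* "Write exceeds zone capacity" */
    }
    return true;
}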
00:07:36.925 [2024-07-23 15:01:32.203249] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 [2024-07-23 15:01:32.203315] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 passed 00:07:36.925 Test: test_append_zone ...[2024-07-23 15:01:32.203738] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:07:36.925 [2024-07-23 15:01:32.203836] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 [2024-07-23 15:01:32.203933] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:07:36.925 [2024-07-23 15:01:32.203951] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:36.925 passed 00:07:36.925 00:07:36.925 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.925 suites 1 1 n/a 0 0 00:07:36.925 tests 11 11 11 0 0 00:07:36.925 asserts 3437 3437 3437 0 n/a 00:07:36.925 00:07:36.925 Elapsed time = 0.051 seconds 00:07:36.925 [2024-07-23 15:01:32.222472] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:07:36.925 [2024-07-23 15:01:32.222584] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
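For context, every CUnit banner, Suite:/Test: line, and Run Summary table in this section is emitted by a small per-component test binary that unittest.sh launches through run_test. A minimal sketch of such a binary using the standard CUnit Basic interface (the suite and test names here are placeholders, not taken from the SPDK tree):

#include <CUnit/Basic.h>

static void example_test(void)
{
    CU_ASSERT_EQUAL(1 + 1, 2);   /* each CU_ASSERT* feeds the "asserts" row of the summary */
}

int main(void)
{
    CU_pSuite suite;

    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }
    suite = CU_add_suite("example", NULL, NULL);          /* printed as "Suite: example" */
    if (suite == NULL ||
        CU_add_test(suite, "example_test", example_test) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }
    CU_basic_set_mode(CU_BRM_VERBOSE);                    /* prints the "Test: ... passed" lines */
    CU_basic_run_tests();                                 /* prints the Run Summary table */
    CU_cleanup_registry();
    return CU_get_error();
}

The run_test wrapper in autotest_common.sh then adds the timestamps and the START TEST/END TEST banners that frame each of these runs.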
00:07:36.925 15:01:32 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:07:36.925 00:07:36.925 00:07:36.925 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.925 http://cunit.sourceforge.net/ 00:07:36.925 00:07:36.925 00:07:36.925 Suite: bdev 00:07:36.925 Test: basic ...[2024-07-23 15:01:32.318994] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x592a7fdba601): Operation not permitted (rc=-1) 00:07:36.925 [2024-07-23 15:01:32.319518] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x5130000003c0 (0x592a7fdba5c0): Operation not permitted (rc=-1) 00:07:36.925 [2024-07-23 15:01:32.319571] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x592a7fdba601): Operation not permitted (rc=-1) 00:07:36.925 passed 00:07:37.183 Test: unregister_and_close ...passed 00:07:37.183 Test: unregister_and_close_different_threads ...passed 00:07:37.183 Test: basic_qos ...passed 00:07:37.183 Test: put_channel_during_reset ...passed 00:07:37.183 Test: aborted_reset ...passed 00:07:37.183 Test: aborted_reset_no_outstanding_io ...passed 00:07:37.183 Test: io_during_reset ...passed 00:07:37.441 Test: reset_completions ...passed 00:07:37.441 Test: io_during_qos_queue ...passed 00:07:37.441 Test: io_during_qos_reset ...passed 00:07:37.441 Test: enomem ...passed 00:07:37.441 Test: enomem_multi_bdev ...passed 00:07:37.441 Test: enomem_multi_bdev_unregister ...passed 00:07:37.699 Test: enomem_multi_io_target ...passed 00:07:37.699 Test: qos_dynamic_enable ...passed 00:07:37.699 Test: bdev_histograms_mt ...passed 00:07:37.699 Test: bdev_set_io_timeout_mt ...passed 00:07:37.699 Test: lock_lba_range_then_submit_io ...[2024-07-23 15:01:33.002242] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x5130000003c0 not unregistered 00:07:37.699 [2024-07-23 15:01:33.009635] thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x592a7fdba580 already registered (old:0x5130000003c0 new:0x513000000c80) 00:07:37.699 passed 00:07:37.699 Test: unregister_during_reset ...passed 00:07:37.699 Test: event_notify_and_close ...passed 00:07:37.958 Test: unregister_and_qos_poller ...passed 00:07:37.958 Suite: bdev_wrong_thread 00:07:37.958 Test: spdk_bdev_register_wt ...passed 00:07:37.958 Test: spdk_bdev_examine_wt ...[2024-07-23 15:01:33.139634] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8535:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x519000158b80 (0x519000158b80) 00:07:37.958 [2024-07-23 15:01:33.139910] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x519000158b80 (0x519000158b80) 00:07:37.958 passed 00:07:37.958 00:07:37.958 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.958 suites 2 2 n/a 0 0 00:07:37.958 tests 24 24 24 0 0 00:07:37.958 asserts 621 621 621 0 n/a 00:07:37.958 00:07:37.958 Elapsed time = 0.830 seconds 00:07:37.958 ************************************ 00:07:37.958 END TEST unittest_bdev 00:07:37.958 ************************************ 00:07:37.958 00:07:37.958 real 0m3.251s 00:07:37.958 user 0m1.446s 00:07:37.958 sys 0m1.800s 00:07:37.958 15:01:33 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.958 15:01:33 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:37.958 15:01:33 unittest -- common/autotest_common.sh@1142 -- # 
return 0 00:07:37.958 15:01:33 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:37.958 15:01:33 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:37.958 15:01:33 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:37.958 15:01:33 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:37.958 15:01:33 unittest -- unit/unittest.sh@230 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:07:37.958 15:01:33 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:37.958 15:01:33 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.958 15:01:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:37.958 ************************************ 00:07:37.958 START TEST unittest_bdev_raid5f 00:07:37.958 ************************************ 00:07:37.958 15:01:33 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:07:37.958 00:07:37.958 00:07:37.958 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.958 http://cunit.sourceforge.net/ 00:07:37.958 00:07:37.958 00:07:37.958 Suite: raid5f 00:07:37.958 Test: test_raid5f_start ...passed 00:07:38.892 Test: test_raid5f_submit_read_request ...passed 00:07:39.150 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:07:44.497 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:08:11.045 Test: test_raid5f_chunk_write_error ...passed 00:08:23.248 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:08:26.533 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:09:05.233 Test: test_raid5f_submit_read_request_degraded ...passed 00:09:05.233 00:09:05.233 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.233 suites 1 1 n/a 0 0 00:09:05.233 tests 8 8 8 0 0 00:09:05.233 asserts 518158 518158 518158 0 n/a 00:09:05.233 00:09:05.233 Elapsed time = 86.556 seconds 00:09:05.233 00:09:05.233 real 1m26.701s 00:09:05.233 user 1m21.491s 00:09:05.233 sys 0m5.171s 00:09:05.233 ************************************ 00:09:05.233 END TEST unittest_bdev_raid5f 00:09:05.233 ************************************ 00:09:05.233 15:02:59 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.233 15:02:59 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:09:05.233 15:02:59 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:05.233 15:02:59 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:09:05.233 15:02:59 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:05.233 15:02:59 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.233 15:02:59 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:05.233 ************************************ 00:09:05.233 START TEST unittest_blob_blobfs 00:09:05.233 ************************************ 00:09:05.233 15:03:00 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1123 -- # unittest_blob 00:09:05.233 15:03:00 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:09:05.233 15:03:00 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:09:05.233 00:09:05.233 00:09:05.233 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.233 http://cunit.sourceforge.net/ 00:09:05.233 00:09:05.233 00:09:05.233 Suite: blob_nocopy_noextent 00:09:05.233 Test: blob_init ...[2024-07-23 15:03:00.037359] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:05.233 passed 00:09:05.233 Test: blob_thin_provision ...passed 00:09:05.233 Test: blob_read_only ...passed 00:09:05.233 Test: bs_load ...[2024-07-23 15:03:00.162437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:05.233 passed 00:09:05.233 Test: bs_load_custom_cluster_size ...passed 00:09:05.233 Test: bs_load_after_failed_grow ...passed 00:09:05.233 Test: bs_cluster_sz ...[2024-07-23 15:03:00.191666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:05.233 [2024-07-23 15:03:00.192085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:09:05.233 [2024-07-23 15:03:00.192155] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:05.233 passed 00:09:05.233 Test: bs_resize_md ...passed 00:09:05.233 Test: bs_destroy ...passed 00:09:05.233 Test: bs_type ...passed 00:09:05.233 Test: bs_super_block ...passed 00:09:05.233 Test: bs_test_recover_cluster_count ...passed 00:09:05.233 Test: bs_grow_live ...passed 00:09:05.233 Test: bs_grow_live_no_space ...passed 00:09:05.233 Test: bs_test_grow ...passed 00:09:05.233 Test: blob_serialize_test ...passed 00:09:05.233 Test: super_block_crc ...passed 00:09:05.233 Test: blob_thin_prov_write_count_io ...passed 00:09:05.233 Test: blob_thin_prov_unmap_cluster ...passed 00:09:05.233 Test: bs_load_iter_test ...passed 00:09:05.233 Test: blob_relations ...[2024-07-23 15:03:00.395585] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:05.233 [2024-07-23 15:03:00.395914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:05.234 [2024-07-23 15:03:00.397101] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:05.234 [2024-07-23 15:03:00.397287] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:05.234 passed 00:09:05.234 Test: blob_relations2 ...[2024-07-23 15:03:00.412273] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:05.234 [2024-07-23 15:03:00.412551] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:05.234 [2024-07-23 15:03:00.412681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:05.234 [2024-07-23 15:03:00.412738] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:05.234 [2024-07-23 15:03:00.414351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:05.234 [2024-07-23 15:03:00.414538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:05.234 [2024-07-23 15:03:00.415118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:05.234 [2024-07-23 15:03:00.415279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:05.234 passed 00:09:05.234 Test: blob_relations3 ...passed 00:09:05.234 Test: blobstore_clean_power_failure ...passed 00:09:05.234 Test: blob_delete_snapshot_power_failure ...[2024-07-23 15:03:00.574484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:05.234 [2024-07-23 15:03:00.587250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:05.234 [2024-07-23 15:03:00.587523] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:05.234 [2024-07-23 15:03:00.587562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:05.234 [2024-07-23 15:03:00.600154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:05.234 [2024-07-23 15:03:00.600231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:05.234 [2024-07-23 15:03:00.600256] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:05.234 [2024-07-23 15:03:00.600283] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:05.234 [2024-07-23 15:03:00.612887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:05.234 [2024-07-23 15:03:00.612995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:05.234 [2024-07-23 15:03:00.625657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:05.234 [2024-07-23 15:03:00.625776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:05.234 [2024-07-23 15:03:00.638662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:05.234 [2024-07-23 15:03:00.638762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:05.497 passed 00:09:05.497 Test: blob_create_snapshot_power_failure ...[2024-07-23 15:03:00.676485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:05.497 [2024-07-23 15:03:00.701089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: 
Metadata page 1 read failed for blobid 0x100000001: -5 00:09:05.497 [2024-07-23 15:03:00.713605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:05.497 passed 00:09:05.497 Test: blob_io_unit ...passed 00:09:05.497 Test: blob_io_unit_compatibility ...passed 00:09:05.497 Test: blob_ext_md_pages ...passed 00:09:05.497 Test: blob_esnap_io_4096_4096 ...passed 00:09:05.497 Test: blob_esnap_io_512_512 ...passed 00:09:05.497 Test: blob_esnap_io_4096_512 ...passed 00:09:05.497 Test: blob_esnap_io_512_4096 ...passed 00:09:05.754 Test: blob_esnap_clone_resize ...passed 00:09:05.754 Suite: blob_bs_nocopy_noextent 00:09:05.754 Test: blob_open ...passed 00:09:05.754 Test: blob_create ...[2024-07-23 15:03:00.997238] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:05.755 passed 00:09:05.755 Test: blob_create_loop ...passed 00:09:05.755 Test: blob_create_fail ...[2024-07-23 15:03:01.094507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:05.755 passed 00:09:05.755 Test: blob_create_internal ...passed 00:09:05.755 Test: blob_create_zero_extent ...passed 00:09:06.013 Test: blob_snapshot ...passed 00:09:06.013 Test: blob_clone ...passed 00:09:06.013 Test: blob_inflate ...[2024-07-23 15:03:01.282318] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:06.013 passed 00:09:06.013 Test: blob_delete ...passed 00:09:06.013 Test: blob_resize_test ...[2024-07-23 15:03:01.348803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:06.013 passed 00:09:06.013 Test: blob_resize_thin_test ...passed 00:09:06.013 Test: channel_ops ...passed 00:09:06.271 Test: blob_super ...passed 00:09:06.271 Test: blob_rw_verify_iov ...passed 00:09:06.271 Test: blob_unmap ...passed 00:09:06.271 Test: blob_iter ...passed 00:09:06.271 Test: blob_parse_md ...passed 00:09:06.271 Test: bs_load_pending_removal ...passed 00:09:06.271 Test: bs_unload ...[2024-07-23 15:03:01.653575] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:06.271 passed 00:09:06.271 Test: bs_usable_clusters ...passed 00:09:06.530 Test: blob_crc ...[2024-07-23 15:03:01.719955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:06.530 [2024-07-23 15:03:01.720064] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:06.530 passed 00:09:06.530 Test: blob_flags ...passed 00:09:06.530 Test: bs_version ...passed 00:09:06.530 Test: blob_set_xattrs_test ...[2024-07-23 15:03:01.822058] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:06.530 [2024-07-23 15:03:01.822150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:06.530 passed 00:09:06.788 Test: blob_thin_prov_alloc ...passed 00:09:06.788 Test: blob_insert_cluster_msg_test ...passed 00:09:06.788 Test: 
blob_thin_prov_rw ...passed 00:09:06.788 Test: blob_thin_prov_rle ...passed 00:09:06.788 Test: blob_thin_prov_rw_iov ...passed 00:09:06.788 Test: blob_snapshot_rw ...passed 00:09:06.788 Test: blob_snapshot_rw_iov ...passed 00:09:07.046 Test: blob_inflate_rw ...passed 00:09:07.304 Test: blob_snapshot_freeze_io ...passed 00:09:07.304 Test: blob_operation_split_rw ...passed 00:09:07.562 Test: blob_operation_split_rw_iov ...passed 00:09:07.562 Test: blob_simultaneous_operations ...[2024-07-23 15:03:02.824508] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:07.562 [2024-07-23 15:03:02.824598] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:07.562 [2024-07-23 15:03:02.826034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:07.562 [2024-07-23 15:03:02.826080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:07.562 [2024-07-23 15:03:02.840169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:07.562 [2024-07-23 15:03:02.840248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:07.562 [2024-07-23 15:03:02.840371] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:07.562 [2024-07-23 15:03:02.840392] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:07.562 passed 00:09:07.562 Test: blob_persist_test ...passed 00:09:07.562 Test: blob_decouple_snapshot ...passed 00:09:07.820 Test: blob_seek_io_unit ...passed 00:09:07.820 Test: blob_nested_freezes ...passed 00:09:07.820 Test: blob_clone_resize ...passed 00:09:07.820 Test: blob_shallow_copy ...[2024-07-23 15:03:03.129274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:09:07.820 [2024-07-23 15:03:03.129576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:09:07.820 [2024-07-23 15:03:03.129813] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:09:07.820 passed 00:09:07.820 Suite: blob_blob_nocopy_noextent 00:09:07.820 Test: blob_write ...passed 00:09:07.820 Test: blob_read ...passed 00:09:08.079 Test: blob_rw_verify ...passed 00:09:08.079 Test: blob_rw_verify_iov_nomem ...passed 00:09:08.079 Test: blob_rw_iov_read_only ...passed 00:09:08.079 Test: blob_xattr ...passed 00:09:08.079 Test: blob_dirty_shutdown ...passed 00:09:08.079 Test: blob_is_degraded ...passed 00:09:08.079 Suite: blob_esnap_bs_nocopy_noextent 00:09:08.079 Test: blob_esnap_create ...passed 00:09:08.079 Test: blob_esnap_thread_add_remove ...passed 00:09:08.337 Test: blob_esnap_clone_snapshot ...passed 00:09:08.337 Test: blob_esnap_clone_inflate ...passed 00:09:08.337 Test: blob_esnap_clone_decouple ...passed 00:09:08.337 Test: blob_esnap_clone_reload ...passed 00:09:08.337 Test: blob_esnap_hotplug ...passed 00:09:08.337 Test: 
blob_set_parent ...[2024-07-23 15:03:03.684228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:09:08.337 [2024-07-23 15:03:03.684331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:09:08.337 [2024-07-23 15:03:03.684523] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:09:08.337 [2024-07-23 15:03:03.684555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:09:08.337 [2024-07-23 15:03:03.685147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:08.337 passed 00:09:08.337 Test: blob_set_external_parent ...[2024-07-23 15:03:03.718774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:09:08.337 [2024-07-23 15:03:03.718867] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:09:08.337 [2024-07-23 15:03:03.718913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:09:08.337 [2024-07-23 15:03:03.719510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:08.337 passed 00:09:08.337 Suite: blob_nocopy_extent 00:09:08.337 Test: blob_init ...[2024-07-23 15:03:03.731114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:08.337 passed 00:09:08.337 Test: blob_thin_provision ...passed 00:09:08.595 Test: blob_read_only ...passed 00:09:08.595 Test: bs_load ...[2024-07-23 15:03:03.777288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:08.595 passed 00:09:08.595 Test: bs_load_custom_cluster_size ...passed 00:09:08.595 Test: bs_load_after_failed_grow ...passed 00:09:08.595 Test: bs_cluster_sz ...[2024-07-23 15:03:03.803513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:08.595 [2024-07-23 15:03:03.803822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:09:08.595 [2024-07-23 15:03:03.803887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:08.595 passed 00:09:08.595 Test: bs_resize_md ...passed 00:09:08.595 Test: bs_destroy ...passed 00:09:08.595 Test: bs_type ...passed 00:09:08.595 Test: bs_super_block ...passed 00:09:08.595 Test: bs_test_recover_cluster_count ...passed 00:09:08.595 Test: bs_grow_live ...passed 00:09:08.595 Test: bs_grow_live_no_space ...passed 00:09:08.595 Test: bs_test_grow ...passed 00:09:08.595 Test: blob_serialize_test ...passed 00:09:08.595 Test: super_block_crc ...passed 00:09:08.595 Test: blob_thin_prov_write_count_io ...passed 00:09:08.595 Test: blob_thin_prov_unmap_cluster ...passed 00:09:08.595 Test: bs_load_iter_test ...passed 00:09:08.595 Test: blob_relations ...[2024-07-23 15:03:03.986825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:08.595 [2024-07-23 15:03:03.986915] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:08.595 [2024-07-23 15:03:03.987913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:08.595 [2024-07-23 15:03:03.987950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:08.595 passed 00:09:08.595 Test: blob_relations2 ...[2024-07-23 15:03:04.002310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:08.595 [2024-07-23 15:03:04.002397] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:08.595 [2024-07-23 15:03:04.002426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:08.595 [2024-07-23 15:03:04.002441] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:08.595 [2024-07-23 15:03:04.003980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:08.595 [2024-07-23 15:03:04.004027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:08.595 [2024-07-23 15:03:04.004463] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:08.595 [2024-07-23 15:03:04.004496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:08.595 passed 00:09:08.595 Test: blob_relations3 ...passed 00:09:08.853 Test: blobstore_clean_power_failure ...passed 00:09:08.853 Test: blob_delete_snapshot_power_failure ...[2024-07-23 15:03:04.160412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:08.853 [2024-07-23 15:03:04.172890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:08.853 [2024-07-23 15:03:04.185358] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:08.853 [2024-07-23 15:03:04.185437] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:08.853 [2024-07-23 15:03:04.185462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:08.853 [2024-07-23 15:03:04.198286] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:08.853 [2024-07-23 15:03:04.198359] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:08.853 [2024-07-23 15:03:04.198384] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:08.853 [2024-07-23 15:03:04.198410] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:08.853 [2024-07-23 15:03:04.211069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:08.853 [2024-07-23 15:03:04.211138] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:08.853 [2024-07-23 15:03:04.211161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:08.853 [2024-07-23 15:03:04.211194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:08.853 [2024-07-23 15:03:04.223976] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:08.853 [2024-07-23 15:03:04.224074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:08.853 [2024-07-23 15:03:04.236871] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:08.853 [2024-07-23 15:03:04.236981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:08.853 [2024-07-23 15:03:04.250066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:08.853 [2024-07-23 15:03:04.250150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:08.853 passed 00:09:09.112 Test: blob_create_snapshot_power_failure ...[2024-07-23 15:03:04.288254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:09.112 [2024-07-23 15:03:04.300548] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:09.112 [2024-07-23 15:03:04.324866] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:09.112 [2024-07-23 15:03:04.337376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:09.112 passed 00:09:09.112 Test: blob_io_unit ...passed 00:09:09.112 Test: blob_io_unit_compatibility ...passed 00:09:09.112 Test: blob_ext_md_pages ...passed 00:09:09.112 Test: blob_esnap_io_4096_4096 ...passed 00:09:09.112 Test: blob_esnap_io_512_512 ...passed 00:09:09.112 Test: blob_esnap_io_4096_512 ...passed 00:09:09.112 Test: 
blob_esnap_io_512_4096 ...passed 00:09:09.370 Test: blob_esnap_clone_resize ...passed 00:09:09.370 Suite: blob_bs_nocopy_extent 00:09:09.370 Test: blob_open ...passed 00:09:09.370 Test: blob_create ...[2024-07-23 15:03:04.616235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:09.370 passed 00:09:09.370 Test: blob_create_loop ...passed 00:09:09.370 Test: blob_create_fail ...[2024-07-23 15:03:04.720798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:09.370 passed 00:09:09.370 Test: blob_create_internal ...passed 00:09:09.370 Test: blob_create_zero_extent ...passed 00:09:09.628 Test: blob_snapshot ...passed 00:09:09.628 Test: blob_clone ...passed 00:09:09.628 Test: blob_inflate ...[2024-07-23 15:03:04.906831] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:09.628 passed 00:09:09.628 Test: blob_delete ...passed 00:09:09.628 Test: blob_resize_test ...[2024-07-23 15:03:04.973268] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:09.628 passed 00:09:09.628 Test: blob_resize_thin_test ...passed 00:09:09.886 Test: channel_ops ...passed 00:09:09.886 Test: blob_super ...passed 00:09:09.886 Test: blob_rw_verify_iov ...passed 00:09:09.886 Test: blob_unmap ...passed 00:09:09.886 Test: blob_iter ...passed 00:09:09.886 Test: blob_parse_md ...passed 00:09:09.886 Test: bs_load_pending_removal ...passed 00:09:09.886 Test: bs_unload ...[2024-07-23 15:03:05.277239] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:09.886 passed 00:09:10.144 Test: bs_usable_clusters ...passed 00:09:10.144 Test: blob_crc ...[2024-07-23 15:03:05.344184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:10.144 [2024-07-23 15:03:05.344277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:10.144 passed 00:09:10.144 Test: blob_flags ...passed 00:09:10.144 Test: bs_version ...passed 00:09:10.144 Test: blob_set_xattrs_test ...[2024-07-23 15:03:05.444866] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:10.144 [2024-07-23 15:03:05.444956] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:10.144 passed 00:09:10.402 Test: blob_thin_prov_alloc ...passed 00:09:10.402 Test: blob_insert_cluster_msg_test ...passed 00:09:10.402 Test: blob_thin_prov_rw ...passed 00:09:10.402 Test: blob_thin_prov_rle ...passed 00:09:10.402 Test: blob_thin_prov_rw_iov ...passed 00:09:10.402 Test: blob_snapshot_rw ...passed 00:09:10.402 Test: blob_snapshot_rw_iov ...passed 00:09:10.660 Test: blob_inflate_rw ...passed 00:09:10.917 Test: blob_snapshot_freeze_io ...passed 00:09:10.917 Test: blob_operation_split_rw ...passed 00:09:11.175 Test: blob_operation_split_rw_iov ...passed 00:09:11.175 Test: blob_simultaneous_operations ...[2024-07-23 15:03:06.434096] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:11.175 [2024-07-23 15:03:06.434189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:11.175 [2024-07-23 15:03:06.436136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:11.175 [2024-07-23 15:03:06.436183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:11.175 [2024-07-23 15:03:06.449604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:11.175 [2024-07-23 15:03:06.449684] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:11.175 [2024-07-23 15:03:06.449810] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:11.175 [2024-07-23 15:03:06.449827] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:11.175 passed 00:09:11.175 Test: blob_persist_test ...passed 00:09:11.175 Test: blob_decouple_snapshot ...passed 00:09:11.433 Test: blob_seek_io_unit ...passed 00:09:11.433 Test: blob_nested_freezes ...passed 00:09:11.433 Test: blob_clone_resize ...passed 00:09:11.433 Test: blob_shallow_copy ...[2024-07-23 15:03:06.732970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:09:11.433 [2024-07-23 15:03:06.733257] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:09:11.433 [2024-07-23 15:03:06.733420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:09:11.433 passed 00:09:11.433 Suite: blob_blob_nocopy_extent 00:09:11.433 Test: blob_write ...passed 00:09:11.433 Test: blob_read ...passed 00:09:11.433 Test: blob_rw_verify ...passed 00:09:11.692 Test: blob_rw_verify_iov_nomem ...passed 00:09:11.692 Test: blob_rw_iov_read_only ...passed 00:09:11.692 Test: blob_xattr ...passed 00:09:11.692 Test: blob_dirty_shutdown ...passed 00:09:11.692 Test: blob_is_degraded ...passed 00:09:11.692 Suite: blob_esnap_bs_nocopy_extent 00:09:11.692 Test: blob_esnap_create ...passed 00:09:11.692 Test: blob_esnap_thread_add_remove ...passed 00:09:11.692 Test: blob_esnap_clone_snapshot ...passed 00:09:11.950 Test: blob_esnap_clone_inflate ...passed 00:09:11.950 Test: blob_esnap_clone_decouple ...passed 00:09:11.950 Test: blob_esnap_clone_reload ...passed 00:09:11.950 Test: blob_esnap_hotplug ...passed 00:09:11.950 Test: blob_set_parent ...[2024-07-23 15:03:07.274019] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:09:11.950 [2024-07-23 15:03:07.274112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:09:11.950 [2024-07-23 15:03:07.274218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:09:11.950 
[2024-07-23 15:03:07.274244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:09:11.950 [2024-07-23 15:03:07.274689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:11.950 passed 00:09:11.950 Test: blob_set_external_parent ...[2024-07-23 15:03:07.307647] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:09:11.950 [2024-07-23 15:03:07.307737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:09:11.950 [2024-07-23 15:03:07.307758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:09:11.950 [2024-07-23 15:03:07.308116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:11.950 passed 00:09:11.950 Suite: blob_copy_noextent 00:09:11.950 Test: blob_init ...[2024-07-23 15:03:07.319433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:11.950 passed 00:09:11.950 Test: blob_thin_provision ...passed 00:09:11.950 Test: blob_read_only ...passed 00:09:11.950 Test: bs_load ...[2024-07-23 15:03:07.364627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:11.950 passed 00:09:11.950 Test: bs_load_custom_cluster_size ...passed 00:09:12.209 Test: bs_load_after_failed_grow ...passed 00:09:12.209 Test: bs_cluster_sz ...[2024-07-23 15:03:07.388964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:12.209 [2024-07-23 15:03:07.389170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:09:12.209 [2024-07-23 15:03:07.389223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:12.209 passed 00:09:12.209 Test: bs_resize_md ...passed 00:09:12.209 Test: bs_destroy ...passed 00:09:12.209 Test: bs_type ...passed 00:09:12.209 Test: bs_super_block ...passed 00:09:12.209 Test: bs_test_recover_cluster_count ...passed 00:09:12.209 Test: bs_grow_live ...passed 00:09:12.209 Test: bs_grow_live_no_space ...passed 00:09:12.209 Test: bs_test_grow ...passed 00:09:12.209 Test: blob_serialize_test ...passed 00:09:12.209 Test: super_block_crc ...passed 00:09:12.209 Test: blob_thin_prov_write_count_io ...passed 00:09:12.209 Test: blob_thin_prov_unmap_cluster ...passed 00:09:12.209 Test: bs_load_iter_test ...passed 00:09:12.209 Test: blob_relations ...[2024-07-23 15:03:07.580141] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:12.209 [2024-07-23 15:03:07.580241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:12.209 [2024-07-23 15:03:07.580849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:12.209 [2024-07-23 15:03:07.580880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:12.209 passed 00:09:12.209 Test: blob_relations2 ...[2024-07-23 15:03:07.594610] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:12.209 [2024-07-23 15:03:07.594695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:12.209 [2024-07-23 15:03:07.594720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:12.209 [2024-07-23 15:03:07.594734] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:12.209 [2024-07-23 15:03:07.595666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:12.209 [2024-07-23 15:03:07.595713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:12.209 [2024-07-23 15:03:07.596028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:12.209 [2024-07-23 15:03:07.596062] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:12.209 passed 00:09:12.209 Test: blob_relations3 ...passed 00:09:12.474 Test: blobstore_clean_power_failure ...passed 00:09:12.474 Test: blob_delete_snapshot_power_failure ...[2024-07-23 15:03:07.749717] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:12.474 [2024-07-23 15:03:07.761700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:12.474 [2024-07-23 15:03:07.761808] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:12.474 [2024-07-23 15:03:07.761833] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:12.474 [2024-07-23 15:03:07.773819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:12.474 [2024-07-23 15:03:07.773911] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:12.474 [2024-07-23 15:03:07.773931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:12.475 [2024-07-23 15:03:07.773952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:12.475 [2024-07-23 15:03:07.785889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:12.475 [2024-07-23 15:03:07.785994] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:12.475 [2024-07-23 15:03:07.797987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:12.475 [2024-07-23 15:03:07.798097] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:12.475 [2024-07-23 15:03:07.810318] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:12.475 [2024-07-23 15:03:07.810425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:12.475 passed 00:09:12.475 Test: blob_create_snapshot_power_failure ...[2024-07-23 15:03:07.846289] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:12.475 [2024-07-23 15:03:07.869681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:09:12.475 [2024-07-23 15:03:07.881750] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:12.748 passed 00:09:12.748 Test: blob_io_unit ...passed 00:09:12.748 Test: blob_io_unit_compatibility ...passed 00:09:12.748 Test: blob_ext_md_pages ...passed 00:09:12.748 Test: blob_esnap_io_4096_4096 ...passed 00:09:12.748 Test: blob_esnap_io_512_512 ...passed 00:09:12.748 Test: blob_esnap_io_4096_512 ...passed 00:09:12.748 Test: blob_esnap_io_512_4096 ...passed 00:09:12.748 Test: blob_esnap_clone_resize ...passed 00:09:12.748 Suite: blob_bs_copy_noextent 00:09:12.748 Test: blob_open ...passed 00:09:12.748 Test: blob_create ...[2024-07-23 15:03:08.154084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:12.748 passed 00:09:13.007 Test: blob_create_loop ...passed 00:09:13.007 Test: blob_create_fail ...[2024-07-23 15:03:08.249976] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:13.007 passed 00:09:13.007 Test: blob_create_internal ...passed 00:09:13.007 Test: blob_create_zero_extent ...passed 00:09:13.007 Test: blob_snapshot ...passed 00:09:13.007 Test: blob_clone ...passed 00:09:13.007 Test: blob_inflate 
...[2024-07-23 15:03:08.419923] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:13.007 passed 00:09:13.265 Test: blob_delete ...passed 00:09:13.265 Test: blob_resize_test ...[2024-07-23 15:03:08.484585] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:13.265 passed 00:09:13.265 Test: blob_resize_thin_test ...passed 00:09:13.265 Test: channel_ops ...passed 00:09:13.265 Test: blob_super ...passed 00:09:13.265 Test: blob_rw_verify_iov ...passed 00:09:13.265 Test: blob_unmap ...passed 00:09:13.524 Test: blob_iter ...passed 00:09:13.524 Test: blob_parse_md ...passed 00:09:13.524 Test: bs_load_pending_removal ...passed 00:09:13.524 Test: bs_unload ...[2024-07-23 15:03:08.785585] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:13.524 passed 00:09:13.524 Test: bs_usable_clusters ...passed 00:09:13.524 Test: blob_crc ...[2024-07-23 15:03:08.851568] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:13.524 [2024-07-23 15:03:08.851709] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:13.524 passed 00:09:13.524 Test: blob_flags ...passed 00:09:13.524 Test: bs_version ...passed 00:09:13.782 Test: blob_set_xattrs_test ...[2024-07-23 15:03:08.952669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:13.782 [2024-07-23 15:03:08.952765] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:13.782 passed 00:09:13.782 Test: blob_thin_prov_alloc ...passed 00:09:13.782 Test: blob_insert_cluster_msg_test ...passed 00:09:13.782 Test: blob_thin_prov_rw ...passed 00:09:14.041 Test: blob_thin_prov_rle ...passed 00:09:14.041 Test: blob_thin_prov_rw_iov ...passed 00:09:14.041 Test: blob_snapshot_rw ...passed 00:09:14.041 Test: blob_snapshot_rw_iov ...passed 00:09:14.299 Test: blob_inflate_rw ...passed 00:09:14.299 Test: blob_snapshot_freeze_io ...passed 00:09:14.558 Test: blob_operation_split_rw ...passed 00:09:14.558 Test: blob_operation_split_rw_iov ...passed 00:09:14.558 Test: blob_simultaneous_operations ...[2024-07-23 15:03:09.919633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:14.558 [2024-07-23 15:03:09.919708] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:14.558 [2024-07-23 15:03:09.920173] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:14.558 [2024-07-23 15:03:09.920203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:14.558 [2024-07-23 15:03:09.923063] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:14.558 [2024-07-23 15:03:09.923107] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:14.558 [2024-07-23 15:03:09.923201] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:14.558 [2024-07-23 15:03:09.923217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:14.558 passed 00:09:14.558 Test: blob_persist_test ...passed 00:09:14.816 Test: blob_decouple_snapshot ...passed 00:09:14.816 Test: blob_seek_io_unit ...passed 00:09:14.816 Test: blob_nested_freezes ...passed 00:09:14.816 Test: blob_clone_resize ...passed 00:09:14.816 Test: blob_shallow_copy ...[2024-07-23 15:03:10.167193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:09:14.816 [2024-07-23 15:03:10.167487] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:09:14.816 [2024-07-23 15:03:10.167657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:09:14.816 passed 00:09:14.816 Suite: blob_blob_copy_noextent 00:09:14.816 Test: blob_write ...passed 00:09:15.075 Test: blob_read ...passed 00:09:15.075 Test: blob_rw_verify ...passed 00:09:15.075 Test: blob_rw_verify_iov_nomem ...passed 00:09:15.075 Test: blob_rw_iov_read_only ...passed 00:09:15.075 Test: blob_xattr ...passed 00:09:15.075 Test: blob_dirty_shutdown ...passed 00:09:15.075 Test: blob_is_degraded ...passed 00:09:15.075 Suite: blob_esnap_bs_copy_noextent 00:09:15.075 Test: blob_esnap_create ...passed 00:09:15.333 Test: blob_esnap_thread_add_remove ...passed 00:09:15.333 Test: blob_esnap_clone_snapshot ...passed 00:09:15.333 Test: blob_esnap_clone_inflate ...passed 00:09:15.333 Test: blob_esnap_clone_decouple ...passed 00:09:15.333 Test: blob_esnap_clone_reload ...passed 00:09:15.333 Test: blob_esnap_hotplug ...passed 00:09:15.333 Test: blob_set_parent ...[2024-07-23 15:03:10.705325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:09:15.333 [2024-07-23 15:03:10.705420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:09:15.333 [2024-07-23 15:03:10.705519] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:09:15.333 [2024-07-23 15:03:10.705544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:09:15.334 [2024-07-23 15:03:10.705975] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:15.334 passed 00:09:15.334 Test: blob_set_external_parent ...[2024-07-23 15:03:10.739269] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:09:15.334 [2024-07-23 15:03:10.739361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:09:15.334 [2024-07-23 15:03:10.739382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:09:15.334 [2024-07-23 15:03:10.739730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:15.334 passed 00:09:15.334 Suite: blob_copy_extent 00:09:15.334 Test: blob_init ...[2024-07-23 15:03:10.751062] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:15.592 passed 00:09:15.592 Test: blob_thin_provision ...passed 00:09:15.592 Test: blob_read_only ...passed 00:09:15.592 Test: bs_load ...[2024-07-23 15:03:10.796177] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:15.592 passed 00:09:15.592 Test: bs_load_custom_cluster_size ...passed 00:09:15.592 Test: bs_load_after_failed_grow ...passed 00:09:15.592 Test: bs_cluster_sz ...[2024-07-23 15:03:10.820418] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:15.592 [2024-07-23 15:03:10.820625] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:09:15.592 [2024-07-23 15:03:10.820662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:15.592 passed 00:09:15.592 Test: bs_resize_md ...passed 00:09:15.592 Test: bs_destroy ...passed 00:09:15.592 Test: bs_type ...passed 00:09:15.592 Test: bs_super_block ...passed 00:09:15.592 Test: bs_test_recover_cluster_count ...passed 00:09:15.592 Test: bs_grow_live ...passed 00:09:15.592 Test: bs_grow_live_no_space ...passed 00:09:15.592 Test: bs_test_grow ...passed 00:09:15.592 Test: blob_serialize_test ...passed 00:09:15.592 Test: super_block_crc ...passed 00:09:15.592 Test: blob_thin_prov_write_count_io ...passed 00:09:15.592 Test: blob_thin_prov_unmap_cluster ...passed 00:09:15.592 Test: bs_load_iter_test ...passed 00:09:15.592 Test: blob_relations ...[2024-07-23 15:03:10.998383] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:15.592 [2024-07-23 15:03:10.998486] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:15.592 [2024-07-23 15:03:10.999127] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:15.592 [2024-07-23 15:03:10.999154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:15.592 passed 00:09:15.592 Test: blob_relations2 ...[2024-07-23 15:03:11.013016] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:15.592 [2024-07-23 15:03:11.013107] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:15.592 [2024-07-23 15:03:11.013132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:15.592 [2024-07-23 15:03:11.013149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:15.592 [2024-07-23 
15:03:11.014147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:15.592 [2024-07-23 15:03:11.014193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:15.593 [2024-07-23 15:03:11.014505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:15.593 [2024-07-23 15:03:11.014535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:15.593 passed 00:09:15.851 Test: blob_relations3 ...passed 00:09:15.851 Test: blobstore_clean_power_failure ...passed 00:09:15.851 Test: blob_delete_snapshot_power_failure ...[2024-07-23 15:03:11.168346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:15.851 [2024-07-23 15:03:11.180303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:15.851 [2024-07-23 15:03:11.192271] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:15.851 [2024-07-23 15:03:11.192363] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:15.851 [2024-07-23 15:03:11.192385] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:15.851 [2024-07-23 15:03:11.204340] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:15.851 [2024-07-23 15:03:11.204423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:15.851 [2024-07-23 15:03:11.204442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:15.851 [2024-07-23 15:03:11.204479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:15.851 [2024-07-23 15:03:11.216324] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:15.851 [2024-07-23 15:03:11.216408] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:15.851 [2024-07-23 15:03:11.216426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:15.851 [2024-07-23 15:03:11.216463] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:15.851 [2024-07-23 15:03:11.228365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:15.851 [2024-07-23 15:03:11.228463] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:15.851 [2024-07-23 15:03:11.240363] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:15.851 [2024-07-23 15:03:11.240487] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:15.851 [2024-07-23 15:03:11.252537] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:15.851 [2024-07-23 15:03:11.252633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:15.851 passed 00:09:16.110 Test: blob_create_snapshot_power_failure ...[2024-07-23 15:03:11.288179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:16.110 [2024-07-23 15:03:11.299688] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:16.110 [2024-07-23 15:03:11.322755] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:16.110 [2024-07-23 15:03:11.334545] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:16.110 passed 00:09:16.110 Test: blob_io_unit ...passed 00:09:16.110 Test: blob_io_unit_compatibility ...passed 00:09:16.110 Test: blob_ext_md_pages ...passed 00:09:16.110 Test: blob_esnap_io_4096_4096 ...passed 00:09:16.110 Test: blob_esnap_io_512_512 ...passed 00:09:16.110 Test: blob_esnap_io_4096_512 ...passed 00:09:16.110 Test: blob_esnap_io_512_4096 ...passed 00:09:16.368 Test: blob_esnap_clone_resize ...passed 00:09:16.369 Suite: blob_bs_copy_extent 00:09:16.369 Test: blob_open ...passed 00:09:16.369 Test: blob_create ...[2024-07-23 15:03:11.602600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:16.369 passed 00:09:16.369 Test: blob_create_loop ...passed 00:09:16.369 Test: blob_create_fail ...[2024-07-23 15:03:11.704707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:16.369 passed 00:09:16.369 Test: blob_create_internal ...passed 00:09:16.369 Test: blob_create_zero_extent ...passed 00:09:16.627 Test: blob_snapshot ...passed 00:09:16.627 Test: blob_clone ...passed 00:09:16.627 Test: blob_inflate ...[2024-07-23 15:03:11.874361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:09:16.627 passed 00:09:16.627 Test: blob_delete ...passed 00:09:16.627 Test: blob_resize_test ...[2024-07-23 15:03:11.938656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:16.627 passed 00:09:16.627 Test: blob_resize_thin_test ...passed 00:09:16.627 Test: channel_ops ...passed 00:09:16.627 Test: blob_super ...passed 00:09:16.886 Test: blob_rw_verify_iov ...passed 00:09:16.886 Test: blob_unmap ...passed 00:09:16.886 Test: blob_iter ...passed 00:09:16.886 Test: blob_parse_md ...passed 00:09:16.886 Test: bs_load_pending_removal ...passed 00:09:16.886 Test: bs_unload ...[2024-07-23 15:03:12.235830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:16.887 passed 00:09:16.887 Test: bs_usable_clusters ...passed 00:09:16.887 Test: blob_crc ...[2024-07-23 15:03:12.300894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:16.887 [2024-07-23 15:03:12.301004] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:16.887 passed 00:09:17.145 Test: blob_flags ...passed 00:09:17.145 Test: bs_version ...passed 00:09:17.145 Test: blob_set_xattrs_test ...[2024-07-23 15:03:12.400241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:17.145 [2024-07-23 15:03:12.400338] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:17.145 passed 00:09:17.145 Test: blob_thin_prov_alloc ...passed 00:09:17.404 Test: blob_insert_cluster_msg_test ...passed 00:09:17.404 Test: blob_thin_prov_rw ...passed 00:09:17.404 Test: blob_thin_prov_rle ...passed 00:09:17.404 Test: blob_thin_prov_rw_iov ...passed 00:09:17.404 Test: blob_snapshot_rw ...passed 00:09:17.404 Test: blob_snapshot_rw_iov ...passed 00:09:17.662 Test: blob_inflate_rw ...passed 00:09:17.662 Test: blob_snapshot_freeze_io ...passed 00:09:17.920 Test: blob_operation_split_rw ...passed 00:09:17.921 Test: blob_operation_split_rw_iov ...passed 00:09:17.921 Test: blob_simultaneous_operations ...[2024-07-23 15:03:13.337508] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:17.921 [2024-07-23 15:03:13.337622] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:17.921 [2024-07-23 15:03:13.338123] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:17.921 [2024-07-23 15:03:13.338165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:17.921 [2024-07-23 15:03:13.340861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:17.921 [2024-07-23 15:03:13.340910] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:17.921 [2024-07-23 15:03:13.341005] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:17.921 [2024-07-23 15:03:13.341021] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:18.178 passed 00:09:18.178 Test: blob_persist_test ...passed 00:09:18.178 Test: blob_decouple_snapshot ...passed 00:09:18.178 Test: blob_seek_io_unit ...passed 00:09:18.178 Test: blob_nested_freezes ...passed 00:09:18.178 Test: blob_clone_resize ...passed 00:09:18.178 Test: blob_shallow_copy ...[2024-07-23 15:03:13.578861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:09:18.178 [2024-07-23 15:03:13.579157] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:09:18.178 [2024-07-23 15:03:13.579329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:09:18.178 passed 00:09:18.178 Suite: blob_blob_copy_extent 00:09:18.436 Test: blob_write ...passed 00:09:18.436 Test: blob_read ...passed 00:09:18.436 Test: blob_rw_verify ...passed 00:09:18.436 Test: blob_rw_verify_iov_nomem ...passed 00:09:18.436 Test: blob_rw_iov_read_only ...passed 00:09:18.436 Test: blob_xattr ...passed 00:09:18.436 Test: blob_dirty_shutdown ...passed 00:09:18.695 Test: blob_is_degraded ...passed 00:09:18.695 Suite: blob_esnap_bs_copy_extent 00:09:18.695 Test: blob_esnap_create ...passed 00:09:18.695 Test: blob_esnap_thread_add_remove ...passed 00:09:18.695 Test: blob_esnap_clone_snapshot ...passed 00:09:18.695 Test: blob_esnap_clone_inflate ...passed 00:09:18.695 Test: blob_esnap_clone_decouple ...passed 00:09:18.695 Test: blob_esnap_clone_reload ...passed 00:09:18.953 Test: blob_esnap_hotplug ...passed 00:09:18.953 Test: blob_set_parent ...[2024-07-23 15:03:14.154081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:09:18.953 [2024-07-23 15:03:14.154197] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:09:18.953 [2024-07-23 15:03:14.154404] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:09:18.953 [2024-07-23 15:03:14.154453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:09:18.953 [2024-07-23 15:03:14.155631] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:18.953 passed 00:09:18.953 Test: blob_set_external_parent ...[2024-07-23 15:03:14.192059] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:09:18.953 [2024-07-23 15:03:14.192154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:09:18.953 [2024-07-23 15:03:14.192198] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:09:18.953 [2024-07-23 15:03:14.192664] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:18.953 passed 00:09:18.953 00:09:18.953 Run Summary: Type Total Ran Passed Failed Inactive 00:09:18.953 suites 16 16 n/a 0 0 00:09:18.953 tests 376 376 376 0 0 00:09:18.953 asserts 143973 143973 143973 0 n/a 00:09:18.953 00:09:18.953 Elapsed time = 14.158 seconds 00:09:18.953 15:03:14 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:09:18.953 00:09:18.953 00:09:18.953 CUnit - A unit testing framework for C - Version 2.1-3 00:09:18.953 http://cunit.sourceforge.net/ 00:09:18.953 00:09:18.953 00:09:18.953 Suite: blob_bdev 00:09:18.953 Test: create_bs_dev ...passed 00:09:18.953 Test: create_bs_dev_ro ...[2024-07-23 15:03:14.324556] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:09:18.953 passed 00:09:18.953 Test: create_bs_dev_rw ...passed 00:09:18.953 Test: claim_bs_dev ...[2024-07-23 15:03:14.325167] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:09:18.953 passed 00:09:18.953 Test: claim_bs_dev_ro ...passed 00:09:18.953 Test: deferred_destroy_refs ...passed 00:09:18.953 Test: deferred_destroy_channels ...passed 00:09:18.953 Test: deferred_destroy_threads ...passed 00:09:18.953 00:09:18.953 Run Summary: Type Total Ran Passed Failed Inactive 00:09:18.953 suites 1 1 n/a 0 0 00:09:18.953 tests 8 8 8 0 0 00:09:18.953 asserts 119 119 119 0 n/a 00:09:18.953 00:09:18.953 Elapsed time = 0.002 seconds 00:09:18.953 15:03:14 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:09:18.953 00:09:18.953 00:09:18.953 CUnit - A unit testing framework for C - Version 2.1-3 00:09:18.953 http://cunit.sourceforge.net/ 00:09:18.953 00:09:18.953 00:09:18.953 Suite: tree 00:09:18.953 Test: blobfs_tree_op_test ...passed 00:09:18.953 00:09:18.953 Run Summary: Type Total Ran Passed Failed Inactive 00:09:18.953 suites 1 1 n/a 0 0 00:09:18.953 tests 1 1 1 0 0 00:09:18.953 asserts 27 27 27 0 n/a 00:09:18.953 00:09:18.953 Elapsed time = 0.000 seconds 00:09:18.953 15:03:14 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:09:19.211 00:09:19.211 00:09:19.211 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.211 http://cunit.sourceforge.net/ 00:09:19.211 00:09:19.211 00:09:19.211 Suite: blobfs_async_ut 00:09:19.211 Test: fs_init ...passed 00:09:19.211 Test: fs_open ...passed 00:09:19.211 Test: fs_create ...passed 00:09:19.211 Test: fs_truncate ...passed 00:09:19.211 Test: fs_rename ...passed 00:09:19.211 Test: fs_rw_async ...[2024-07-23 15:03:14.556802] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:09:19.211 passed 00:09:19.211 Test: fs_writev_readv_async ...passed 00:09:19.211 Test: tree_find_buffer_ut ...passed 00:09:19.211 Test: channel_ops ...passed 00:09:19.211 Test: channel_ops_sync ...passed 00:09:19.211 00:09:19.211 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.211 suites 1 1 n/a 0 0 00:09:19.211 tests 10 10 10 0 0 00:09:19.211 asserts 292 292 292 0 n/a 00:09:19.211 00:09:19.211 Elapsed time = 0.207 seconds 00:09:19.468 15:03:14 unittest.unittest_blob_blobfs -- 
unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:09:19.468 00:09:19.468 00:09:19.468 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.468 http://cunit.sourceforge.net/ 00:09:19.468 00:09:19.468 00:09:19.468 Suite: blobfs_sync_ut 00:09:19.468 Test: cache_read_after_write ...[2024-07-23 15:03:14.772843] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:09:19.468 passed 00:09:19.468 Test: file_length ...passed 00:09:19.468 Test: append_write_to_extend_blob ...passed 00:09:19.468 Test: partial_buffer ...passed 00:09:19.468 Test: cache_write_null_buffer ...passed 00:09:19.468 Test: fs_create_sync ...passed 00:09:19.468 Test: fs_rename_sync ...passed 00:09:19.468 Test: cache_append_no_cache ...passed 00:09:19.727 Test: fs_delete_file_without_close ...passed 00:09:19.727 00:09:19.727 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.727 suites 1 1 n/a 0 0 00:09:19.727 tests 9 9 9 0 0 00:09:19.727 asserts 345 345 345 0 n/a 00:09:19.727 00:09:19.727 Elapsed time = 0.442 seconds 00:09:19.727 15:03:14 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:09:19.727 00:09:19.727 00:09:19.727 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.727 http://cunit.sourceforge.net/ 00:09:19.727 00:09:19.727 00:09:19.727 Suite: blobfs_bdev_ut 00:09:19.727 Test: spdk_blobfs_bdev_detect_test ...passed 00:09:19.727 Test: spdk_blobfs_bdev_create_test ...[2024-07-23 15:03:14.987670] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:09:19.727 [2024-07-23 15:03:14.988060] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:09:19.727 passed 00:09:19.727 Test: spdk_blobfs_bdev_mount_test ...passed 00:09:19.727 00:09:19.727 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.727 suites 1 1 n/a 0 0 00:09:19.727 tests 3 3 3 0 0 00:09:19.727 asserts 9 9 9 0 n/a 00:09:19.727 00:09:19.727 Elapsed time = 0.001 seconds 00:09:19.727 ************************************ 00:09:19.727 END TEST unittest_blob_blobfs 00:09:19.727 ************************************ 00:09:19.727 00:09:19.727 real 0m14.998s 00:09:19.727 user 0m14.253s 00:09:19.727 sys 0m0.970s 00:09:19.727 15:03:15 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.727 15:03:15 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:09:19.727 15:03:15 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:19.727 15:03:15 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:09:19.727 15:03:15 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:19.727 15:03:15 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.727 15:03:15 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:19.727 ************************************ 00:09:19.727 START TEST unittest_event 00:09:19.727 ************************************ 00:09:19.727 15:03:15 unittest.unittest_event -- common/autotest_common.sh@1123 -- # unittest_event 00:09:19.727 15:03:15 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:09:19.727 00:09:19.727 
00:09:19.727 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.727 http://cunit.sourceforge.net/ 00:09:19.727 00:09:19.727 00:09:19.727 Suite: app_suite 00:09:19.727 Test: test_spdk_app_parse_args ...app_ut [options] 00:09:19.727 00:09:19.727 CPU options: 00:09:19.728 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:19.728 (like [0,1,10]) 00:09:19.728 --lcores lcore to CPU mapping list. The list is in the format: 00:09:19.728 [<,lcores[@CPUs]>...] 00:09:19.728 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:19.728 Within the group, '-' is used for range separator, 00:09:19.728 ',' is used for single number separator. 00:09:19.728 '( )' can be omitted for single element group, 00:09:19.728 '@' can be omitted if cpus and lcores have the same value 00:09:19.728 --disable-cpumask-locks Disable CPU core lock files. 00:09:19.728 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:19.728 pollers in the app support interrupt mode) 00:09:19.728 -p, --main-core main (primary) core for DPDK 00:09:19.728 00:09:19.728 Configuration options: 00:09:19.728 -c, --config, --json JSON config file 00:09:19.728 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:19.728 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:09:19.728 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:19.728 --rpcs-allowed comma-separated list of permitted RPCS 00:09:19.728 --json-ignore-init-errors don't exit on invalid config entry 00:09:19.728 00:09:19.728 Memory options: 00:09:19.728 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:19.728 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:19.728 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:19.728 -R, --huge-unlink unlink huge files after initialization 00:09:19.728 -n, --mem-channels number of memory channels used for DPDK 00:09:19.728 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:19.728 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:19.728 --no-huge run without using hugepages 00:09:19.728 -i, --shm-id shared memory ID (optional) 00:09:19.728 -g, --single-file-segments force creating just one hugetlbfs file 00:09:19.728 00:09:19.728 PCI options: 00:09:19.728 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:19.728 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:19.728 -u, --no-pci disable PCI access 00:09:19.728 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:19.728 00:09:19.728 Log options: 00:09:19.728 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:09:19.728 --silence-noticelog disable notice level logging to stderr 00:09:19.728 00:09:19.728 Trace options: 00:09:19.728 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:19.728 setting 0 to disable trace (default 32768) 00:09:19.728 Tracepoints vary in size and can use more than one trace entry. 00:09:19.728 -e, --tpoint-group [:] 00:09:19.728 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:09:19.728 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:19.728 a tracepoint group. First tpoint inside a group can be enabled by 00:09:19.728 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:09:19.728 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:09:19.728 in /include/spdk_internal/trace_defs.h 00:09:19.728 00:09:19.728 Other options: 00:09:19.728 -h, --help show this usage 00:09:19.728 -v, --version print SPDK version 00:09:19.728 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:19.728 --env-context Opaque context for use of the env implementation 00:09:19.728 app_ut [options] 00:09:19.728 00:09:19.728 CPU options: 00:09:19.728 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:19.728 (like [0,1,10]) 00:09:19.728 --lcores lcore to CPU mapping list. The list is in the format: 00:09:19.728 [<,lcores[@CPUs]>...] 00:09:19.728 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:19.728 Within the group, '-' is used for range separator, 00:09:19.728 ',' is used for single number separator. 00:09:19.728 '( )' can be omitted for single element group, 00:09:19.728 '@' can be omitted if cpus and lcores have the same value 00:09:19.728 --disable-cpumask-locks Disable CPU core lock files. 00:09:19.728 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:19.728 pollers in the app support interrupt mode) 00:09:19.728 -p, --main-core main (primary) core for DPDK 00:09:19.728 00:09:19.728 Configuration options: 00:09:19.728 -c, --config, --json JSON config file 00:09:19.728 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:19.728 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:09:19.728 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:19.728 --rpcs-allowed comma-separated list of permitted RPCS 00:09:19.728 --json-ignore-init-errors don't exit on invalid config entry 00:09:19.728 00:09:19.728 Memory options: 00:09:19.728 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:19.728 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:19.728 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:19.728 -R, --huge-unlink unlink huge files after initialization 00:09:19.728 -n, --mem-channels number of memory channels used for DPDK 00:09:19.728 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:19.728 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:19.728 --no-huge run without using hugepages 00:09:19.728 -i, --shm-id shared memory ID (optional) 00:09:19.728 -g, --single-file-segments force creating just one hugetlbfs file 00:09:19.728 00:09:19.728 PCI options: 00:09:19.728 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:19.728 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:19.728 -u, --no-pci disable PCI access 00:09:19.728 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:19.728 00:09:19.728 Log options: 00:09:19.728 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:09:19.728 --silence-noticelog disable notice level logging to stderr 00:09:19.728 00:09:19.728 Trace options: 00:09:19.728 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:19.728 setting 0 to disable trace (default 32768) 00:09:19.728 Tracepoints vary in size and can use more than one trace entry. 
00:09:19.728 -e, --tpoint-group [:] 00:09:19.728 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:09:19.728 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:19.728 a tracepoint group. First tpoint inside a group can be enabled by 00:09:19.728 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:19.728 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:09:19.728 in /include/spdk_internal/trace_defs.h 00:09:19.728 00:09:19.728 Other options: 00:09:19.728 -h, --help show this usage 00:09:19.728 -v, --version print SPDK version 00:09:19.728 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:19.728 --env-context Opaque context for use of the env implementation 00:09:19.728 app_ut: invalid option -- 'z' 00:09:19.728 app_ut: unrecognized option '--test-long-opt' 00:09:19.728 [2024-07-23 15:03:15.079861] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1192:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:09:19.728 [2024-07-23 15:03:15.080115] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:09:19.728 app_ut [options] 00:09:19.728 00:09:19.728 CPU options: 00:09:19.728 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:19.728 (like [0,1,10]) 00:09:19.728 --lcores lcore to CPU mapping list. The list is in the format: 00:09:19.728 [<,lcores[@CPUs]>...] 00:09:19.728 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:19.729 Within the group, '-' is used for range separator, 00:09:19.729 ',' is used for single number separator. 00:09:19.729 '( )' can be omitted for single element group, 00:09:19.729 '@' can be omitted if cpus and lcores have the same value 00:09:19.729 --disable-cpumask-locks Disable CPU core lock files. 00:09:19.729 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:19.729 pollers in the app support interrupt mode) 00:09:19.729 -p, --main-core main (primary) core for DPDK 00:09:19.729 00:09:19.729 Configuration options: 00:09:19.729 -c, --config, --json JSON config file 00:09:19.729 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:19.729 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:19.729 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:19.729 --rpcs-allowed comma-separated list of permitted RPCS 00:09:19.729 --json-ignore-init-errors don't exit on invalid config entry 00:09:19.729 00:09:19.729 Memory options: 00:09:19.729 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:19.729 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:19.729 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:19.729 -R, --huge-unlink unlink huge files after initialization 00:09:19.729 -n, --mem-channels number of memory channels used for DPDK 00:09:19.729 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:19.729 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:19.729 --no-huge run without using hugepages 00:09:19.729 -i, --shm-id shared memory ID (optional) 00:09:19.729 -g, --single-file-segments force creating just one hugetlbfs file 00:09:19.729 00:09:19.729 PCI options: 00:09:19.729 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:19.729 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:19.729 -u, --no-pci disable PCI access 00:09:19.729 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:19.729 00:09:19.729 Log options: 00:09:19.729 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:09:19.729 --silence-noticelog disable notice level logging to stderr 00:09:19.729 00:09:19.729 Trace options: 00:09:19.729 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:19.729 setting 0 to disable trace (default 32768) 00:09:19.729 Tracepoints vary in size and can use more than one trace entry. 00:09:19.729 -e, --tpoint-group [:] 00:09:19.729 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:09:19.729 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:19.729 a tracepoint group. First tpoint inside a group can be enabled by 00:09:19.729 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:19.729 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:09:19.729 in /include/spdk_internal/trace_defs.h 00:09:19.729 00:09:19.729 Other options: 00:09:19.729 -h, --help show this usage 00:09:19.729 -v, --version print SPDK version 00:09:19.729 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:19.729 --env-context Opaque context for use of the env implementation 00:09:19.729 passed 00:09:19.729 00:09:19.729 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.729 suites 1 1 n/a 0 0 00:09:19.729 tests 1 1 1 0 0 00:09:19.729 asserts 8 8 8 0 n/a 00:09:19.729 00:09:19.729 Elapsed time = 0.001 seconds 00:09:19.729 [2024-07-23 15:03:15.080291] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:09:19.729 15:03:15 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:09:19.729 00:09:19.729 00:09:19.729 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.729 http://cunit.sourceforge.net/ 00:09:19.729 00:09:19.729 00:09:19.729 Suite: app_suite 00:09:19.729 Test: test_create_reactor ...passed 00:09:19.729 Test: test_init_reactors ...passed 00:09:19.729 Test: test_event_call ...passed 00:09:19.729 Test: test_schedule_thread ...passed 00:09:19.729 Test: test_reschedule_thread ...passed 00:09:19.729 Test: test_bind_thread ...passed 00:09:19.729 Test: test_for_each_reactor ...passed 00:09:19.729 Test: test_reactor_stats ...passed 00:09:19.729 Test: test_scheduler ...passed 00:09:19.729 Test: test_governor ...passed 00:09:19.729 00:09:19.729 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.729 suites 1 1 n/a 0 0 00:09:19.729 tests 10 10 10 0 0 00:09:19.729 asserts 344 344 344 0 n/a 00:09:19.729 00:09:19.729 Elapsed time = 0.027 seconds 00:09:19.987 ************************************ 00:09:19.987 END TEST unittest_event 00:09:19.987 ************************************ 00:09:19.987 00:09:19.987 real 0m0.116s 00:09:19.987 user 0m0.058s 00:09:19.987 sys 0m0.058s 00:09:19.987 15:03:15 unittest.unittest_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.987 15:03:15 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:09:19.987 15:03:15 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:19.987 15:03:15 unittest -- unit/unittest.sh@235 -- # uname -s 00:09:19.987 15:03:15 unittest -- unit/unittest.sh@235 -- # '[' Linux = Linux ']' 00:09:19.987 15:03:15 unittest -- unit/unittest.sh@236 -- # run_test unittest_ftl unittest_ftl 00:09:19.987 15:03:15 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:19.987 15:03:15 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.987 15:03:15 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:19.987 ************************************ 00:09:19.987 START TEST unittest_ftl 00:09:19.987 ************************************ 00:09:19.987 15:03:15 unittest.unittest_ftl -- common/autotest_common.sh@1123 -- # unittest_ftl 00:09:19.987 15:03:15 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:09:19.987 00:09:19.987 00:09:19.987 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.988 http://cunit.sourceforge.net/ 00:09:19.988 00:09:19.988 00:09:19.988 Suite: ftl_band_suite 00:09:19.988 Test: test_band_block_offset_from_addr_base ...passed 00:09:19.988 Test: test_band_block_offset_from_addr_offset ...passed 00:09:19.988 Test: 
test_band_addr_from_block_offset ...passed 00:09:19.988 Test: test_band_set_addr ...passed 00:09:20.246 Test: test_invalidate_addr ...passed 00:09:20.246 Test: test_next_xfer_addr ...passed 00:09:20.246 00:09:20.246 Run Summary: Type Total Ran Passed Failed Inactive 00:09:20.246 suites 1 1 n/a 0 0 00:09:20.246 tests 6 6 6 0 0 00:09:20.246 asserts 30356 30356 30356 0 n/a 00:09:20.246 00:09:20.246 Elapsed time = 0.196 seconds 00:09:20.246 15:03:15 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:09:20.246 00:09:20.246 00:09:20.246 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.246 http://cunit.sourceforge.net/ 00:09:20.246 00:09:20.246 00:09:20.246 Suite: ftl_bitmap 00:09:20.246 Test: test_ftl_bitmap_create ...[2024-07-23 15:03:15.553939] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:09:20.246 passed 00:09:20.246 Test: test_ftl_bitmap_get ...[2024-07-23 15:03:15.554239] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:09:20.246 passed 00:09:20.246 Test: test_ftl_bitmap_set ...passed 00:09:20.246 Test: test_ftl_bitmap_clear ...passed 00:09:20.246 Test: test_ftl_bitmap_find_first_set ...passed 00:09:20.246 Test: test_ftl_bitmap_find_first_clear ...passed 00:09:20.246 Test: test_ftl_bitmap_count_set ...passed 00:09:20.246 00:09:20.246 Run Summary: Type Total Ran Passed Failed Inactive 00:09:20.246 suites 1 1 n/a 0 0 00:09:20.246 tests 7 7 7 0 0 00:09:20.246 asserts 137 137 137 0 n/a 00:09:20.246 00:09:20.246 Elapsed time = 0.001 seconds 00:09:20.246 15:03:15 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:09:20.246 00:09:20.246 00:09:20.246 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.246 http://cunit.sourceforge.net/ 00:09:20.246 00:09:20.246 00:09:20.246 Suite: ftl_io_suite 00:09:20.246 Test: test_completion ...passed 00:09:20.246 Test: test_multiple_ios ...passed 00:09:20.246 00:09:20.246 Run Summary: Type Total Ran Passed Failed Inactive 00:09:20.246 suites 1 1 n/a 0 0 00:09:20.246 tests 2 2 2 0 0 00:09:20.246 asserts 47 47 47 0 n/a 00:09:20.246 00:09:20.246 Elapsed time = 0.007 seconds 00:09:20.246 15:03:15 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:09:20.246 00:09:20.246 00:09:20.246 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.246 http://cunit.sourceforge.net/ 00:09:20.246 00:09:20.246 00:09:20.246 Suite: ftl_mngt 00:09:20.246 Test: test_next_step ...passed 00:09:20.246 Test: test_continue_step ...passed 00:09:20.246 Test: test_get_func_and_step_cntx_alloc ...passed 00:09:20.246 Test: test_fail_step ...passed 00:09:20.246 Test: test_mngt_call_and_call_rollback ...passed 00:09:20.246 Test: test_nested_process_failure ...passed 00:09:20.246 Test: test_call_init_success ...passed 00:09:20.246 Test: test_call_init_failure ...passed 00:09:20.246 00:09:20.246 Run Summary: Type Total Ran Passed Failed Inactive 00:09:20.246 suites 1 1 n/a 0 0 00:09:20.246 tests 8 8 8 0 0 00:09:20.246 asserts 196 196 196 0 n/a 00:09:20.246 00:09:20.246 Elapsed time = 0.002 seconds 00:09:20.246 15:03:15 unittest.unittest_ftl -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:09:20.505 00:09:20.505 
00:09:20.505 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.505 http://cunit.sourceforge.net/ 00:09:20.505 00:09:20.505 00:09:20.505 Suite: ftl_mempool 00:09:20.505 Test: test_ftl_mempool_create ...passed 00:09:20.505 Test: test_ftl_mempool_get_put ...passed 00:09:20.505 00:09:20.505 Run Summary: Type Total Ran Passed Failed Inactive 00:09:20.505 suites 1 1 n/a 0 0 00:09:20.505 tests 2 2 2 0 0 00:09:20.505 asserts 36 36 36 0 n/a 00:09:20.505 00:09:20.505 Elapsed time = 0.000 seconds 00:09:20.505 15:03:15 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:09:20.505 00:09:20.505 00:09:20.505 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.505 http://cunit.sourceforge.net/ 00:09:20.505 00:09:20.505 00:09:20.505 Suite: ftl_addr64_suite 00:09:20.505 Test: test_addr_cached ...passed 00:09:20.505 00:09:20.505 Run Summary: Type Total Ran Passed Failed Inactive 00:09:20.505 suites 1 1 n/a 0 0 00:09:20.505 tests 1 1 1 0 0 00:09:20.505 asserts 1536 1536 1536 0 n/a 00:09:20.505 00:09:20.505 Elapsed time = 0.000 seconds 00:09:20.505 15:03:15 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:09:20.505 00:09:20.505 00:09:20.505 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.505 http://cunit.sourceforge.net/ 00:09:20.505 00:09:20.505 00:09:20.505 Suite: ftl_sb 00:09:20.505 Test: test_sb_crc_v2 ...passed 00:09:20.505 Test: test_sb_crc_v3 ...passed 00:09:20.505 Test: test_sb_v3_md_layout ...[2024-07-23 15:03:15.767592] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:09:20.505 [2024-07-23 15:03:15.767944] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:20.505 [2024-07-23 15:03:15.768012] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:20.505 [2024-07-23 15:03:15.768049] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:20.505 [2024-07-23 15:03:15.768095] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:09:20.505 [2024-07-23 15:03:15.768137] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:09:20.505 [2024-07-23 15:03:15.768184] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:09:20.505 [2024-07-23 15:03:15.768222] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:09:20.505 [2024-07-23 15:03:15.768313] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:09:20.505 [2024-07-23 15:03:15.768358] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:09:20.505 passed 00:09:20.505 Test: test_sb_v5_md_layout ...[2024-07-23 15:03:15.768420] 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:09:20.505 passed 00:09:20.505 00:09:20.505 Run Summary: Type Total Ran Passed Failed Inactive 00:09:20.505 suites 1 1 n/a 0 0 00:09:20.505 tests 4 4 4 0 0 00:09:20.505 asserts 160 160 160 0 n/a 00:09:20.505 00:09:20.505 Elapsed time = 0.002 seconds 00:09:20.505 15:03:15 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:09:20.505 00:09:20.505 00:09:20.505 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.505 http://cunit.sourceforge.net/ 00:09:20.505 00:09:20.505 00:09:20.505 Suite: ftl_layout_upgrade 00:09:20.505 Test: test_l2p_upgrade ...passed 00:09:20.505 00:09:20.505 Run Summary: Type Total Ran Passed Failed Inactive 00:09:20.505 suites 1 1 n/a 0 0 00:09:20.505 tests 1 1 1 0 0 00:09:20.505 asserts 152 152 152 0 n/a 00:09:20.505 00:09:20.505 Elapsed time = 0.001 seconds 00:09:20.505 15:03:15 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut 00:09:20.505 00:09:20.505 00:09:20.505 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.505 http://cunit.sourceforge.net/ 00:09:20.505 00:09:20.505 00:09:20.505 Suite: ftl_p2l_suite 00:09:20.505 Test: test_p2l_num_pages ...passed 00:09:20.505 Test: test_ckpt_issue ...passed 00:09:20.505 Test: test_persist_band_p2l ...passed 00:09:20.505 Test: test_clean_restore_p2l ...passed 00:09:20.505 Test: test_dirty_restore_p2l ...passed 00:09:20.505 00:09:20.505 Run Summary: Type Total Ran Passed Failed Inactive 00:09:20.505 suites 1 1 n/a 0 0 00:09:20.505 tests 5 5 5 0 0 00:09:20.505 asserts 10020 10020 10020 0 n/a 00:09:20.505 00:09:20.505 Elapsed time = 0.079 seconds 00:09:20.505 00:09:20.505 real 0m0.700s 00:09:20.505 user 0m0.327s 00:09:20.505 sys 0m0.373s 00:09:20.764 15:03:15 unittest.unittest_ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.764 15:03:15 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:09:20.764 ************************************ 00:09:20.764 END TEST unittest_ftl 00:09:20.764 ************************************ 00:09:20.764 15:03:15 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:20.764 15:03:15 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:09:20.764 15:03:15 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.764 15:03:15 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.764 15:03:15 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:20.764 ************************************ 00:09:20.764 START TEST unittest_accel 00:09:20.764 ************************************ 00:09:20.764 15:03:15 unittest.unittest_accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:09:20.764 00:09:20.764 00:09:20.764 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.764 http://cunit.sourceforge.net/ 00:09:20.764 00:09:20.764 00:09:20.764 Suite: accel_sequence 00:09:20.764 Test: test_sequence_fill_copy ...passed 00:09:20.764 Test: test_sequence_abort ...passed 00:09:20.764 Test: test_sequence_append_error ...passed 00:09:20.764 Test: test_sequence_completion_error ...[2024-07-23 15:03:16.014645] 
/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1959:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7e77969277c0 00:09:20.764 [2024-07-23 15:03:16.015045] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1959:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7e77969277c0 00:09:20.764 [2024-07-23 15:03:16.015146] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1869:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7e77969277c0 00:09:20.764 [2024-07-23 15:03:16.015205] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1869:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7e77969277c0 00:09:20.764 passed 00:09:20.764 Test: test_sequence_decompress ...passed 00:09:20.764 Test: test_sequence_reverse ...passed 00:09:20.764 Test: test_sequence_copy_elision ...passed 00:09:20.764 Test: test_sequence_accel_buffers ...passed 00:09:20.764 Test: test_sequence_memory_domain ...[2024-07-23 15:03:16.031668] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1761:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:09:20.764 [2024-07-23 15:03:16.031934] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1800:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:09:20.764 passed 00:09:20.764 Test: test_sequence_module_memory_domain ...passed 00:09:20.764 Test: test_sequence_crypto ...passed 00:09:20.764 Test: test_sequence_driver ...[2024-07-23 15:03:16.041763] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1908:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7e77939047c0 using driver: ut 00:09:20.764 [2024-07-23 15:03:16.041928] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1972:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7e77939047c0 through driver: ut 00:09:20.764 passed 00:09:20.764 Test: test_sequence_same_iovs ...passed 00:09:20.764 Test: test_sequence_crc32 ...passed 00:09:20.764 Suite: accel 00:09:20.764 Test: test_spdk_accel_task_complete ...passed 00:09:20.764 Test: test_get_task ...passed 00:09:20.764 Test: test_spdk_accel_submit_copy ...passed 00:09:20.764 Test: test_spdk_accel_submit_dualcast ...[2024-07-23 15:03:16.049310] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:09:20.764 [2024-07-23 15:03:16.049388] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:09:20.764 passed 00:09:20.764 Test: test_spdk_accel_submit_compare ...passed 00:09:20.764 Test: test_spdk_accel_submit_fill ...passed 00:09:20.764 Test: test_spdk_accel_submit_crc32c ...passed 00:09:20.764 Test: test_spdk_accel_submit_crc32cv ...passed 00:09:20.764 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:09:20.764 Test: test_spdk_accel_submit_xor ...passed 00:09:20.764 Test: test_spdk_accel_module_find_by_name ...passed 00:09:20.764 Test: test_spdk_accel_module_register ...passed 00:09:20.764 00:09:20.764 Run Summary: Type Total Ran Passed Failed Inactive 00:09:20.764 suites 2 2 n/a 0 0 00:09:20.764 tests 26 26 26 0 0 00:09:20.764 asserts 830 830 830 0 n/a 00:09:20.764 00:09:20.764 Elapsed time = 0.050 seconds 00:09:20.764 00:09:20.764 real 0m0.098s 00:09:20.764 user 0m0.046s 00:09:20.764 sys 0m0.052s 00:09:20.764 15:03:16 unittest.unittest_accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.764 
15:03:16 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:09:20.764 ************************************ 00:09:20.764 END TEST unittest_accel 00:09:20.764 ************************************ 00:09:20.764 15:03:16 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:20.764 15:03:16 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:09:20.764 15:03:16 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.764 15:03:16 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.764 15:03:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:20.765 ************************************ 00:09:20.765 START TEST unittest_ioat 00:09:20.765 ************************************ 00:09:20.765 15:03:16 unittest.unittest_ioat -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:09:20.765 00:09:20.765 00:09:20.765 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.765 http://cunit.sourceforge.net/ 00:09:20.765 00:09:20.765 00:09:20.765 Suite: ioat 00:09:20.765 Test: ioat_state_check ...passed 00:09:20.765 00:09:20.765 Run Summary: Type Total Ran Passed Failed Inactive 00:09:20.765 suites 1 1 n/a 0 0 00:09:20.765 tests 1 1 1 0 0 00:09:20.765 asserts 32 32 32 0 n/a 00:09:20.765 00:09:20.765 Elapsed time = 0.000 seconds 00:09:20.765 00:09:20.765 real 0m0.036s 00:09:20.765 user 0m0.015s 00:09:20.765 sys 0m0.021s 00:09:20.765 15:03:16 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.765 15:03:16 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:09:20.765 ************************************ 00:09:20.765 END TEST unittest_ioat 00:09:20.765 ************************************ 00:09:21.024 15:03:16 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:21.024 15:03:16 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:21.024 15:03:16 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:09:21.024 15:03:16 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.024 15:03:16 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.024 15:03:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:21.024 ************************************ 00:09:21.024 START TEST unittest_idxd_user 00:09:21.024 ************************************ 00:09:21.024 15:03:16 unittest.unittest_idxd_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:09:21.024 00:09:21.024 00:09:21.024 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.024 http://cunit.sourceforge.net/ 00:09:21.024 00:09:21.024 00:09:21.024 Suite: idxd_user 00:09:21.024 Test: test_idxd_wait_cmd ...[2024-07-23 15:03:16.227639] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:09:21.024 [2024-07-23 15:03:16.228063] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:09:21.024 passed 00:09:21.024 Test: test_idxd_reset_dev ...passed 00:09:21.024 Test: test_idxd_group_config ...passed 00:09:21.024 Test: test_idxd_wq_config ...passed 00:09:21.024 00:09:21.024 Run Summary: Type Total Ran Passed Failed 
Inactive 00:09:21.024 suites 1 1 n/a 0 0 00:09:21.024 tests 4 4 4 0 0 00:09:21.024 asserts 20 20 20 0 n/a 00:09:21.024 00:09:21.024 Elapsed time = 0.001 seconds 00:09:21.024 [2024-07-23 15:03:16.228442] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:09:21.024 [2024-07-23 15:03:16.228491] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:09:21.024 00:09:21.024 real 0m0.034s 00:09:21.024 user 0m0.014s 00:09:21.024 sys 0m0.021s 00:09:21.024 15:03:16 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.024 15:03:16 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:09:21.024 ************************************ 00:09:21.024 END TEST unittest_idxd_user 00:09:21.024 ************************************ 00:09:21.024 15:03:16 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:21.024 15:03:16 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:09:21.024 15:03:16 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.024 15:03:16 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.024 15:03:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:21.024 ************************************ 00:09:21.024 START TEST unittest_iscsi 00:09:21.024 ************************************ 00:09:21.024 15:03:16 unittest.unittest_iscsi -- common/autotest_common.sh@1123 -- # unittest_iscsi 00:09:21.024 15:03:16 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:09:21.024 00:09:21.024 00:09:21.024 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.024 http://cunit.sourceforge.net/ 00:09:21.024 00:09:21.024 00:09:21.024 Suite: conn_suite 00:09:21.024 Test: read_task_split_in_order_case ...passed 00:09:21.024 Test: read_task_split_reverse_order_case ...passed 00:09:21.024 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:09:21.024 Test: process_non_read_task_completion_test ...passed 00:09:21.024 Test: free_tasks_on_connection ...passed 00:09:21.024 Test: free_tasks_with_queued_datain ...passed 00:09:21.024 Test: abort_queued_datain_task_test ...passed 00:09:21.024 Test: abort_queued_datain_tasks_test ...passed 00:09:21.024 00:09:21.024 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.024 suites 1 1 n/a 0 0 00:09:21.024 tests 8 8 8 0 0 00:09:21.024 asserts 230 230 230 0 n/a 00:09:21.024 00:09:21.024 Elapsed time = 0.001 seconds 00:09:21.024 15:03:16 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:09:21.024 00:09:21.024 00:09:21.024 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.024 http://cunit.sourceforge.net/ 00:09:21.024 00:09:21.024 00:09:21.024 Suite: iscsi_suite 00:09:21.024 Test: param_negotiation_test ...passed 00:09:21.024 Test: list_negotiation_test ...passed 00:09:21.024 Test: parse_valid_test ...passed 00:09:21.024 Test: parse_invalid_test ...[2024-07-23 15:03:16.370359] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:09:21.024 [2024-07-23 15:03:16.370748] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:09:21.024 [2024-07-23 15:03:16.370826] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 
00:09:21.024 [2024-07-23 15:03:16.370907] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:09:21.024 [2024-07-23 15:03:16.371079] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:09:21.024 [2024-07-23 15:03:16.371143] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:09:21.024 passed 00:09:21.024 00:09:21.024 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.024 suites 1 1 n/a 0 0 00:09:21.024 tests 4 4 4 0 0 00:09:21.024 asserts 161 161 161 0 n/a 00:09:21.024 00:09:21.024 Elapsed time = 0.006 seconds 00:09:21.024 [2024-07-23 15:03:16.371278] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:09:21.024 15:03:16 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:09:21.024 00:09:21.024 00:09:21.024 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.024 http://cunit.sourceforge.net/ 00:09:21.024 00:09:21.024 00:09:21.024 Suite: iscsi_target_node_suite 00:09:21.024 Test: add_lun_test_cases ...[2024-07-23 15:03:16.405976] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:09:21.024 [2024-07-23 15:03:16.406262] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:09:21.024 [2024-07-23 15:03:16.406348] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:09:21.024 [2024-07-23 15:03:16.406387] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:09:21.025 [2024-07-23 15:03:16.406428] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:09:21.025 passed 00:09:21.025 Test: allow_any_allowed ...passed 00:09:21.025 Test: allow_ipv6_allowed ...passed 00:09:21.025 Test: allow_ipv6_denied ...passed 00:09:21.025 Test: allow_ipv6_invalid ...passed 00:09:21.025 Test: allow_ipv4_allowed ...passed 00:09:21.025 Test: allow_ipv4_denied ...passed 00:09:21.025 Test: allow_ipv4_invalid ...passed 00:09:21.025 Test: node_access_allowed ...passed 00:09:21.025 Test: node_access_denied_by_empty_netmask ...passed 00:09:21.025 Test: node_access_multi_initiator_groups_cases ...passed 00:09:21.025 Test: allow_iscsi_name_multi_maps_case ...passed 00:09:21.025 Test: chap_param_test_cases ...[2024-07-23 15:03:16.407244] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:09:21.025 [2024-07-23 15:03:16.407295] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:09:21.025 [2024-07-23 15:03:16.407334] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:09:21.025 passed 00:09:21.025 00:09:21.025 [2024-07-23 15:03:16.407368] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:09:21.025 [2024-07-23 15:03:16.407405] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 
00:09:21.025 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.025 suites 1 1 n/a 0 0 00:09:21.025 tests 13 13 13 0 0 00:09:21.025 asserts 50 50 50 0 n/a 00:09:21.025 00:09:21.025 Elapsed time = 0.002 seconds 00:09:21.025 15:03:16 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:09:21.284 00:09:21.284 00:09:21.284 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.284 http://cunit.sourceforge.net/ 00:09:21.284 00:09:21.284 00:09:21.284 Suite: iscsi_suite 00:09:21.284 Test: op_login_check_target_test ...[2024-07-23 15:03:16.453861] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:09:21.284 passed 00:09:21.284 Test: op_login_session_normal_test ...[2024-07-23 15:03:16.454269] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:21.284 [2024-07-23 15:03:16.454336] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:21.284 [2024-07-23 15:03:16.454376] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:21.284 [2024-07-23 15:03:16.454456] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:09:21.284 [2024-07-23 15:03:16.454524] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:09:21.284 passed 00:09:21.284 Test: maxburstlength_test ...[2024-07-23 15:03:16.454619] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:09:21.284 [2024-07-23 15:03:16.454655] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:09:21.284 [2024-07-23 15:03:16.455023] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:09:21.284 [2024-07-23 15:03:16.455089] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:09:21.284 passed 00:09:21.284 Test: underflow_for_read_transfer_test ...passed 00:09:21.284 Test: underflow_for_zero_read_transfer_test ...passed 00:09:21.284 Test: underflow_for_request_sense_test ...passed 00:09:21.284 Test: underflow_for_check_condition_test ...passed 00:09:21.284 Test: add_transfer_task_test ...passed 00:09:21.284 Test: get_transfer_task_test ...passed 00:09:21.284 Test: del_transfer_task_test ...passed 00:09:21.284 Test: clear_all_transfer_tasks_test ...passed 00:09:21.284 Test: build_iovs_test ...passed 00:09:21.284 Test: build_iovs_with_md_test ...passed 00:09:21.284 Test: pdu_hdr_op_login_test ...[2024-07-23 15:03:16.457278] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:09:21.284 [2024-07-23 15:03:16.457407] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:09:21.284 [2024-07-23 15:03:16.457520] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:09:21.284 passed 00:09:21.284 Test: 
pdu_hdr_op_text_test ...[2024-07-23 15:03:16.457691] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2258:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:09:21.284 [2024-07-23 15:03:16.457815] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:09:21.284 [2024-07-23 15:03:16.457878] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2303:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:09:21.284 passed 00:09:21.284 Test: pdu_hdr_op_logout_test ...[2024-07-23 15:03:16.457983] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2533:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:09:21.284 passed 00:09:21.284 Test: pdu_hdr_op_scsi_test ...[2024-07-23 15:03:16.458141] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:09:21.284 [2024-07-23 15:03:16.458209] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:09:21.284 [2024-07-23 15:03:16.458258] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:09:21.284 [2024-07-23 15:03:16.458375] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3415:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:09:21.284 [2024-07-23 15:03:16.458484] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3422:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:09:21.284 passed 00:09:21.284 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-23 15:03:16.458720] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:09:21.284 [2024-07-23 15:03:16.458894] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:09:21.284 [2024-07-23 15:03:16.458981] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:09:21.284 passed 00:09:21.284 Test: pdu_hdr_op_nopout_test ...[2024-07-23 15:03:16.459234] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:09:21.284 [2024-07-23 15:03:16.459304] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:09:21.284 [2024-07-23 15:03:16.459368] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:09:21.284 [2024-07-23 15:03:16.459410] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:09:21.284 passed 00:09:21.284 Test: pdu_hdr_op_data_test ...[2024-07-23 15:03:16.459501] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:09:21.284 [2024-07-23 15:03:16.459605] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:09:21.284 [2024-07-23 15:03:16.459678] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than 
the value sent by R2T PDU 00:09:21.284 [2024-07-23 15:03:16.459715] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:09:21.284 [2024-07-23 15:03:16.459834] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:09:21.284 [2024-07-23 15:03:16.459934] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:09:21.284 [2024-07-23 15:03:16.460000] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4261:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:09:21.284 passed 00:09:21.284 Test: empty_text_with_cbit_test ...passed 00:09:21.284 Test: pdu_payload_read_test ...[2024-07-23 15:03:16.462468] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4649:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:09:21.284 passed 00:09:21.284 Test: data_out_pdu_sequence_test ...passed 00:09:21.284 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:09:21.284 00:09:21.284 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.284 suites 1 1 n/a 0 0 00:09:21.284 tests 24 24 24 0 0 00:09:21.284 asserts 150253 150253 150253 0 n/a 00:09:21.284 00:09:21.284 Elapsed time = 0.019 seconds 00:09:21.284 15:03:16 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:09:21.284 00:09:21.284 00:09:21.284 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.284 http://cunit.sourceforge.net/ 00:09:21.284 00:09:21.284 00:09:21.284 Suite: init_grp_suite 00:09:21.284 Test: create_initiator_group_success_case ...passed 00:09:21.284 Test: find_initiator_group_success_case ...passed 00:09:21.284 Test: register_initiator_group_twice_case ...passed 00:09:21.284 Test: add_initiator_name_success_case ...passed 00:09:21.284 Test: add_initiator_name_fail_case ...passed 00:09:21.284 Test: delete_all_initiator_names_success_case ...[2024-07-23 15:03:16.514994] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:09:21.284 passed 00:09:21.284 Test: add_netmask_success_case ...passed 00:09:21.284 Test: add_netmask_fail_case ...[2024-07-23 15:03:16.515456] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:09:21.284 passed 00:09:21.285 Test: delete_all_netmasks_success_case ...passed 00:09:21.285 Test: initiator_name_overwrite_all_to_any_case ...passed 00:09:21.285 Test: netmask_overwrite_all_to_any_case ...passed 00:09:21.285 Test: add_delete_initiator_names_case ...passed 00:09:21.285 Test: add_duplicated_initiator_names_case ...passed 00:09:21.285 Test: delete_nonexisting_initiator_names_case ...passed 00:09:21.285 Test: add_delete_netmasks_case ...passed 00:09:21.285 Test: add_duplicated_netmasks_case ...passed 00:09:21.285 Test: delete_nonexisting_netmasks_case ...passed 00:09:21.285 00:09:21.285 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.285 suites 1 1 n/a 0 0 00:09:21.285 tests 17 17 17 0 0 00:09:21.285 asserts 108 108 108 0 n/a 00:09:21.285 00:09:21.285 Elapsed time = 0.002 seconds 00:09:21.285 15:03:16 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:09:21.285 00:09:21.285 00:09:21.285 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.285 
http://cunit.sourceforge.net/ 00:09:21.285 00:09:21.285 00:09:21.285 Suite: portal_grp_suite 00:09:21.285 Test: portal_create_ipv4_normal_case ...passed 00:09:21.285 Test: portal_create_ipv6_normal_case ...passed 00:09:21.285 Test: portal_create_ipv4_wildcard_case ...passed 00:09:21.285 Test: portal_create_ipv6_wildcard_case ...passed 00:09:21.285 Test: portal_create_twice_case ...[2024-07-23 15:03:16.558731] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:09:21.285 passed 00:09:21.285 Test: portal_grp_register_unregister_case ...passed 00:09:21.285 Test: portal_grp_register_twice_case ...passed 00:09:21.285 Test: portal_grp_add_delete_case ...passed 00:09:21.285 Test: portal_grp_add_delete_twice_case ...passed 00:09:21.285 00:09:21.285 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.285 suites 1 1 n/a 0 0 00:09:21.285 tests 9 9 9 0 0 00:09:21.285 asserts 44 44 44 0 n/a 00:09:21.285 00:09:21.285 Elapsed time = 0.004 seconds 00:09:21.285 00:09:21.285 real 0m0.284s 00:09:21.285 user 0m0.153s 00:09:21.285 sys 0m0.135s 00:09:21.285 15:03:16 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.285 15:03:16 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:09:21.285 ************************************ 00:09:21.285 END TEST unittest_iscsi 00:09:21.285 ************************************ 00:09:21.285 15:03:16 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:21.285 15:03:16 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:09:21.285 15:03:16 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.285 15:03:16 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.285 15:03:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:21.285 ************************************ 00:09:21.285 START TEST unittest_json 00:09:21.285 ************************************ 00:09:21.285 15:03:16 unittest.unittest_json -- common/autotest_common.sh@1123 -- # unittest_json 00:09:21.285 15:03:16 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:09:21.285 00:09:21.285 00:09:21.285 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.285 http://cunit.sourceforge.net/ 00:09:21.285 00:09:21.285 00:09:21.285 Suite: json 00:09:21.285 Test: test_parse_literal ...passed 00:09:21.285 Test: test_parse_string_simple ...passed 00:09:21.285 Test: test_parse_string_control_chars ...passed 00:09:21.285 Test: test_parse_string_utf8 ...passed 00:09:21.285 Test: test_parse_string_escapes_twochar ...passed 00:09:21.285 Test: test_parse_string_escapes_unicode ...passed 00:09:21.285 Test: test_parse_number ...passed 00:09:21.285 Test: test_parse_array ...passed 00:09:21.285 Test: test_parse_object ...passed 00:09:21.285 Test: test_parse_nesting ...passed 00:09:21.285 Test: test_parse_comment ...passed 00:09:21.285 00:09:21.285 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.285 suites 1 1 n/a 0 0 00:09:21.285 tests 11 11 11 0 0 00:09:21.285 asserts 1516 1516 1516 0 n/a 00:09:21.285 00:09:21.285 Elapsed time = 0.002 seconds 00:09:21.285 15:03:16 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:09:21.285 00:09:21.285 00:09:21.285 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.285 http://cunit.sourceforge.net/ 
00:09:21.285 00:09:21.285 00:09:21.285 Suite: json 00:09:21.285 Test: test_strequal ...passed 00:09:21.285 Test: test_num_to_uint16 ...passed 00:09:21.285 Test: test_num_to_int32 ...passed 00:09:21.285 Test: test_num_to_uint64 ...passed 00:09:21.285 Test: test_decode_object ...passed 00:09:21.285 Test: test_decode_array ...passed 00:09:21.285 Test: test_decode_bool ...passed 00:09:21.285 Test: test_decode_uint16 ...passed 00:09:21.285 Test: test_decode_int32 ...passed 00:09:21.285 Test: test_decode_uint32 ...passed 00:09:21.285 Test: test_decode_uint64 ...passed 00:09:21.285 Test: test_decode_string ...passed 00:09:21.285 Test: test_decode_uuid ...passed 00:09:21.285 Test: test_find ...passed 00:09:21.285 Test: test_find_array ...passed 00:09:21.285 Test: test_iterating ...passed 00:09:21.285 Test: test_free_object ...passed 00:09:21.285 00:09:21.285 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.285 suites 1 1 n/a 0 0 00:09:21.285 tests 17 17 17 0 0 00:09:21.285 asserts 236 236 236 0 n/a 00:09:21.285 00:09:21.285 Elapsed time = 0.002 seconds 00:09:21.543 15:03:16 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:09:21.543 00:09:21.543 00:09:21.543 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.543 http://cunit.sourceforge.net/ 00:09:21.543 00:09:21.543 00:09:21.543 Suite: json 00:09:21.543 Test: test_write_literal ...passed 00:09:21.543 Test: test_write_string_simple ...passed 00:09:21.543 Test: test_write_string_escapes ...passed 00:09:21.543 Test: test_write_string_utf16le ...passed 00:09:21.543 Test: test_write_number_int32 ...passed 00:09:21.543 Test: test_write_number_uint32 ...passed 00:09:21.543 Test: test_write_number_uint128 ...passed 00:09:21.543 Test: test_write_string_number_uint128 ...passed 00:09:21.543 Test: test_write_number_int64 ...passed 00:09:21.543 Test: test_write_number_uint64 ...passed 00:09:21.543 Test: test_write_number_double ...passed 00:09:21.543 Test: test_write_uuid ...passed 00:09:21.543 Test: test_write_array ...passed 00:09:21.543 Test: test_write_object ...passed 00:09:21.543 Test: test_write_nesting ...passed 00:09:21.543 Test: test_write_val ...passed 00:09:21.543 00:09:21.543 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.543 suites 1 1 n/a 0 0 00:09:21.543 tests 16 16 16 0 0 00:09:21.543 asserts 918 918 918 0 n/a 00:09:21.543 00:09:21.543 Elapsed time = 0.006 seconds 00:09:21.543 15:03:16 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:09:21.543 00:09:21.543 00:09:21.543 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.543 http://cunit.sourceforge.net/ 00:09:21.543 00:09:21.543 00:09:21.543 Suite: jsonrpc 00:09:21.543 Test: test_parse_request ...passed 00:09:21.543 Test: test_parse_request_streaming ...passed 00:09:21.543 00:09:21.543 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.543 suites 1 1 n/a 0 0 00:09:21.543 tests 2 2 2 0 0 00:09:21.543 asserts 289 289 289 0 n/a 00:09:21.543 00:09:21.543 Elapsed time = 0.005 seconds 00:09:21.543 00:09:21.543 real 0m0.163s 00:09:21.543 user 0m0.080s 00:09:21.543 sys 0m0.085s 00:09:21.543 15:03:16 unittest.unittest_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.543 15:03:16 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:09:21.543 ************************************ 00:09:21.543 END TEST unittest_json 00:09:21.543 
************************************ 00:09:21.543 15:03:16 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:21.543 15:03:16 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:09:21.543 15:03:16 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.543 15:03:16 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.543 15:03:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:21.543 ************************************ 00:09:21.543 START TEST unittest_rpc 00:09:21.543 ************************************ 00:09:21.543 15:03:16 unittest.unittest_rpc -- common/autotest_common.sh@1123 -- # unittest_rpc 00:09:21.543 15:03:16 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:09:21.543 00:09:21.543 00:09:21.543 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.543 http://cunit.sourceforge.net/ 00:09:21.543 00:09:21.543 00:09:21.543 Suite: rpc 00:09:21.543 Test: test_jsonrpc_handler ...passed 00:09:21.543 Test: test_spdk_rpc_is_method_allowed ...passed 00:09:21.543 Test: test_rpc_get_methods ...[2024-07-23 15:03:16.865351] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:09:21.543 passed 00:09:21.543 Test: test_rpc_spdk_get_version ...passed 00:09:21.543 Test: test_spdk_rpc_listen_close ...passed 00:09:21.543 Test: test_rpc_run_multiple_servers ...passed 00:09:21.543 00:09:21.543 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.543 suites 1 1 n/a 0 0 00:09:21.543 tests 6 6 6 0 0 00:09:21.543 asserts 23 23 23 0 n/a 00:09:21.543 00:09:21.543 Elapsed time = 0.001 seconds 00:09:21.543 00:09:21.543 real 0m0.036s 00:09:21.543 user 0m0.016s 00:09:21.543 sys 0m0.020s 00:09:21.543 15:03:16 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.543 15:03:16 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.543 ************************************ 00:09:21.543 END TEST unittest_rpc 00:09:21.543 ************************************ 00:09:21.543 15:03:16 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:21.543 15:03:16 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:09:21.543 15:03:16 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.543 15:03:16 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.543 15:03:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:21.543 ************************************ 00:09:21.543 START TEST unittest_notify 00:09:21.543 ************************************ 00:09:21.543 15:03:16 unittest.unittest_notify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:09:21.543 00:09:21.543 00:09:21.543 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.543 http://cunit.sourceforge.net/ 00:09:21.543 00:09:21.543 00:09:21.543 Suite: app_suite 00:09:21.543 Test: notify ...passed 00:09:21.543 00:09:21.543 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.543 suites 1 1 n/a 0 0 00:09:21.543 tests 1 1 1 0 0 00:09:21.543 asserts 13 13 13 0 n/a 00:09:21.543 00:09:21.543 Elapsed time = 0.000 seconds 00:09:21.543 00:09:21.543 real 0m0.031s 00:09:21.543 user 0m0.014s 00:09:21.543 sys 0m0.018s 00:09:21.543 15:03:16 unittest.unittest_notify -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:09:21.543 15:03:16 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:09:21.543 ************************************ 00:09:21.543 END TEST unittest_notify 00:09:21.543 ************************************ 00:09:21.802 15:03:17 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:21.802 15:03:17 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:09:21.802 15:03:17 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.802 15:03:17 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.802 15:03:17 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:21.802 ************************************ 00:09:21.802 START TEST unittest_nvme 00:09:21.802 ************************************ 00:09:21.802 15:03:17 unittest.unittest_nvme -- common/autotest_common.sh@1123 -- # unittest_nvme 00:09:21.802 15:03:17 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:09:21.802 00:09:21.802 00:09:21.802 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.802 http://cunit.sourceforge.net/ 00:09:21.802 00:09:21.802 00:09:21.802 Suite: nvme 00:09:21.802 Test: test_opc_data_transfer ...passed 00:09:21.802 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:09:21.802 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:09:21.802 Test: test_trid_parse_and_compare ...[2024-07-23 15:03:17.030724] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:09:21.802 [2024-07-23 15:03:17.030956] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:21.802 [2024-07-23 15:03:17.030996] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1211:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:09:21.802 [2024-07-23 15:03:17.031025] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:21.802 [2024-07-23 15:03:17.031058] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:09:21.802 [2024-07-23 15:03:17.031086] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:21.802 passed 00:09:21.802 Test: test_trid_trtype_str ...passed 00:09:21.802 Test: test_trid_adrfam_str ...passed 00:09:21.802 Test: test_nvme_ctrlr_probe ...passed 00:09:21.802 Test: test_spdk_nvme_probe ...[2024-07-23 15:03:17.031391] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:09:21.802 [2024-07-23 15:03:17.031473] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:21.803 [2024-07-23 15:03:17.031505] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:09:21.803 [2024-07-23 15:03:17.031611] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:09:21.803 passed 00:09:21.803 Test: test_spdk_nvme_connect ...[2024-07-23 15:03:17.031648] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:09:21.803 [2024-07-23 15:03:17.031741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 
00:09:21.803 passed 00:09:21.803 Test: test_nvme_ctrlr_probe_internal ...[2024-07-23 15:03:17.032155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:21.803 [2024-07-23 15:03:17.032317] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:09:21.803 [2024-07-23 15:03:17.032345] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:09:21.803 passed 00:09:21.803 Test: test_nvme_init_controllers ...passed 00:09:21.803 Test: test_nvme_driver_init ...[2024-07-23 15:03:17.032438] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:09:21.803 [2024-07-23 15:03:17.032518] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:09:21.803 [2024-07-23 15:03:17.032558] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:21.803 [2024-07-23 15:03:17.141284] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:09:21.803 passed 00:09:21.803 Test: test_spdk_nvme_detach ...[2024-07-23 15:03:17.141531] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:09:21.803 passed 00:09:21.803 Test: test_nvme_completion_poll_cb ...passed 00:09:21.803 Test: test_nvme_user_copy_cmd_complete ...passed 00:09:21.803 Test: test_nvme_allocate_request_null ...passed 00:09:21.803 Test: test_nvme_allocate_request ...passed 00:09:21.803 Test: test_nvme_free_request ...passed 00:09:21.803 Test: test_nvme_allocate_request_user_copy ...passed 00:09:21.803 Test: test_nvme_robust_mutex_init_shared ...passed 00:09:21.803 Test: test_nvme_request_check_timeout ...passed 00:09:21.803 Test: test_nvme_wait_for_completion ...passed 00:09:21.803 Test: test_spdk_nvme_parse_func ...passed 00:09:21.803 Test: test_spdk_nvme_detach_async ...passed 00:09:21.803 Test: test_nvme_parse_addr ...[2024-07-23 15:03:17.143134] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1635:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:09:21.803 passed 00:09:21.803 00:09:21.803 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.803 suites 1 1 n/a 0 0 00:09:21.803 tests 25 25 25 0 0 00:09:21.803 asserts 326 326 326 0 n/a 00:09:21.803 00:09:21.803 Elapsed time = 0.006 seconds 00:09:21.803 15:03:17 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:09:21.803 00:09:21.803 00:09:21.803 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.803 http://cunit.sourceforge.net/ 00:09:21.803 00:09:21.803 00:09:21.803 Suite: nvme_ctrlr 00:09:21.803 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-23 15:03:17.180925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:21.803 passed 00:09:21.803 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-23 15:03:17.182677] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:21.803 passed 00:09:21.803 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-23 15:03:17.183922] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:21.803 passed 00:09:21.803 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-23 15:03:17.185111] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:21.803 passed 00:09:21.803 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-23 15:03:17.186312] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:21.803 [2024-07-23 15:03:17.187427] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-23 15:03:17.188546] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-23 15:03:17.189658] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:21.803 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-23 15:03:17.191976] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:21.803 [2024-07-23 15:03:17.194179] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-23 15:03:17.195299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:21.803 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-23 15:03:17.197663] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:21.803 [2024-07-23 15:03:17.198834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-23 15:03:17.201073] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:21.803 Test: test_nvme_ctrlr_init_delay ...[2024-07-23 15:03:17.203522] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:21.803 passed 00:09:21.803 Test: test_alloc_io_qpair_rr_1 ...[2024-07-23 15:03:17.204829] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:21.803 [2024-07-23 15:03:17.205053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:09:21.803 [2024-07-23 15:03:17.205152] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:21.803 passed 00:09:21.803 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:09:21.803 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:09:21.803 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-23 15:03:17.205205] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 
394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:21.803 [2024-07-23 15:03:17.205240] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:21.803 [2024-07-23 15:03:17.205371] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:21.803 passed 00:09:21.803 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-23 15:03:17.205570] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:21.803 [2024-07-23 15:03:17.205691] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:09:21.803 passed 00:09:21.803 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-23 15:03:17.205938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4993:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:09:21.803 [2024-07-23 15:03:17.206019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:09:21.803 passed 00:09:21.803 Test: test_nvme_ctrlr_fail ...[2024-07-23 15:03:17.206101] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5070:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:09:21.803 [2024-07-23 15:03:17.206156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:09:21.803 [2024-07-23 15:03:17.206223] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
00:09:21.803 passed 00:09:21.803 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:09:21.803 Test: test_nvme_ctrlr_set_supported_features ...passed 00:09:21.803 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-23 15:03:17.206321] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:21.803 passed 00:09:21.803 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:09:21.803 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-23 15:03:17.207646] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:22.062 passed 00:09:22.062 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:09:22.062 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:09:22.062 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:09:22.062 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-23 15:03:17.477881] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:22.062 passed 00:09:22.062 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-23 15:03:17.484753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:22.062 passed 00:09:22.062 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-23 15:03:17.485987] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:22.062 [2024-07-23 15:03:17.486045] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3002:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:09:22.062 passed 00:09:22.062 Test: test_alloc_io_qpair_fail ...[2024-07-23 15:03:17.487180] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:22.062 passed 00:09:22.062 Test: test_nvme_ctrlr_add_remove_process ...[2024-07-23 15:03:17.487251] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:09:22.062 passed 00:09:22.062 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:09:22.062 Test: test_nvme_ctrlr_set_state ...passed 00:09:22.062 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-23 15:03:17.487448] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1546:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:09:22.062 [2024-07-23 15:03:17.487515] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:22.320 passed 00:09:22.320 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-23 15:03:17.509366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:22.320 passed 00:09:22.320 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-23 15:03:17.551195] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:22.320 passed 00:09:22.320 Test: test_nvme_ctrlr_reset ...[2024-07-23 15:03:17.552723] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:22.320 passed 00:09:22.320 Test: test_nvme_ctrlr_aer_callback ...[2024-07-23 15:03:17.553053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:22.320 passed 00:09:22.320 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-23 15:03:17.554411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:22.320 passed 00:09:22.320 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:09:22.320 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:09:22.320 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-23 15:03:17.556167] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:22.320 passed 00:09:22.320 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:09:22.320 Test: test_nvme_ctrlr_ana_resize ...[2024-07-23 15:03:17.557524] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:22.320 passed 00:09:22.320 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:09:22.320 Test: test_nvme_transport_ctrlr_ready ...[2024-07-23 15:03:17.559055] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:09:22.320 passed 00:09:22.320 Test: test_nvme_ctrlr_disable ...[2024-07-23 15:03:17.559094] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4204:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:09:22.320 [2024-07-23 15:03:17.559136] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:22.320 passed 00:09:22.320 00:09:22.320 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.320 suites 1 1 n/a 0 0 00:09:22.320 tests 44 44 44 0 0 00:09:22.320 asserts 10434 10434 10434 0 n/a 00:09:22.320 00:09:22.320 Elapsed time = 0.338 seconds 00:09:22.321 15:03:17 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:09:22.321 00:09:22.321 00:09:22.321 CUnit - A unit testing framework 
for C - Version 2.1-3 00:09:22.321 http://cunit.sourceforge.net/ 00:09:22.321 00:09:22.321 00:09:22.321 Suite: nvme_ctrlr_cmd 00:09:22.321 Test: test_get_log_pages ...passed 00:09:22.321 Test: test_set_feature_cmd ...passed 00:09:22.321 Test: test_set_feature_ns_cmd ...passed 00:09:22.321 Test: test_get_feature_cmd ...passed 00:09:22.321 Test: test_get_feature_ns_cmd ...passed 00:09:22.321 Test: test_abort_cmd ...passed 00:09:22.321 Test: test_set_host_id_cmds ...[2024-07-23 15:03:17.608645] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:09:22.321 passed 00:09:22.321 Test: test_io_cmd_raw_no_payload_build ...passed 00:09:22.321 Test: test_io_raw_cmd ...passed 00:09:22.321 Test: test_io_raw_cmd_with_md ...passed 00:09:22.321 Test: test_namespace_attach ...passed 00:09:22.321 Test: test_namespace_detach ...passed 00:09:22.321 Test: test_namespace_create ...passed 00:09:22.321 Test: test_namespace_delete ...passed 00:09:22.321 Test: test_doorbell_buffer_config ...passed 00:09:22.321 Test: test_format_nvme ...passed 00:09:22.321 Test: test_fw_commit ...passed 00:09:22.321 Test: test_fw_image_download ...passed 00:09:22.321 Test: test_sanitize ...passed 00:09:22.321 Test: test_directive ...passed 00:09:22.321 Test: test_nvme_request_add_abort ...passed 00:09:22.321 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:09:22.321 Test: test_nvme_ctrlr_cmd_identify ...passed 00:09:22.321 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:09:22.321 00:09:22.321 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.321 suites 1 1 n/a 0 0 00:09:22.321 tests 24 24 24 0 0 00:09:22.321 asserts 198 198 198 0 n/a 00:09:22.321 00:09:22.321 Elapsed time = 0.001 seconds 00:09:22.321 15:03:17 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:09:22.321 00:09:22.321 00:09:22.321 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.321 http://cunit.sourceforge.net/ 00:09:22.321 00:09:22.321 00:09:22.321 Suite: nvme_ctrlr_cmd 00:09:22.321 Test: test_geometry_cmd ...passed 00:09:22.321 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:09:22.321 00:09:22.321 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.321 suites 1 1 n/a 0 0 00:09:22.321 tests 2 2 2 0 0 00:09:22.321 asserts 7 7 7 0 n/a 00:09:22.321 00:09:22.321 Elapsed time = 0.000 seconds 00:09:22.321 15:03:17 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:09:22.321 00:09:22.321 00:09:22.321 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.321 http://cunit.sourceforge.net/ 00:09:22.321 00:09:22.321 00:09:22.321 Suite: nvme 00:09:22.321 Test: test_nvme_ns_construct ...passed 00:09:22.321 Test: test_nvme_ns_uuid ...passed 00:09:22.321 Test: test_nvme_ns_csi ...passed 00:09:22.321 Test: test_nvme_ns_data ...passed 00:09:22.321 Test: test_nvme_ns_set_identify_data ...passed 00:09:22.321 Test: test_spdk_nvme_ns_get_values ...passed 00:09:22.321 Test: test_spdk_nvme_ns_is_active ...passed 00:09:22.321 Test: spdk_nvme_ns_supports ...passed 00:09:22.321 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:09:22.321 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:09:22.321 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:09:22.321 Test: test_nvme_ns_find_id_desc ...passed 00:09:22.321 00:09:22.321 Run Summary: Type Total Ran 
Passed Failed Inactive 00:09:22.321 suites 1 1 n/a 0 0 00:09:22.321 tests 12 12 12 0 0 00:09:22.321 asserts 95 95 95 0 n/a 00:09:22.321 00:09:22.321 Elapsed time = 0.001 seconds 00:09:22.321 15:03:17 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:09:22.321 00:09:22.321 00:09:22.321 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.321 http://cunit.sourceforge.net/ 00:09:22.321 00:09:22.321 00:09:22.321 Suite: nvme_ns_cmd 00:09:22.321 Test: split_test ...passed 00:09:22.321 Test: split_test2 ...passed 00:09:22.321 Test: split_test3 ...passed 00:09:22.321 Test: split_test4 ...passed 00:09:22.321 Test: test_nvme_ns_cmd_flush ...passed 00:09:22.321 Test: test_nvme_ns_cmd_dataset_management ...passed 00:09:22.321 Test: test_nvme_ns_cmd_copy ...passed 00:09:22.321 Test: test_io_flags ...[2024-07-23 15:03:17.711732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:09:22.321 passed 00:09:22.321 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:09:22.321 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:09:22.321 Test: test_nvme_ns_cmd_reservation_register ...passed 00:09:22.321 Test: test_nvme_ns_cmd_reservation_release ...passed 00:09:22.321 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:09:22.321 Test: test_nvme_ns_cmd_reservation_report ...passed 00:09:22.321 Test: test_cmd_child_request ...passed 00:09:22.321 Test: test_nvme_ns_cmd_readv ...passed 00:09:22.321 Test: test_nvme_ns_cmd_read_with_md ...passed 00:09:22.321 Test: test_nvme_ns_cmd_writev ...[2024-07-23 15:03:17.713838] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:09:22.321 passed 00:09:22.321 Test: test_nvme_ns_cmd_write_with_md ...passed 00:09:22.321 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:09:22.321 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:09:22.321 Test: test_nvme_ns_cmd_comparev ...passed 00:09:22.321 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:09:22.321 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:09:22.321 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:09:22.321 Test: test_nvme_ns_cmd_setup_request ...passed 00:09:22.321 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:09:22.321 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:09:22.321 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-07-23 15:03:17.717299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:09:22.321 passed 00:09:22.321 Test: test_nvme_ns_cmd_verify ...passed 00:09:22.321 Test: test_nvme_ns_cmd_io_mgmt_send ...[2024-07-23 15:03:17.717493] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:09:22.321 passed 00:09:22.321 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:09:22.321 00:09:22.321 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.321 suites 1 1 n/a 0 0 00:09:22.321 tests 32 32 32 0 0 00:09:22.321 asserts 550 550 550 0 n/a 00:09:22.321 00:09:22.321 Elapsed time = 0.008 seconds 00:09:22.321 15:03:17 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:09:22.580 00:09:22.580 00:09:22.580 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.580 http://cunit.sourceforge.net/
00:09:22.580 00:09:22.580 00:09:22.580 Suite: nvme_ns_cmd 00:09:22.580 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:09:22.580 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:09:22.580 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:09:22.580 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:09:22.580 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:09:22.580 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:09:22.580 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:09:22.580 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:09:22.580 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:09:22.580 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:09:22.580 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:09:22.580 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:09:22.580 00:09:22.580 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.580 suites 1 1 n/a 0 0 00:09:22.580 tests 12 12 12 0 0 00:09:22.580 asserts 123 123 123 0 n/a 00:09:22.580 00:09:22.580 Elapsed time = 0.001 seconds 00:09:22.580 15:03:17 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:09:22.580 00:09:22.580 00:09:22.580 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.580 http://cunit.sourceforge.net/ 00:09:22.580 00:09:22.580 00:09:22.580 Suite: nvme_qpair 00:09:22.580 Test: test3 ...passed 00:09:22.580 Test: test_ctrlr_failed ...passed 00:09:22.580 Test: struct_packing ...passed 00:09:22.580 Test: test_nvme_qpair_process_completions ...[2024-07-23 15:03:17.792819] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:22.580 [2024-07-23 15:03:17.793296] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:22.580 [2024-07-23 15:03:17.793462] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:09:22.580 [2024-07-23 15:03:17.793502] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:09:22.580 [2024-07-23 15:03:17.794004] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:22.580 passed 00:09:22.580 Test: test_nvme_completion_is_retry ...passed 00:09:22.580 Test: test_get_status_string ...passed 00:09:22.580 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:09:22.580 Test: test_nvme_qpair_submit_request ...passed 00:09:22.580 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:09:22.580 Test: test_nvme_qpair_manual_complete_request ...passed 00:09:22.580 Test: test_nvme_qpair_init_deinit ...passed 00:09:22.580 Test: test_nvme_get_sgl_print_info ...passed 00:09:22.580 00:09:22.580 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.580 suites 1 1 n/a 0 0 00:09:22.580 tests 12 12 12 0 0 00:09:22.580 asserts 154 154 154 0 n/a 00:09:22.580 00:09:22.580 Elapsed time = 0.002 seconds 00:09:22.580 15:03:17 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:09:22.580 00:09:22.580 00:09:22.580 CUnit - A unit testing
framework for C - Version 2.1-3 00:09:22.580 http://cunit.sourceforge.net/ 00:09:22.580 00:09:22.580 00:09:22.580 Suite: nvme_pcie 00:09:22.581 Test: test_prp_list_append ...[2024-07-23 15:03:17.831049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1206:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:09:22.581 [2024-07-23 15:03:17.831379] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:09:22.581 [2024-07-23 15:03:17.831438] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1225:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:09:22.581 [2024-07-23 15:03:17.831629] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1219:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:09:22.581 passed 00:09:22.581 Test: test_nvme_pcie_hotplug_monitor ...[2024-07-23 15:03:17.831732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1219:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:09:22.581 passed 00:09:22.581 Test: test_shadow_doorbell_update ...passed 00:09:22.581 Test: test_build_contig_hw_sgl_request ...passed 00:09:22.581 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:09:22.581 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:09:22.581 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:09:22.581 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:09:22.581 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:09:22.581 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:09:22.581 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:09:22.581 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:09:22.581 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:09:22.581 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:09:22.581 00:09:22.581 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.581 suites 1 1 n/a 0 0 00:09:22.581 tests 14 14 14 0 0 00:09:22.581 asserts 235 235 235 0 n/a 00:09:22.581 00:09:22.581 Elapsed time = 0.002 seconds[2024-07-23 15:03:17.832239] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1206:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:09:22.581 [2024-07-23 15:03:17.832443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:09:22.581 [2024-07-23 15:03:17.832536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:09:22.581 [2024-07-23 15:03:17.832616] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:09:22.581 [2024-07-23 15:03:17.832686] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:09:22.581 00:09:22.581 15:03:17 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:09:22.581 00:09:22.581 00:09:22.581 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.581 http://cunit.sourceforge.net/ 00:09:22.581 00:09:22.581 00:09:22.581 Suite: nvme_ns_cmd 00:09:22.581 Test: nvme_poll_group_create_test ...passed 00:09:22.581 Test: nvme_poll_group_add_remove_test ...passed 00:09:22.581 Test: nvme_poll_group_process_completions ...passed 00:09:22.581 Test: nvme_poll_group_destroy_test ...passed 00:09:22.581 Test: nvme_poll_group_get_free_stats ...passed 00:09:22.581 00:09:22.581 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.581 suites 1 1 n/a 0 0 00:09:22.581 tests 5 5 5 0 0 00:09:22.581 asserts 75 75 75 0 n/a 00:09:22.581 00:09:22.581 Elapsed time = 0.000 seconds 00:09:22.581 15:03:17 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:09:22.581 00:09:22.581 00:09:22.581 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.581 http://cunit.sourceforge.net/ 00:09:22.581 00:09:22.581 00:09:22.581 Suite: nvme_quirks 00:09:22.581 Test: test_nvme_quirks_striping ...passed 00:09:22.581 00:09:22.581 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.581 suites 1 1 n/a 0 0 00:09:22.581 tests 1 1 1 0 0 00:09:22.581 asserts 5 5 5 0 n/a 00:09:22.581 00:09:22.581 Elapsed time = 0.000 seconds 00:09:22.581 15:03:17 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:09:22.581 00:09:22.581 00:09:22.581 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.581 http://cunit.sourceforge.net/ 00:09:22.581 00:09:22.581 00:09:22.581 Suite: nvme_tcp 00:09:22.581 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:09:22.581 Test: test_nvme_tcp_build_iovs ...passed 00:09:22.581 Test: test_nvme_tcp_build_sgl_request ...passed 00:09:22.581 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:09:22.581 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:09:22.581 Test: test_nvme_tcp_req_complete_safe ...passed 00:09:22.581 Test: test_nvme_tcp_req_get ...passed 00:09:22.581 Test: test_nvme_tcp_req_init ...passed 00:09:22.581 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:09:22.581 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:09:22.581 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:09:22.581 Test: test_nvme_tcp_alloc_reqs ...passed 00:09:22.581 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:09:22.581 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-23 15:03:17.940132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x71f2bba0d2e0, and the iovcnt=16, remaining_size=28672 00:09:22.581 [2024-07-23 15:03:17.940908] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x71f2bb609030 is same with the state(6) to be set 00:09:22.581 [2024-07-23 15:03:17.941433] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71f2bb909070 is same with the state(5) to be set 00:09:22.581 [2024-07-23 15:03:17.941515] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x71f2bb80a740 00:09:22.581 [2024-07-23 15:03:17.941555] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1249:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:09:22.581 [2024-07-23 15:03:17.941601] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71f2bb80a070 is same with the state(5) to be set 00:09:22.581 [2024-07-23 15:03:17.941643] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:09:22.581 [2024-07-23 15:03:17.941684] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71f2bb80a070 is same with the state(5) to be set 00:09:22.581 [2024-07-23 15:03:17.941724] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:09:22.581 [2024-07-23 15:03:17.941774] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71f2bb80a070 is same with the state(5) to be set 00:09:22.581 [2024-07-23 15:03:17.941831] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71f2bb80a070 is same with the state(5) to be set 00:09:22.581 passed 00:09:22.581 Test: test_nvme_tcp_qpair_connect_sock ...passed 00:09:22.581 Test: test_nvme_tcp_qpair_icreq_send ...[2024-07-23 15:03:17.941881] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71f2bb80a070 is same with the state(5) to be set 00:09:22.581 [2024-07-23 15:03:17.942300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71f2bb80a070 is same with the state(5) to be set 00:09:22.581 [2024-07-23 15:03:17.942365] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71f2bb80a070 is same with the state(5) to be set 00:09:22.581 [2024-07-23 15:03:17.942409] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71f2bb80a070 is same with the state(5) to be set 00:09:22.581 [2024-07-23 15:03:17.942685] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:09:22.581 [2024-07-23 15:03:17.942736] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:09:22.581 [2024-07-23 15:03:17.943212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:09:22.581 passed 00:09:22.581 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:09:22.581 Test: test_nvme_tcp_icresp_handle ...[2024-07-23 15:03:17.943345] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of 
pdu(0x71f2bb80b5c0): PDU Sequence Error 00:09:22.581 [2024-07-23 15:03:17.943421] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:09:22.581 [2024-07-23 15:03:17.943467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:09:22.581 [2024-07-23 15:03:17.943502] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71f2bb90b070 is same with the state(5) to be set 00:09:22.581 passed 00:09:22.581 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:09:22.581 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:09:22.581 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:09:22.581 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...passed 00:09:22.581 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-23 15:03:17.943675] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:09:22.581 [2024-07-23 15:03:17.944131] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71f2bb90b070 is same with the state(5) to be set 00:09:22.581 [2024-07-23 15:03:17.944181] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71f2bb90b070 is same with the state(0) to be set 00:09:22.581 [2024-07-23 15:03:17.944245] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x71f2bb80c5c0): PDU Sequence Error 00:09:22.581 [2024-07-23 15:03:17.944355] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x71f2bb90d200 00:09:22.581 [2024-07-23 15:03:17.944558] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x71f2bba294a0, errno=0, rc=0 00:09:22.581 [2024-07-23 15:03:17.944594] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71f2bba294a0 is same with the state(5) to be set 00:09:22.581 [2024-07-23 15:03:17.944630] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71f2bba294a0 is same with the state(5) to be set 00:09:22.582 [2024-07-23 15:03:17.944686] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71f2bba294a0 (0): Success 00:09:22.582 [2024-07-23 15:03:17.944735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71f2bba294a0 (0): Success 00:09:22.840 [2024-07-23 15:03:18.094728] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed passed 00:09:22.840 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:09:22.840 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:09:22.840 Test: test_nvme_tcp_ctrlr_construct ...passed 00:09:22.840 Test: test_nvme_tcp_qpair_submit_request ...to create qpair with size 0. Minimum queue size is 2. 00:09:22.840 [2024-07-23 15:03:18.095255] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:09:22.840 [2024-07-23 15:03:18.095588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:22.840 [2024-07-23 15:03:18.095626] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:22.840 [2024-07-23 15:03:18.095857] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:09:22.840 [2024-07-23 15:03:18.095892] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:22.840 [2024-07-23 15:03:18.095976] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:09:22.840 [2024-07-23 15:03:18.096029] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:22.840 [2024-07-23 15:03:18.096133] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x515000001980 with addr=192.168.1.78, port=23 00:09:22.840 [2024-07-23 15:03:18.096169] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:22.840 passed 00:09:22.840 00:09:22.840 [2024-07-23 15:03:18.096329] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x514000000c40, and the iovcnt=1, remaining_size=1024 00:09:22.840 [2024-07-23 15:03:18.096352] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:09:22.840 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.840 suites 1 1 n/a 0 0 00:09:22.840 tests 27 27 27 0 0 00:09:22.840 asserts 624 624 624 0 n/a 00:09:22.840 00:09:22.840 Elapsed time = 0.156 seconds 00:09:22.840 15:03:18 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:09:22.840 00:09:22.840 00:09:22.840 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.840 http://cunit.sourceforge.net/ 00:09:22.840 00:09:22.840 00:09:22.840 Suite: nvme_transport 00:09:22.840 Test: test_nvme_get_transport ...passed 00:09:22.840 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:09:22.840 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:09:22.840 Test: test_nvme_transport_poll_group_add_remove ...passed 00:09:22.840 Test: test_ctrlr_get_memory_domains ...passed 00:09:22.840 00:09:22.840 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.840 suites 1 1 n/a 0 0 00:09:22.840 tests 5 5 5 0 0 00:09:22.840 asserts 28 28 28 0 n/a 00:09:22.840 00:09:22.840 Elapsed time = 0.000 seconds 00:09:22.840 15:03:18 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:09:22.840 00:09:22.840 00:09:22.840 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.840 http://cunit.sourceforge.net/ 00:09:22.840 00:09:22.840 00:09:22.840 Suite: nvme_io_msg 00:09:22.840 Test: test_nvme_io_msg_send ...passed 00:09:22.840 Test: test_nvme_io_msg_process ...passed 00:09:22.840 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:09:22.840 00:09:22.840 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.840 suites 1 1 
n/a 0 0 00:09:22.840 tests 3 3 3 0 0 00:09:22.840 asserts 56 56 56 0 n/a 00:09:22.840 00:09:22.840 Elapsed time = 0.000 seconds 00:09:22.840 15:03:18 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:09:22.840 00:09:22.840 00:09:22.840 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.840 http://cunit.sourceforge.net/ 00:09:22.840 00:09:22.840 00:09:22.840 Suite: nvme_pcie_common 00:09:22.840 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:09:22.840 Test: test_nvme_pcie_qpair_construct_destroy ...[2024-07-23 15:03:18.203593] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:09:22.840 passed 00:09:22.840 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:09:22.840 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:09:22.840 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-23 15:03:18.204469] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 505:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:09:22.840 [2024-07-23 15:03:18.204517] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 458:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:09:22.840 [2024-07-23 15:03:18.204557] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 552:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:09:22.840 passed 00:09:22.840 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:09:22.840 00:09:22.840 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.840 suites 1 1 n/a 0 0 00:09:22.840 tests 6 6 6 0 0 00:09:22.840 asserts 148 148 148 0 n/a 00:09:22.840 00:09:22.840 Elapsed time = 0.002 seconds 00:09:22.840 [2024-07-23 15:03:18.205132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1804:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:22.840 [2024-07-23 15:03:18.205183] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1804:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:22.840 15:03:18 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:09:22.840 00:09:22.840 00:09:22.840 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.840 http://cunit.sourceforge.net/ 00:09:22.840 00:09:22.840 00:09:22.841 Suite: nvme_fabric 00:09:22.841 Test: test_nvme_fabric_prop_set_cmd ...passed 00:09:22.841 Test: test_nvme_fabric_prop_get_cmd ...passed 00:09:22.841 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:09:22.841 Test: test_nvme_fabric_discover_probe ...passed 00:09:22.841 Test: test_nvme_fabric_qpair_connect ...passed 00:09:22.841 00:09:22.841 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.841 suites 1 1 n/a 0 0 00:09:22.841 tests 5 5 5 0 0 00:09:22.841 asserts 60 60 60 0 n/a 00:09:22.841 00:09:22.841 Elapsed time = 0.001 seconds 00:09:22.841 [2024-07-23 15:03:18.242395] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:09:22.841 15:03:18 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:09:23.099 00:09:23.099 00:09:23.099 CUnit - A unit testing framework 
for C - Version 2.1-3 00:09:23.099 http://cunit.sourceforge.net/ 00:09:23.099 00:09:23.099 00:09:23.099 Suite: nvme_opal 00:09:23.099 Test: test_opal_nvme_security_recv_send_done ...passed 00:09:23.099 Test: test_opal_add_short_atom_header ...passed 00:09:23.099 00:09:23.099 Run Summary: Type Total Ran Passed Failed Inactive 00:09:23.099 suites 1 1 n/a 0 0 00:09:23.099 tests 2 2 2 0 0 00:09:23.099 asserts 22 22 22 0 n/a 00:09:23.099 00:09:23.099 Elapsed time = 0.000 seconds[2024-07-23 15:03:18.278756] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:09:23.099 00:09:23.099 ************************************ 00:09:23.099 END TEST unittest_nvme 00:09:23.099 ************************************ 00:09:23.099 00:09:23.099 real 0m1.285s 00:09:23.099 user 0m0.587s 00:09:23.099 sys 0m0.554s 00:09:23.099 15:03:18 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:23.099 15:03:18 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.099 15:03:18 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:23.099 15:03:18 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:09:23.099 15:03:18 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:23.099 15:03:18 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:23.100 15:03:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:23.100 ************************************ 00:09:23.100 START TEST unittest_log 00:09:23.100 ************************************ 00:09:23.100 15:03:18 unittest.unittest_log -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:09:23.100 00:09:23.100 00:09:23.100 CUnit - A unit testing framework for C - Version 2.1-3 00:09:23.100 http://cunit.sourceforge.net/ 00:09:23.100 00:09:23.100 00:09:23.100 Suite: log 00:09:23.100 Test: log_test ...[2024-07-23 15:03:18.363061] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:09:23.100 [2024-07-23 15:03:18.363320] log_ut.c: 57:log_test: *DEBUG*: log test 00:09:23.100 log dump test: 00:09:23.100 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:09:23.100 spdk dump test: 00:09:23.100 passed 00:09:23.100 Test: deprecation ...00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:09:23.100 spdk dump test: 00:09:23.100 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:09:23.100 00000010 65 20 63 68 61 72 73 e chars 00:09:24.034 passed 00:09:24.034 00:09:24.034 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.034 suites 1 1 n/a 0 0 00:09:24.034 tests 2 2 2 0 0 00:09:24.034 asserts 73 73 73 0 n/a 00:09:24.034 00:09:24.034 Elapsed time = 0.001 seconds 00:09:24.034 ************************************ 00:09:24.034 END TEST unittest_log 00:09:24.034 ************************************ 00:09:24.034 00:09:24.034 real 0m1.038s 00:09:24.034 user 0m0.014s 00:09:24.034 sys 0m0.024s 00:09:24.034 15:03:19 unittest.unittest_log -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.034 15:03:19 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:09:24.034 15:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:24.034 15:03:19 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:09:24.034 15:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:09:24.034 15:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.034 15:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:24.034 ************************************ 00:09:24.034 START TEST unittest_lvol 00:09:24.034 ************************************ 00:09:24.034 15:03:19 unittest.unittest_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:09:24.034 00:09:24.034 00:09:24.034 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.034 http://cunit.sourceforge.net/ 00:09:24.034 00:09:24.034 00:09:24.034 Suite: lvol 00:09:24.034 Test: lvs_init_unload_success ...[2024-07-23 15:03:19.458816] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:09:24.034 passed 00:09:24.034 Test: lvs_init_destroy_success ...[2024-07-23 15:03:19.459468] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:09:24.034 passed 00:09:24.034 Test: lvs_init_opts_success ...passed 00:09:24.034 Test: lvs_unload_lvs_is_null_fail ...passed 00:09:24.034 Test: lvs_names ...[2024-07-23 15:03:19.459760] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:09:24.034 [2024-07-23 15:03:19.459860] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:09:24.034 [2024-07-23 15:03:19.459911] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:09:24.034 [2024-07-23 15:03:19.460104] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:09:24.034 passed 00:09:24.034 Test: lvol_create_destroy_success ...passed 00:09:24.034 Test: lvol_create_fail ...[2024-07-23 15:03:19.460853] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:09:24.034 [2024-07-23 15:03:19.460994] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:09:24.034 passed 00:09:24.293 Test: lvol_destroy_fail ...passed 00:09:24.293 Test: lvol_close ...[2024-07-23 15:03:19.461384] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:09:24.293 [2024-07-23 15:03:19.461632] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:09:24.293 [2024-07-23 15:03:19.461691] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:09:24.293 passed 00:09:24.293 Test: lvol_resize ...passed 00:09:24.293 Test: lvol_set_read_only ...passed 00:09:24.293 Test: test_lvs_load ...[2024-07-23 15:03:19.462636] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:09:24.293 [2024-07-23 15:03:19.462703] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:09:24.293 passed 00:09:24.293 Test: lvols_load ...[2024-07-23 15:03:19.463037] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:09:24.293 [2024-07-23 15:03:19.463155] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:09:24.293 passed 00:09:24.293 Test: lvol_open ...passed 00:09:24.293 Test: lvol_snapshot ...passed 00:09:24.293 Test: 
lvol_snapshot_fail ...[2024-07-23 15:03:19.463921] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:09:24.293 passed 00:09:24.293 Test: lvol_clone ...passed 00:09:24.293 Test: lvol_clone_fail ...passed 00:09:24.293 Test: lvol_iter_clones ...[2024-07-23 15:03:19.464490] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:09:24.293 passed 00:09:24.294 Test: lvol_refcnt ...[2024-07-23 15:03:19.465157] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol d080bb70-c001-41a6-9c44-43e34fdb8545 because it is still open 00:09:24.294 passed 00:09:24.294 Test: lvol_names ...[2024-07-23 15:03:19.465359] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:09:24.294 [2024-07-23 15:03:19.465475] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:24.294 passed 00:09:24.294 Test: lvol_create_thin_provisioned ...[2024-07-23 15:03:19.465714] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:09:24.294 passed 00:09:24.294 Test: lvol_rename ...[2024-07-23 15:03:19.466307] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:24.294 passed 00:09:24.294 Test: lvs_rename ...[2024-07-23 15:03:19.466416] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:09:24.294 [2024-07-23 15:03:19.466713] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:09:24.294 passed 00:09:24.294 Test: lvol_inflate ...passed 00:09:24.294 Test: lvol_decouple_parent ...[2024-07-23 15:03:19.467000] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:09:24.294 passed 00:09:24.294 Test: lvol_get_xattr ...[2024-07-23 15:03:19.467283] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:09:24.294 passed 00:09:24.294 Test: lvol_esnap_reload ...passed 00:09:24.294 Test: lvol_esnap_create_bad_args ...[2024-07-23 15:03:19.467942] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:09:24.294 [2024-07-23 15:03:19.467997] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:09:24.294 [2024-07-23 15:03:19.468036] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:09:24.294 [2024-07-23 15:03:19.468106] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:24.294 passed 00:09:24.294 Test: lvol_esnap_create_delete ...[2024-07-23 15:03:19.468265] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:09:24.294 passed 00:09:24.294 Test: lvol_esnap_load_esnaps ...passed 00:09:24.294 Test: lvol_esnap_missing ...[2024-07-23 15:03:19.468608] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:09:24.294 [2024-07-23 15:03:19.468760] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:09:24.294 [2024-07-23 15:03:19.468834] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:09:24.294 passed 00:09:24.294 Test: lvol_esnap_hotplug ... 00:09:24.294 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:09:24.294 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:09:24.294 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:09:24.294 [2024-07-23 15:03:19.469663] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol edba92ec-8db4-4f74-a598-7b0ac5a035e1: failed to create esnap bs_dev: error -12 00:09:24.294 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:09:24.294 [2024-07-23 15:03:19.469946] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 4904c3a9-fc72-4645-8be7-92d8258f94db: failed to create esnap bs_dev: error -12 00:09:24.294 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:09:24.294 [2024-07-23 15:03:19.470102] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol ed8487a5-5b41-4770-ac09-85dc35e9c2bc: failed to create esnap bs_dev: error -12 00:09:24.294 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:09:24.294 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:09:24.294 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:09:24.294 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:09:24.294 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:09:24.294 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:09:24.294 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:09:24.294 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:09:24.294 passed 00:09:24.294 Test: lvol_get_by ...passed 00:09:24.294 Test: lvol_shallow_copy ...[2024-07-23 15:03:19.471668] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:09:24.294 passed 00:09:24.294 Test: lvol_set_parent ...[2024-07-23 15:03:19.471736] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 
6ebf9181-bc25-4b4f-9d9e-4d134a0fb146 shallow copy, ext_dev must not be NULL 00:09:24.294 [2024-07-23 15:03:19.472049] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:09:24.294 [2024-07-23 15:03:19.472124] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:09:24.294 passed 00:09:24.294 Test: lvol_set_external_parent ...[2024-07-23 15:03:19.472366] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:09:24.294 [2024-07-23 15:03:19.472425] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:09:24.294 [2024-07-23 15:03:19.472463] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:09:24.294 passed 00:09:24.294 00:09:24.294 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.294 suites 1 1 n/a 0 0 00:09:24.294 tests 37 37 37 0 0 00:09:24.294 asserts 1505 1505 1505 0 n/a 00:09:24.294 00:09:24.294 Elapsed time = 0.014 seconds 00:09:24.294 00:09:24.294 real 0m0.066s 00:09:24.294 user 0m0.031s 00:09:24.294 sys 0m0.036s 00:09:24.294 15:03:19 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.294 15:03:19 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:24.294 ************************************ 00:09:24.294 END TEST unittest_lvol 00:09:24.294 ************************************ 00:09:24.294 15:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:24.294 15:03:19 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:24.294 15:03:19 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:09:24.294 15:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:24.294 15:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.294 15:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:24.294 ************************************ 00:09:24.294 START TEST unittest_nvme_rdma 00:09:24.294 ************************************ 00:09:24.294 15:03:19 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:09:24.294 00:09:24.294 00:09:24.294 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.294 http://cunit.sourceforge.net/ 00:09:24.294 00:09:24.294 00:09:24.294 Suite: nvme_rdma 00:09:24.294 Test: test_nvme_rdma_build_sgl_request ...[2024-07-23 15:03:19.581900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:09:24.294 [2024-07-23 15:03:19.582187] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1552:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:09:24.294 passed 00:09:24.294 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:09:24.294 Test: test_nvme_rdma_build_contig_request ...passed 00:09:24.294 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:09:24.294 Test: test_nvme_rdma_create_reqs ...[2024-07-23 15:03:19.582238] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1608:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors 
(64) exceeds ICD (60) 00:09:24.294 [2024-07-23 15:03:19.582356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1489:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:09:24.294 [2024-07-23 15:03:19.582502] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:09:24.294 passed 00:09:24.294 Test: test_nvme_rdma_create_rsps ...passed 00:09:24.294 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-23 15:03:19.583023] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:09:24.294 passed 00:09:24.294 Test: test_nvme_rdma_poller_create ...[2024-07-23 15:03:19.583250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:09:24.294 [2024-07-23 15:03:19.583290] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:09:24.294 passed 00:09:24.294 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:09:24.294 Test: test_nvme_rdma_ctrlr_construct ...[2024-07-23 15:03:19.583499] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:09:24.294 passed 00:09:24.294 Test: test_nvme_rdma_req_put_and_get ...passed 00:09:24.294 Test: test_nvme_rdma_req_init ...passed 00:09:24.294 Test: test_nvme_rdma_validate_cm_event ...[2024-07-23 15:03:19.583892] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:09:24.294 [2024-07-23 15:03:19.583930] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:09:24.294 passed 00:09:24.294 Test: test_nvme_rdma_qpair_init ...passed 00:09:24.294 Test: test_nvme_rdma_qpair_submit_request ...passed 00:09:24.294 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:09:24.294 Test: test_rdma_get_memory_translation ...[2024-07-23 15:03:19.584151] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:09:24.294 passed 00:09:24.294 Test: test_get_rdma_qpair_from_wc ...passed 00:09:24.294 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:09:24.294 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-23 15:03:19.584207] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:09:24.294 [2024-07-23 15:03:19.584322] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:24.294 [2024-07-23 15:03:19.584358] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:24.295 passed 00:09:24.295 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-23 15:03:19.584551] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:09:24.295 [2024-07-23 15:03:19.584620] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:09:24.295 [2024-07-23 15:03:19.584655] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x70608d913200 on poll group 0x50c000000040 00:09:24.295 [2024-07-23 15:03:19.584711] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:09:24.295 [2024-07-23 15:03:19.584755] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:09:24.295 [2024-07-23 15:03:19.584836] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable passed 00:09:24.295 00:09:24.295 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.295 suites 1 1 n/a 0 0 00:09:24.295 tests 21 21 21 0 0 00:09:24.295 asserts 397 397 397 0 n/a 00:09:24.295 00:09:24.295 Elapsed time = 0.003 seconds 00:09:24.295 to find a cq for qpair 0x70608d913200 on poll group 0x50c000000040 00:09:24.295 [2024-07-23 15:03:19.584925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:09:24.295 00:09:24.295 real 0m0.047s 00:09:24.295 user 0m0.026s 00:09:24.295 sys 0m0.022s 00:09:24.295 15:03:19 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.295 ************************************ 00:09:24.295 END TEST unittest_nvme_rdma 00:09:24.295 ************************************ 00:09:24.295 15:03:19 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:24.295 15:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:24.295 15:03:19 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:09:24.295 15:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:24.295 15:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.295 15:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:24.295 ************************************ 00:09:24.295 START TEST unittest_nvmf_transport 00:09:24.295 ************************************ 00:09:24.295 15:03:19 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:09:24.295 00:09:24.295 00:09:24.295 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.295 http://cunit.sourceforge.net/ 00:09:24.295 00:09:24.295 00:09:24.295 Suite: nvmf 00:09:24.295 Test: test_spdk_nvmf_transport_create ...[2024-07-23 15:03:19.679385] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 
00:09:24.295 [2024-07-23 15:03:19.679656] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:09:24.295 [2024-07-23 15:03:19.679709] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:09:24.295 [2024-07-23 15:03:19.679823] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:09:24.295 passed 00:09:24.295 Test: test_nvmf_transport_poll_group_create ...passed 00:09:24.295 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-23 15:03:19.680156] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 799:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 00:09:24.295 [2024-07-23 15:03:19.680188] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 804:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:09:24.295 [2024-07-23 15:03:19.680221] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 809:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:09:24.295 passed 00:09:24.295 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:09:24.295 00:09:24.295 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.295 suites 1 1 n/a 0 0 00:09:24.295 tests 4 4 4 0 0 00:09:24.295 asserts 49 49 49 0 n/a 00:09:24.295 00:09:24.295 Elapsed time = 0.001 seconds 00:09:24.295 00:09:24.295 real 0m0.041s 00:09:24.295 user 0m0.019s 00:09:24.295 sys 0m0.022s 00:09:24.295 15:03:19 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.295 15:03:19 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:09:24.295 ************************************ 00:09:24.295 END TEST unittest_nvmf_transport 00:09:24.295 ************************************ 00:09:24.553 15:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:24.553 15:03:19 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:09:24.553 15:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:24.553 15:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.553 15:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:24.553 ************************************ 00:09:24.553 START TEST unittest_rdma 00:09:24.553 ************************************ 00:09:24.553 15:03:19 unittest.unittest_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:09:24.553 00:09:24.553 00:09:24.553 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.553 http://cunit.sourceforge.net/ 00:09:24.553 00:09:24.553 00:09:24.553 Suite: rdma_common 00:09:24.553 Test: test_spdk_rdma_pd ...[2024-07-23 15:03:19.774081] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:09:24.553 [2024-07-23 15:03:19.774462] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:09:24.553 passed 00:09:24.553 00:09:24.553 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.553 suites 1 1 n/a 0 0 00:09:24.553 tests 1 1 1 0 0 00:09:24.553 asserts 31 31 31 0 n/a 00:09:24.553 00:09:24.553 Elapsed time = 0.001 seconds 00:09:24.553 00:09:24.553 real 0m0.036s 
00:09:24.553 user 0m0.016s 00:09:24.553 sys 0m0.021s 00:09:24.553 15:03:19 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.553 15:03:19 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:24.553 ************************************ 00:09:24.553 END TEST unittest_rdma 00:09:24.553 ************************************ 00:09:24.553 15:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:24.554 15:03:19 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:24.554 15:03:19 unittest -- unit/unittest.sh@258 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:09:24.554 15:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:24.554 15:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.554 15:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:24.554 ************************************ 00:09:24.554 START TEST unittest_nvme_cuse 00:09:24.554 ************************************ 00:09:24.554 15:03:19 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:09:24.554 00:09:24.554 00:09:24.554 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.554 http://cunit.sourceforge.net/ 00:09:24.554 00:09:24.554 00:09:24.554 Suite: nvme_cuse 00:09:24.554 Test: test_cuse_nvme_submit_io_read_write ...passed 00:09:24.554 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:09:24.554 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:09:24.554 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:09:24.554 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:09:24.554 Test: test_cuse_nvme_submit_io ...[2024-07-23 15:03:19.871148] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:09:24.554 passed 00:09:24.554 Test: test_cuse_nvme_reset ...passed 00:09:24.554 Test: test_nvme_cuse_stop ...[2024-07-23 15:03:19.871423] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:09:25.121 passed 00:09:25.121 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:09:25.121 00:09:25.121 Run Summary: Type Total Ran Passed Failed Inactive 00:09:25.121 suites 1 1 n/a 0 0 00:09:25.121 tests 9 9 9 0 0 00:09:25.121 asserts 118 118 118 0 n/a 00:09:25.121 00:09:25.121 Elapsed time = 0.504 seconds 00:09:25.121 00:09:25.121 real 0m0.540s 00:09:25.121 user 0m0.226s 00:09:25.121 sys 0m0.315s 00:09:25.121 15:03:20 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.121 ************************************ 00:09:25.121 END TEST unittest_nvme_cuse 00:09:25.121 ************************************ 00:09:25.121 15:03:20 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:09:25.121 15:03:20 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:25.121 15:03:20 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:09:25.121 15:03:20 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:25.121 15:03:20 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.121 15:03:20 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:25.121 ************************************ 00:09:25.121 START TEST unittest_nvmf 
00:09:25.121 ************************************ 00:09:25.121 15:03:20 unittest.unittest_nvmf -- common/autotest_common.sh@1123 -- # unittest_nvmf 00:09:25.121 15:03:20 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:09:25.121 00:09:25.121 00:09:25.121 CUnit - A unit testing framework for C - Version 2.1-3 00:09:25.121 http://cunit.sourceforge.net/ 00:09:25.121 00:09:25.121 00:09:25.121 Suite: nvmf 00:09:25.121 Test: test_get_log_page ...passed 00:09:25.121 Test: test_process_fabrics_cmd ...[2024-07-23 15:03:20.461839] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:09:25.121 [2024-07-23 15:03:20.462078] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:09:25.121 passed 00:09:25.121 Test: test_connect ...[2024-07-23 15:03:20.462763] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:09:25.121 [2024-07-23 15:03:20.462841] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:09:25.121 [2024-07-23 15:03:20.462874] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:09:25.121 [2024-07-23 15:03:20.462908] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:09:25.121 [2024-07-23 15:03:20.462943] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:09:25.122 [2024-07-23 15:03:20.462976] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 893:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:09:25.122 [2024-07-23 15:03:20.463013] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 899:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:09:25.122 [2024-07-23 15:03:20.463040] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
00:09:25.122 [2024-07-23 15:03:20.463147] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:09:25.122 [2024-07-23 15:03:20.463216] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:09:25.122 [2024-07-23 15:03:20.463494] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:09:25.122 [2024-07-23 15:03:20.463570] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 688:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:09:25.122 [2024-07-23 15:03:20.463633] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 695:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:09:25.122 [2024-07-23 15:03:20.463700] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 719:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:09:25.122 [2024-07-23 15:03:20.463783] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 294:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:09:25.122 passed 00:09:25.122 Test: test_get_ns_id_desc_list ...[2024-07-23 15:03:20.463946] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:09:25.122 [2024-07-23 15:03:20.463989] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:09:25.122 passed 00:09:25.122 Test: test_identify_ns ...[2024-07-23 15:03:20.464261] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:25.122 [2024-07-23 15:03:20.464480] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:09:25.122 [2024-07-23 15:03:20.464578] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:09:25.122 passed 00:09:25.122 Test: test_identify_ns_iocs_specific ...[2024-07-23 15:03:20.464700] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:25.122 [2024-07-23 15:03:20.464935] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:25.122 passed 00:09:25.122 Test: test_reservation_write_exclusive ...passed 00:09:25.122 Test: test_reservation_exclusive_access ...passed 00:09:25.122 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:09:25.122 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:09:25.122 Test: test_reservation_notification_log_page ...passed 00:09:25.122 Test: test_get_dif_ctx ...passed 00:09:25.122 Test: test_set_get_features ...passed 00:09:25.122 Test: test_identify_ctrlr ...passed 00:09:25.122 Test: test_identify_ctrlr_iocs_specific ...[2024-07-23 15:03:20.465383] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:09:25.122 [2024-07-23 15:03:20.465424] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:09:25.122 [2024-07-23 15:03:20.465451] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:09:25.122 [2024-07-23 15:03:20.465482] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:09:25.122 passed 00:09:25.122 Test: test_custom_admin_cmd ...passed 00:09:25.122 Test: test_fused_compare_and_write ...[2024-07-23 15:03:20.465953] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4249:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:09:25.122 passed 00:09:25.122 Test: test_multi_async_event_reqs ...passed 00:09:25.122 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:09:25.122 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:09:25.122 Test: test_multi_async_events ...[2024-07-23 15:03:20.465993] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:09:25.122 [2024-07-23 15:03:20.466023] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4256:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:09:25.122 passed 00:09:25.122 Test: test_rae ...passed 00:09:25.122 Test: test_nvmf_ctrlr_create_destruct ...passed 00:09:25.122 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:09:25.122 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-23 15:03:20.466532] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:09:25.122 passed 00:09:25.122 Test: test_zcopy_read ...passed 00:09:25.122 Test: test_zcopy_write ...passed 00:09:25.122 Test: test_nvmf_property_set ...passed 00:09:25.122 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-23 15:03:20.466582] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:09:25.122 passed 00:09:25.122 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:09:25.122 Test: test_nvmf_ctrlr_ns_attachment ...[2024-07-23 15:03:20.466800] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:09:25.122 [2024-07-23 15:03:20.466850] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:09:25.122 [2024-07-23 15:03:20.466894] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1970:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:09:25.122 [2024-07-23 15:03:20.466915] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1976:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:09:25.122 [2024-07-23 15:03:20.466933] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1988:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:09:25.122 [2024-07-23 15:03:20.466959] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1988:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:09:25.122 passed 00:09:25.122 Test: test_nvmf_check_qpair_active ...[2024-07-23 15:03:20.467146] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:09:25.122 [2024-07-23 15:03:20.467180] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4755:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:09:25.122 [2024-07-23 15:03:20.467202] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:09:25.122 passed 00:09:25.122 00:09:25.122 Run Summary: Type Total Ran Passed Failed Inactive 00:09:25.122 suites 1 1 n/a 0 0 00:09:25.122 tests 32 32 32 0 0 00:09:25.122 asserts 983 983 983 0 n/a 00:09:25.122 00:09:25.122 Elapsed time = 0.006 seconds 00:09:25.122 [2024-07-23 15:03:20.467229] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:09:25.122 [2024-07-23 15:03:20.467241] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:09:25.122 15:03:20 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:09:25.122 00:09:25.122 00:09:25.122 CUnit - A unit testing framework for C - Version 2.1-3 00:09:25.122 http://cunit.sourceforge.net/ 00:09:25.122 00:09:25.122 00:09:25.122 Suite: nvmf 00:09:25.122 Test: test_get_rw_params ...passed 00:09:25.122 Test: test_get_rw_ext_params ...passed 00:09:25.122 Test: test_lba_in_range ...passed 00:09:25.122 Test: test_get_dif_ctx ...passed 00:09:25.122 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:09:25.122 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-23 15:03:20.505406] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:09:25.122 passed 00:09:25.122 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-23 15:03:20.505739] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:09:25.122 [2024-07-23 15:03:20.505814] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:09:25.122 passed 00:09:25.122 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-23 15:03:20.505902] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:09:25.122 [2024-07-23 15:03:20.505962] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:09:25.122 [2024-07-23 15:03:20.506029] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:09:25.122 [2024-07-23 15:03:20.506083] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:09:25.122 passed 00:09:25.122 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:09:25.122 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed[2024-07-23 15:03:20.506146] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:09:25.122 [2024-07-23 15:03:20.506196] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:09:25.122 00:09:25.122 00:09:25.122 Run Summary: Type Total Ran Passed Failed Inactive 00:09:25.122 suites 1 1 n/a 0 0 00:09:25.122 tests 10 10 10 0 0 00:09:25.122 asserts 159 159 159 0 n/a 00:09:25.122 00:09:25.122 Elapsed time = 0.001 seconds 00:09:25.122 15:03:20 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:09:25.382 00:09:25.382 00:09:25.382 CUnit - A unit testing framework for C - Version 2.1-3 00:09:25.382 http://cunit.sourceforge.net/ 00:09:25.382 00:09:25.382 00:09:25.382 Suite: nvmf 00:09:25.382 Test: test_discovery_log ...passed 00:09:25.382 Test: test_discovery_log_with_filters ...passed 00:09:25.382 00:09:25.382 Run Summary: Type Total Ran Passed Failed Inactive 00:09:25.382 suites 1 1 n/a 0 0 00:09:25.382 tests 2 2 2 0 0 00:09:25.382 asserts 238 238 238 0 n/a 00:09:25.382 00:09:25.382 Elapsed time = 0.003 seconds 00:09:25.382 15:03:20 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:09:25.382 00:09:25.382 00:09:25.382 CUnit - A unit testing framework for C - Version 2.1-3 00:09:25.382 http://cunit.sourceforge.net/ 00:09:25.382 00:09:25.382 00:09:25.382 Suite: nvmf 00:09:25.382 Test: nvmf_test_create_subsystem ...[2024-07-23 15:03:20.604416] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:09:25.382 [2024-07-23 15:03:20.604715] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:09:25.382 [2024-07-23 15:03:20.604898] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:09:25.382 [2024-07-23 15:03:20.604937] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:09:25.382 [2024-07-23 15:03:20.604987] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:09:25.382 [2024-07-23 15:03:20.605029] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:09:25.382 [2024-07-23 15:03:20.605085] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:09:25.382 [2024-07-23 15:03:20.605128] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:09:25.382 [2024-07-23 15:03:20.605168] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:09:25.382 [2024-07-23 15:03:20.605216] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:09:25.382 [2024-07-23 15:03:20.605267] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
00:09:25.382 [2024-07-23 15:03:20.605309] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:09:25.382 [2024-07-23 15:03:20.605454] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:09:25.382 [2024-07-23 15:03:20.605509] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:09:25.382 [2024-07-23 15:03:20.605651] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:09:25.382 [2024-07-23 15:03:20.605689] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:09:25.382 [2024-07-23 15:03:20.605827] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:09:25.382 passed 00:09:25.382 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-23 15:03:20.605863] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:09:25.382 [2024-07-23 15:03:20.605915] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:09:25.382 [2024-07-23 15:03:20.605956] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:09:25.382 [2024-07-23 15:03:20.606008] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:09:25.382 [2024-07-23 15:03:20.606030] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:09:25.382 passed 00:09:25.382 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-23 15:03:20.606497] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:09:25.382 [2024-07-23 15:03:20.606539] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2031:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:09:25.383 [2024-07-23 15:03:20.606736] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2161:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
00:09:25.383 passed 00:09:25.383 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:09:25.383 Test: test_spdk_nvmf_ns_visible ...passed 00:09:25.383 Test: test_reservation_register ...passed 00:09:25.383 Test: test_reservation_register_with_ptpl ...[2024-07-23 15:03:20.607161] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:09:25.383 [2024-07-23 15:03:20.607627] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:25.383 [2024-07-23 15:03:20.607751] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3164:nvmf_ns_reservation_register: *ERROR*: No registrant 00:09:25.383 passed 00:09:25.383 Test: test_reservation_acquire_preempt_1 ...passed 00:09:25.383 Test: test_reservation_acquire_release_with_ptpl ...[2024-07-23 15:03:20.608827] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:25.383 passed 00:09:25.383 Test: test_reservation_release ...passed 00:09:25.383 Test: test_reservation_unregister_notification ...[2024-07-23 15:03:20.611050] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:25.383 passed 00:09:25.383 Test: test_reservation_release_notification ...[2024-07-23 15:03:20.611354] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:25.383 passed 00:09:25.383 Test: test_reservation_release_notification_write_exclusive ...[2024-07-23 15:03:20.611685] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:25.383 passed 00:09:25.383 Test: test_reservation_clear_notification ...[2024-07-23 15:03:20.611947] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:25.383 [2024-07-23 15:03:20.612173] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:25.383 passed 00:09:25.383 Test: test_reservation_preempt_notification ...passed 00:09:25.383 Test: test_spdk_nvmf_ns_event ...[2024-07-23 15:03:20.612406] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:25.383 passed 00:09:25.383 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:09:25.383 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:09:25.383 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:09:25.383 Test: test_nvmf_ns_reservation_report ...[2024-07-23 15:03:20.613254] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:09:25.383 [2024-07-23 15:03:20.613319] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:09:25.383 [2024-07-23 15:03:20.613455] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3469:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:09:25.383 passed 00:09:25.383 Test: test_nvmf_nqn_is_valid ...passed 
00:09:25.383 Test: test_nvmf_ns_reservation_restore ...[2024-07-23 15:03:20.613531] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:09:25.383 [2024-07-23 15:03:20.613563] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:222cb346-e8c2-4e2d-a26e-9524043e541": uuid is not the correct length 00:09:25.383 [2024-07-23 15:03:20.613595] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:09:25.383 passed 00:09:25.383 Test: test_nvmf_subsystem_state_change ...[2024-07-23 15:03:20.613682] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2663:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:09:25.383 passed 00:09:25.383 Test: test_nvmf_reservation_custom_ops ...passed 00:09:25.383 00:09:25.383 Run Summary: Type Total Ran Passed Failed Inactive 00:09:25.383 suites 1 1 n/a 0 0 00:09:25.383 tests 24 24 24 0 0 00:09:25.383 asserts 499 499 499 0 n/a 00:09:25.383 00:09:25.383 Elapsed time = 0.010 seconds 00:09:25.383 15:03:20 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:09:25.383 00:09:25.383 00:09:25.383 CUnit - A unit testing framework for C - Version 2.1-3 00:09:25.383 http://cunit.sourceforge.net/ 00:09:25.383 00:09:25.383 00:09:25.383 Suite: nvmf 00:09:25.383 Test: test_nvmf_tcp_create ...passed 00:09:25.383 Test: test_nvmf_tcp_destroy ...[2024-07-23 15:03:20.700986] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 750:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:09:25.383 passed 00:09:25.383 Test: test_nvmf_tcp_poll_group_create ...passed 00:09:25.642 Test: test_nvmf_tcp_send_c2h_data ...passed 00:09:25.642 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:09:25.642 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:09:25.642 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:09:25.642 Test: test_nvmf_tcp_send_c2h_term_req ...passed 00:09:25.642 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:09:25.642 Test: test_nvmf_tcp_icreq_handle ...passed 00:09:25.642 Test: test_nvmf_tcp_check_xfer_type ...passed 00:09:25.642 Test: test_nvmf_tcp_invalid_sgl ...passed 00:09:25.642 Test: test_nvmf_tcp_pdu_ch_handle ...passed 00:09:25.642 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-23 15:03:20.856877] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.856986] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533d0b020 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.857021] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533d0b020 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.857067] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.857103] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533d0b020 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.857235] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2168:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:09:25.642 [2024-07-23 15:03:20.857288] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.857343] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533d0d180 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.857362] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2168:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:09:25.642 [2024-07-23 15:03:20.857402] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533d0d180 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.857439] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.857479] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533d0d180 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.857522] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.857563] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533d0d180 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.857634] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2563:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:09:25.642 [2024-07-23 15:03:20.857659] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.857686] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533d116c0 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.857744] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2295:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7a2533c0c8c0 00:09:25.642 [2024-07-23 15:03:20.857798] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.857841] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533c0c020 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.857879] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2352:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7a2533c0c020 00:09:25.642 [2024-07-23 15:03:20.857922] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.857957] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533c0c020 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.858005] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2305:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:09:25.642 [2024-07-23 15:03:20.858043] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not 
write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.858083] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533c0c020 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.858119] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2344:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:09:25.642 [2024-07-23 15:03:20.858163] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.858197] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533c0c020 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.858247] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.858273] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533c0c020 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.858318] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.858349] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533c0c020 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.858385] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.858419] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533c0c020 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.858470] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.858506] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533c0c020 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.858546] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.858581] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533c0c020 is same with the state(5) to be set 00:09:25.642 [2024-07-23 15:03:20.858638] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:25.642 [2024-07-23 15:03:20.858674] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a2533c0c020 is same with the state(5) to be set 00:09:25.642 passed 00:09:25.642 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:09:25.642 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-23 15:03:20.900693] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:09:25.642 [2024-07-23 15:03:20.900779] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
00:09:25.642 passed 00:09:25.642 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-23 15:03:20.901954] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:09:25.642 [2024-07-23 15:03:20.902008] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:09:25.642 passed 00:09:25.642 00:09:25.642 Run Summary: Type Total Ran Passed Failed Inactive 00:09:25.642 suites 1 1 n/a 0 0 00:09:25.642 tests 17 17 17 0 0 00:09:25.642 asserts 222 222 222 0 n/a 00:09:25.642 00:09:25.642 Elapsed time = 0.232 seconds 00:09:25.642 [2024-07-23 15:03:20.902711] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:09:25.642 [2024-07-23 15:03:20.902755] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:09:25.642 15:03:20 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:09:25.642 00:09:25.642 00:09:25.642 CUnit - A unit testing framework for C - Version 2.1-3 00:09:25.642 http://cunit.sourceforge.net/ 00:09:25.642 00:09:25.642 00:09:25.642 Suite: nvmf 00:09:25.642 Test: test_nvmf_tgt_create_poll_group ...passed 00:09:25.642 00:09:25.642 Run Summary: Type Total Ran Passed Failed Inactive 00:09:25.642 suites 1 1 n/a 0 0 00:09:25.642 tests 1 1 1 0 0 00:09:25.642 asserts 17 17 17 0 n/a 00:09:25.642 00:09:25.642 Elapsed time = 0.035 seconds 00:09:25.900 ************************************ 00:09:25.900 END TEST unittest_nvmf 00:09:25.900 ************************************ 00:09:25.900 00:09:25.900 real 0m0.673s 00:09:25.900 user 0m0.271s 00:09:25.900 sys 0m0.398s 00:09:25.900 15:03:21 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.900 15:03:21 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:09:25.900 15:03:21 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:25.900 15:03:21 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:25.900 15:03:21 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:25.900 15:03:21 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:09:25.900 15:03:21 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:25.900 15:03:21 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.900 15:03:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:25.900 ************************************ 00:09:25.900 START TEST unittest_nvmf_rdma 00:09:25.900 ************************************ 00:09:25.900 15:03:21 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:09:25.900 00:09:25.900 00:09:25.900 CUnit - A unit testing framework for C - Version 2.1-3 00:09:25.900 http://cunit.sourceforge.net/ 00:09:25.900 00:09:25.900 00:09:25.900 Suite: nvmf 00:09:25.900 Test: test_spdk_nvmf_rdma_request_parse_sgl ...passed 00:09:25.900 Test: test_spdk_nvmf_rdma_request_process ...[2024-07-23 15:03:21.193068] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1863:nvmf_rdma_request_parse_sgl: 
*ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:09:25.900 [2024-07-23 15:03:21.193295] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:09:25.900 [2024-07-23 15:03:21.193335] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:09:25.900 passed 00:09:25.900 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:09:25.900 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:09:25.900 Test: test_nvmf_rdma_opts_init ...passed 00:09:25.900 Test: test_nvmf_rdma_request_free_data ...passed 00:09:25.900 Test: test_nvmf_rdma_resources_create ...passed 00:09:25.900 Test: test_nvmf_rdma_qpair_compare ...passed 00:09:25.901 Test: test_nvmf_rdma_resize_cq ...[2024-07-23 15:03:21.195996] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:09:25.901 Using CQ of insufficient size may lead to CQ overrun 00:09:25.901 [2024-07-23 15:03:21.196048] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 959:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:09:25.901 passed 00:09:25.901 00:09:25.901 Run Summary: Type Total Ran Passed Failed Inactive 00:09:25.901 suites 1 1 n/a 0 0 00:09:25.901 tests 9 9 9 0 0 00:09:25.901 asserts 579 579 579 0 n/a 00:09:25.901 00:09:25.901 Elapsed time = 0.003 seconds 00:09:25.901 [2024-07-23 15:03:21.196095] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:09:25.901 ************************************ 00:09:25.901 END TEST unittest_nvmf_rdma 00:09:25.901 ************************************ 00:09:25.901 00:09:25.901 real 0m0.050s 00:09:25.901 user 0m0.024s 00:09:25.901 sys 0m0.026s 00:09:25.901 15:03:21 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.901 15:03:21 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:25.901 15:03:21 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:25.901 15:03:21 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:25.901 15:03:21 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:09:25.901 15:03:21 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:25.901 15:03:21 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.901 15:03:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:25.901 ************************************ 00:09:25.901 START TEST unittest_scsi 00:09:25.901 ************************************ 00:09:25.901 15:03:21 unittest.unittest_scsi -- common/autotest_common.sh@1123 -- # unittest_scsi 00:09:25.901 15:03:21 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:09:25.901 00:09:25.901 00:09:25.901 CUnit - A unit testing framework for C - Version 2.1-3 00:09:25.901 http://cunit.sourceforge.net/ 00:09:25.901 00:09:25.901 00:09:25.901 Suite: dev_suite 00:09:25.901 Test: dev_destruct_null_dev ...passed 00:09:25.901 Test: dev_destruct_zero_luns ...passed 00:09:25.901 Test: dev_destruct_null_lun ...passed 00:09:25.901 Test: dev_destruct_success ...passed 00:09:25.901 Test: 
dev_construct_num_luns_zero ...passed 00:09:25.901 Test: dev_construct_no_lun_zero ...passed 00:09:25.901 Test: dev_construct_null_lun ...[2024-07-23 15:03:21.288535] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:09:25.901 [2024-07-23 15:03:21.288759] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:09:25.901 passed 00:09:25.901 Test: dev_construct_name_too_long ...passed 00:09:25.901 Test: dev_construct_success ...passed 00:09:25.901 Test: dev_construct_success_lun_zero_not_first ...passed 00:09:25.901 Test: dev_queue_mgmt_task_success ...[2024-07-23 15:03:21.288883] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:09:25.901 [2024-07-23 15:03:21.288932] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:09:25.901 passed 00:09:25.901 Test: dev_queue_task_success ...passed 00:09:25.901 Test: dev_stop_success ...passed 00:09:25.901 Test: dev_add_port_max_ports ...passed 00:09:25.901 Test: dev_add_port_construct_failure1 ...[2024-07-23 15:03:21.289207] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:09:25.901 passed 00:09:25.901 Test: dev_add_port_construct_failure2 ...passed 00:09:25.901 Test: dev_add_port_success1 ...passed 00:09:25.901 Test: dev_add_port_success2 ...passed 00:09:25.901 Test: dev_add_port_success3 ...passed 00:09:25.901 Test: dev_find_port_by_id_num_ports_zero ...passed 00:09:25.901 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:09:25.901 Test: dev_find_port_by_id_success ...passed 00:09:25.901 Test: dev_add_lun_bdev_not_found ...passed 00:09:25.901 Test: dev_add_lun_no_free_lun_id ...[2024-07-23 15:03:21.289253] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:09:25.901 [2024-07-23 15:03:21.289280] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:09:25.901 passed 00:09:25.901 Test: dev_add_lun_success1 ...passed 00:09:25.901 Test: dev_add_lun_success2 ...passed 00:09:25.901 Test: dev_check_pending_tasks ...[2024-07-23 15:03:21.289703] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:09:25.901 passed 00:09:25.901 Test: dev_iterate_luns ...passed 00:09:25.901 Test: dev_find_free_lun ...passed 00:09:25.901 00:09:25.901 Run Summary: Type Total Ran Passed Failed Inactive 00:09:25.901 suites 1 1 n/a 0 0 00:09:25.901 tests 29 29 29 0 0 00:09:25.901 asserts 97 97 97 0 n/a 00:09:25.901 00:09:25.901 Elapsed time = 0.002 seconds 00:09:25.901 15:03:21 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:09:26.159 00:09:26.159 00:09:26.159 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.159 http://cunit.sourceforge.net/ 00:09:26.159 00:09:26.159 00:09:26.159 Suite: lun_suite 00:09:26.159 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-23 15:03:21.328743] 
/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task npassedot supported 00:09:26.159 00:09:26.159 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-23 15:03:21.329340] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task spassed 00:09:26.159 Test: lun_task_mgmt_execute_lun_reset ...passed 00:09:26.159 Test: lun_task_mgmt_execute_target_reset ...passed 00:09:26.159 Test: lun_task_mgmt_execute_invalid_case ...passed 00:09:26.159 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:09:26.159 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:09:26.159 Test: lun_append_task_null_lun_not_supported ...passed 00:09:26.159 Test: lun_execute_scsi_task_pending ...passed 00:09:26.159 Test: lun_execute_scsi_task_complete ...passed 00:09:26.159 Test: lun_execute_scsi_task_resize ...passed 00:09:26.159 Test: lun_destruct_success ...passed 00:09:26.159 Test: lun_construct_null_ctx ...passed 00:09:26.159 Test: lun_construct_success ...passed 00:09:26.159 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:09:26.159 Test: lun_reset_task_suspend_scsi_task ...passed 00:09:26.159 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:09:26.159 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:09:26.159 00:09:26.159 et not supported 00:09:26.159 [2024-07-23 15:03:21.329558] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:09:26.159 [2024-07-23 15:03:21.329755] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:09:26.159 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.159 suites 1 1 n/a 0 0 00:09:26.159 tests 18 18 18 0 0 00:09:26.159 asserts 153 153 153 0 n/a 00:09:26.159 00:09:26.159 Elapsed time = 0.001 seconds 00:09:26.159 15:03:21 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:09:26.159 00:09:26.159 00:09:26.159 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.159 http://cunit.sourceforge.net/ 00:09:26.159 00:09:26.159 00:09:26.159 Suite: scsi_suite 00:09:26.159 Test: scsi_init ...passed 00:09:26.159 00:09:26.159 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.159 suites 1 1 n/a 0 0 00:09:26.160 tests 1 1 1 0 0 00:09:26.160 asserts 1 1 1 0 n/a 00:09:26.160 00:09:26.160 Elapsed time = 0.000 seconds 00:09:26.160 15:03:21 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:09:26.160 00:09:26.160 00:09:26.160 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.160 http://cunit.sourceforge.net/ 00:09:26.160 00:09:26.160 00:09:26.160 Suite: translation_suite 00:09:26.160 Test: mode_select_6_test ...passed 00:09:26.160 Test: mode_select_6_test2 ...passed 00:09:26.160 Test: mode_sense_6_test ...passed 00:09:26.160 Test: mode_sense_10_test ...passed 00:09:26.160 Test: inquiry_evpd_test ...passed 00:09:26.160 Test: inquiry_standard_test ...passed 00:09:26.160 Test: inquiry_overflow_test ...passed 00:09:26.160 Test: task_complete_test ...passed 00:09:26.160 Test: lba_range_test ...passed 00:09:26.160 Test: xfer_len_test ...passed 00:09:26.160 Test: xfer_test ...passed 00:09:26.160 Test: scsi_name_padding_test ...passed 00:09:26.160 Test: get_dif_ctx_test ...passed 00:09:26.160 Test: unmap_split_test ...[2024-07-23 15:03:21.406371] 
/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:09:26.160 passed 00:09:26.160 00:09:26.160 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.160 suites 1 1 n/a 0 0 00:09:26.160 tests 14 14 14 0 0 00:09:26.160 asserts 1205 1205 1205 0 n/a 00:09:26.160 00:09:26.160 Elapsed time = 0.007 seconds 00:09:26.160 15:03:21 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:09:26.160 00:09:26.160 00:09:26.160 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.160 http://cunit.sourceforge.net/ 00:09:26.160 00:09:26.160 00:09:26.160 Suite: reservation_suite 00:09:26.160 Test: test_reservation_register ...passed 00:09:26.160 Test: test_reservation_reserve ...passed 00:09:26.160 Test: test_all_registrant_reservation_reserve ...passed 00:09:26.160 Test: test_all_registrant_reservation_access ...[2024-07-23 15:03:21.447189] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:26.160 [2024-07-23 15:03:21.447542] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:26.160 [2024-07-23 15:03:21.447612] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:09:26.160 [2024-07-23 15:03:21.447661] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:09:26.160 [2024-07-23 15:03:21.447734] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:26.160 [2024-07-23 15:03:21.447880] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:26.160 [2024-07-23 15:03:21.447973] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:09:26.160 [2024-07-23 15:03:21.448018] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:09:26.160 passed 00:09:26.160 Test: test_reservation_preempt_non_all_regs ...passed 00:09:26.160 Test: test_reservation_preempt_all_regs ...passed 00:09:26.160 Test: test_reservation_cmds_conflict ...[2024-07-23 15:03:21.448094] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:26.160 [2024-07-23 15:03:21.448156] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:09:26.160 [2024-07-23 15:03:21.448248] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:26.160 [2024-07-23 15:03:21.448378] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:26.160 [2024-07-23 15:03:21.448478] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 857:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:09:26.160 [2024-07-23 15:03:21.448532] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access 
reservation type rejects command 0x28 00:09:26.160 passed 00:09:26.160 Test: test_scsi2_reserve_release ...passed 00:09:26.160 Test: test_pr_with_scsi2_reserve_release ...passed 00:09:26.160 00:09:26.160 [2024-07-23 15:03:21.448577] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:09:26.160 [2024-07-23 15:03:21.448615] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:09:26.160 [2024-07-23 15:03:21.448657] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:09:26.160 [2024-07-23 15:03:21.448766] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:26.160 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.160 suites 1 1 n/a 0 0 00:09:26.160 tests 9 9 9 0 0 00:09:26.160 asserts 344 344 344 0 n/a 00:09:26.160 00:09:26.160 Elapsed time = 0.002 seconds 00:09:26.160 00:09:26.160 real 0m0.197s 00:09:26.160 user 0m0.091s 00:09:26.160 sys 0m0.107s 00:09:26.160 15:03:21 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.160 15:03:21 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:09:26.160 ************************************ 00:09:26.160 END TEST unittest_scsi 00:09:26.160 ************************************ 00:09:26.160 15:03:21 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:26.160 15:03:21 unittest -- unit/unittest.sh@278 -- # uname -s 00:09:26.160 15:03:21 unittest -- unit/unittest.sh@278 -- # '[' Linux = Linux ']' 00:09:26.160 15:03:21 unittest -- unit/unittest.sh@279 -- # run_test unittest_sock unittest_sock 00:09:26.160 15:03:21 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:26.160 15:03:21 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.160 15:03:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:26.160 ************************************ 00:09:26.160 START TEST unittest_sock 00:09:26.160 ************************************ 00:09:26.160 15:03:21 unittest.unittest_sock -- common/autotest_common.sh@1123 -- # unittest_sock 00:09:26.160 15:03:21 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:09:26.160 00:09:26.160 00:09:26.160 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.160 http://cunit.sourceforge.net/ 00:09:26.160 00:09:26.160 00:09:26.160 Suite: sock 00:09:26.160 Test: posix_sock ...passed 00:09:26.160 Test: ut_sock ...passed 00:09:26.160 Test: posix_sock_group ...passed 00:09:26.160 Test: ut_sock_group ...passed 00:09:26.160 Test: posix_sock_group_fairness ...passed 00:09:26.418 Test: _posix_sock_close ...passed 00:09:26.418 Test: sock_get_default_opts ...passed 00:09:26.418 Test: ut_sock_impl_get_set_opts ...passed 00:09:26.418 Test: posix_sock_impl_get_set_opts ...passed 00:09:26.418 Test: ut_sock_map ...passed 00:09:26.418 Test: override_impl_opts ...passed 00:09:26.418 Test: ut_sock_group_get_ctx ...passed 00:09:26.418 Test: posix_get_interface_name ...FAILED 00:09:26.418 1. 
sock_ut.c:1278 - strcmp(spdk_sock_get_interface_name(csock), "ilo") == 0 00:09:26.418 00:09:26.418 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.418 suites 1 1 n/a 0 0 00:09:26.418 tests 13 13 12 1 0 00:09:26.418 asserts 360 360 359 1 n/a 00:09:26.418 00:09:26.418 Elapsed time = 0.011 seconds 00:09:26.418 15:03:21 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:09:26.418 00:09:26.418 00:09:26.418 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.418 http://cunit.sourceforge.net/ 00:09:26.418 00:09:26.418 00:09:26.418 Suite: posix 00:09:26.418 Test: flush ...passed 00:09:26.418 00:09:26.418 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.418 suites 1 1 n/a 0 0 00:09:26.418 tests 1 1 1 0 0 00:09:26.418 asserts 28 28 28 0 n/a 00:09:26.418 00:09:26.418 Elapsed time = 0.000 seconds 00:09:26.418 15:03:21 unittest.unittest_sock -- unit/unittest.sh@128 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:26.418 00:09:26.418 real 0m0.134s 00:09:26.418 user 0m0.045s 00:09:26.418 sys 0m0.066s 00:09:26.418 15:03:21 unittest.unittest_sock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.418 15:03:21 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x 00:09:26.418 ************************************ 00:09:26.418 END TEST unittest_sock 00:09:26.418 ************************************ 00:09:26.418 15:03:21 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:26.418 15:03:21 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:09:26.418 15:03:21 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:26.418 15:03:21 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.418 15:03:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:26.418 ************************************ 00:09:26.418 START TEST unittest_thread 00:09:26.418 ************************************ 00:09:26.418 15:03:21 unittest.unittest_thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:09:26.418 00:09:26.418 00:09:26.418 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.418 http://cunit.sourceforge.net/ 00:09:26.418 00:09:26.418 00:09:26.418 Suite: io_channel 00:09:26.418 Test: thread_alloc ...passed 00:09:26.418 Test: thread_send_msg ...passed 00:09:26.418 Test: thread_poller ...passed 00:09:26.418 Test: poller_pause ...passed 00:09:26.418 Test: thread_for_each ...passed 00:09:26.418 Test: for_each_channel_remove ...passed 00:09:26.419 Test: for_each_channel_unreg ...[2024-07-23 15:03:21.760664] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x78959b109640 already registered (old:0x513000000200 new:0x5130000003c0) 00:09:26.419 passed 00:09:26.419 Test: thread_name ...passed 00:09:26.419 Test: channel ...[2024-07-23 15:03:21.764456] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x571c7da211c0 00:09:26.419 passed 00:09:26.419 Test: channel_destroy_races ...passed 00:09:26.419 Test: thread_exit_test ...[2024-07-23 15:03:21.769244] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 639:thread_exit: *ERROR*: thread 0x519000007380 got timeout, and move it to the exited state forcefully 00:09:26.419 passed 00:09:26.419 Test: 
thread_update_stats_test ...passed 00:09:26.419 Test: nested_channel ...passed 00:09:26.419 Test: device_unregister_and_thread_exit_race ...passed 00:09:26.419 Test: cache_closest_timed_poller ...passed 00:09:26.419 Test: multi_timed_pollers_have_same_expiration ...passed 00:09:26.419 Test: io_device_lookup ...passed 00:09:26.419 Test: spdk_spin ...[2024-07-23 15:03:21.779839] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:09:26.419 [2024-07-23 15:03:21.780006] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x78959b10a020 00:09:26.419 [2024-07-23 15:03:21.780215] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:09:26.419 [2024-07-23 15:03:21.781851] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:26.419 [2024-07-23 15:03:21.782002] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x78959b10a020 00:09:26.419 [2024-07-23 15:03:21.782030] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:09:26.419 [2024-07-23 15:03:21.782048] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x78959b10a020 00:09:26.419 [2024-07-23 15:03:21.782073] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:09:26.419 [2024-07-23 15:03:21.782106] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x78959b10a020 00:09:26.419 [2024-07-23 15:03:21.782128] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:09:26.419 [2024-07-23 15:03:21.782156] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x78959b10a020 00:09:26.419 passed 00:09:26.419 Test: for_each_channel_and_thread_exit_race ...passed 00:09:26.419 Test: for_each_thread_and_thread_exit_race ...passed 00:09:26.419 00:09:26.419 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.419 suites 1 1 n/a 0 0 00:09:26.419 tests 20 20 20 0 0 00:09:26.419 asserts 409 409 409 0 n/a 00:09:26.419 00:09:26.419 Elapsed time = 0.047 seconds 00:09:26.419 00:09:26.419 real 0m0.097s 00:09:26.419 user 0m0.052s 00:09:26.419 sys 0m0.044s 00:09:26.419 15:03:21 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.419 ************************************ 00:09:26.419 END TEST unittest_thread 00:09:26.419 ************************************ 00:09:26.419 15:03:21 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.678 15:03:21 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:26.678 15:03:21 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:09:26.678 15:03:21 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:26.678 15:03:21 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.678 15:03:21 unittest -- 
common/autotest_common.sh@10 -- # set +x 00:09:26.678 ************************************ 00:09:26.678 START TEST unittest_iobuf 00:09:26.678 ************************************ 00:09:26.678 15:03:21 unittest.unittest_iobuf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:09:26.678 00:09:26.678 00:09:26.678 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.678 http://cunit.sourceforge.net/ 00:09:26.678 00:09:26.678 00:09:26.678 Suite: io_channel 00:09:26.678 Test: iobuf ...passed 00:09:26.678 Test: iobuf_cache ...[2024-07-23 15:03:21.889392] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:09:26.678 [2024-07-23 15:03:21.889637] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:09:26.678 [2024-07-23 15:03:21.889722] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:09:26.678 [2024-07-23 15:03:21.889756] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:09:26.678 [2024-07-23 15:03:21.889841] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:09:26.678 [2024-07-23 15:03:21.889876] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:09:26.678 passed 00:09:26.678 Test: iobuf_priority ...passed 00:09:26.678 00:09:26.678 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.678 suites 1 1 n/a 0 0 00:09:26.678 tests 3 3 3 0 0 00:09:26.678 asserts 131 131 131 0 n/a 00:09:26.678 00:09:26.678 Elapsed time = 0.008 seconds 00:09:26.678 00:09:26.678 real 0m0.046s 00:09:26.678 user 0m0.022s 00:09:26.678 sys 0m0.024s 00:09:26.678 15:03:21 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.678 ************************************ 00:09:26.678 END TEST unittest_iobuf 00:09:26.678 ************************************ 00:09:26.678 15:03:21 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:09:26.678 15:03:21 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:26.678 15:03:21 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:09:26.678 15:03:21 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:26.678 15:03:21 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.678 15:03:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:26.678 ************************************ 00:09:26.678 START TEST unittest_util 00:09:26.678 ************************************ 00:09:26.678 15:03:21 unittest.unittest_util -- common/autotest_common.sh@1123 -- # unittest_util 00:09:26.678 15:03:21 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:09:26.678 00:09:26.678 00:09:26.678 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.678 http://cunit.sourceforge.net/ 00:09:26.678 00:09:26.678 00:09:26.678 Suite: base64 00:09:26.678 Test: test_base64_get_encoded_strlen ...passed 00:09:26.678 Test: test_base64_get_decoded_len ...passed 00:09:26.678 Test: test_base64_encode ...passed 00:09:26.678 Test: test_base64_decode ...passed 00:09:26.678 Test: test_base64_urlsafe_encode ...passed 00:09:26.678 Test: test_base64_urlsafe_decode ...passed 00:09:26.678 00:09:26.678 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.678 suites 1 1 n/a 0 0 00:09:26.678 tests 6 6 6 0 0 00:09:26.678 asserts 112 112 112 0 n/a 00:09:26.678 00:09:26.678 Elapsed time = 0.000 seconds 00:09:26.678 15:03:22 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:09:26.678 00:09:26.678 00:09:26.678 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.678 http://cunit.sourceforge.net/ 00:09:26.678 00:09:26.678 00:09:26.678 Suite: bit_array 00:09:26.678 Test: test_1bit ...passed 00:09:26.678 Test: test_64bit ...passed 00:09:26.678 Test: test_find ...passed 00:09:26.678 Test: test_resize ...passed 00:09:26.678 Test: test_errors ...passed 00:09:26.678 Test: test_count ...passed 00:09:26.678 Test: test_mask_store_load ...passed 00:09:26.678 Test: test_mask_clear ...passed 00:09:26.678 00:09:26.678 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.678 suites 1 1 n/a 0 0 00:09:26.678 tests 8 8 8 0 0 00:09:26.678 asserts 5075 5075 5075 0 n/a 00:09:26.678 00:09:26.678 Elapsed time = 0.002 seconds 00:09:26.678 15:03:22 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:09:26.678 00:09:26.678 00:09:26.678 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.678 http://cunit.sourceforge.net/ 00:09:26.678 00:09:26.678 00:09:26.678 Suite: cpuset 00:09:26.678 Test: test_cpuset ...passed 
00:09:26.678 Test: test_cpuset_parse ...[2024-07-23 15:03:22.067439] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:09:26.678 [2024-07-23 15:03:22.067829] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:09:26.678 [2024-07-23 15:03:22.067899] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:09:26.678 [2024-07-23 15:03:22.067949] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 236:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:09:26.678 [2024-07-23 15:03:22.068000] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:09:26.678 [2024-07-23 15:03:22.068052] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:09:26.678 [2024-07-23 15:03:22.068098] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:09:26.678 [2024-07-23 15:03:22.068148] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:09:26.678 passed 00:09:26.678 Test: test_cpuset_fmt ...passed 00:09:26.678 Test: test_cpuset_foreach ...passed 00:09:26.678 00:09:26.678 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.678 suites 1 1 n/a 0 0 00:09:26.679 tests 4 4 4 0 0 00:09:26.679 asserts 90 90 90 0 n/a 00:09:26.679 00:09:26.679 Elapsed time = 0.003 seconds 00:09:26.679 15:03:22 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:09:26.679 00:09:26.679 00:09:26.679 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.679 http://cunit.sourceforge.net/ 00:09:26.679 00:09:26.679 00:09:26.679 Suite: crc16 00:09:26.679 Test: test_crc16_t10dif ...passed 00:09:26.679 Test: test_crc16_t10dif_seed ...passed 00:09:26.679 Test: test_crc16_t10dif_copy ...passed 00:09:26.679 00:09:26.679 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.679 suites 1 1 n/a 0 0 00:09:26.679 tests 3 3 3 0 0 00:09:26.679 asserts 5 5 5 0 n/a 00:09:26.679 00:09:26.679 Elapsed time = 0.000 seconds 00:09:26.938 15:03:22 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:09:26.938 00:09:26.938 00:09:26.938 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.938 http://cunit.sourceforge.net/ 00:09:26.938 00:09:26.938 00:09:26.938 Suite: crc32_ieee 00:09:26.938 Test: test_crc32_ieee ...passed 00:09:26.938 00:09:26.938 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.938 suites 1 1 n/a 0 0 00:09:26.938 tests 1 1 1 0 0 00:09:26.938 asserts 1 1 1 0 n/a 00:09:26.938 00:09:26.938 Elapsed time = 0.000 seconds 00:09:26.938 15:03:22 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:09:26.938 00:09:26.938 00:09:26.938 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.938 http://cunit.sourceforge.net/ 00:09:26.938 00:09:26.938 00:09:26.938 Suite: crc32c 00:09:26.938 Test: test_crc32c ...passed 00:09:26.938 Test: test_crc32c_nvme ...passed 00:09:26.938 00:09:26.938 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.938 suites 1 1 n/a 0 0 00:09:26.938 tests 2 2 2 0 0 
00:09:26.938 asserts 16 16 16 0 n/a 00:09:26.938 00:09:26.938 Elapsed time = 0.000 seconds 00:09:26.938 15:03:22 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:09:26.938 00:09:26.938 00:09:26.938 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.938 http://cunit.sourceforge.net/ 00:09:26.938 00:09:26.938 00:09:26.938 Suite: crc64 00:09:26.938 Test: test_crc64_nvme ...passed 00:09:26.938 00:09:26.938 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.938 suites 1 1 n/a 0 0 00:09:26.938 tests 1 1 1 0 0 00:09:26.938 asserts 4 4 4 0 n/a 00:09:26.938 00:09:26.938 Elapsed time = 0.000 seconds 00:09:26.938 15:03:22 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:09:26.938 00:09:26.938 00:09:26.938 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.938 http://cunit.sourceforge.net/ 00:09:26.938 00:09:26.938 00:09:26.938 Suite: string 00:09:26.938 Test: test_parse_ip_addr ...passed 00:09:26.938 Test: test_str_chomp ...passed 00:09:26.938 Test: test_parse_capacity ...passed 00:09:26.938 Test: test_sprintf_append_realloc ...passed 00:09:26.938 Test: test_strtol ...passed 00:09:26.938 Test: test_strtoll ...passed 00:09:26.938 Test: test_strarray ...passed 00:09:26.938 Test: test_strcpy_replace ...passed 00:09:26.938 00:09:26.938 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.938 suites 1 1 n/a 0 0 00:09:26.938 tests 8 8 8 0 0 00:09:26.938 asserts 161 161 161 0 n/a 00:09:26.938 00:09:26.938 Elapsed time = 0.001 seconds 00:09:26.938 15:03:22 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:09:26.938 00:09:26.938 00:09:26.938 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.938 http://cunit.sourceforge.net/ 00:09:26.938 00:09:26.938 00:09:26.938 Suite: dif 00:09:26.938 Test: dif_generate_and_verify_test ...[2024-07-23 15:03:22.283620] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:26.938 [2024-07-23 15:03:22.284082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:26.938 [2024-07-23 15:03:22.284374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:26.938 [2024-07-23 15:03:22.284640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:26.938 [2024-07-23 15:03:22.284920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:26.938 [2024-07-23 15:03:22.285187] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:26.938 passed 00:09:26.938 Test: dif_disable_check_test ...[2024-07-23 15:03:22.286177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:26.938 [2024-07-23 15:03:22.286466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:26.938 [2024-07-23 15:03:22.286761] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to 
compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:26.938 passed 00:09:26.938 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-23 15:03:22.287729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:09:26.938 [2024-07-23 15:03:22.288058] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:09:26.938 [2024-07-23 15:03:22.288327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:09:26.938 [2024-07-23 15:03:22.288605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:09:26.938 [2024-07-23 15:03:22.288898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:26.938 [2024-07-23 15:03:22.289177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:26.938 [2024-07-23 15:03:22.289447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:26.938 [2024-07-23 15:03:22.289700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:26.938 [2024-07-23 15:03:22.290006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:26.938 [2024-07-23 15:03:22.290332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:26.938 [2024-07-23 15:03:22.290621] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:26.938 passed 00:09:26.938 Test: dif_apptag_mask_test ...[2024-07-23 15:03:22.290998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:09:26.938 [2024-07-23 15:03:22.291292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:09:26.938 passed 00:09:26.938 Test: dif_sec_8_md_8_error_test ...passed 00:09:26.938 Test: dif_sec_512_md_0_error_test ...[2024-07-23 15:03:22.291511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 555:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:09:26.938 passed 00:09:26.938 Test: dif_sec_512_md_16_error_test ...[2024-07-23 15:03:22.291581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:26.938 [2024-07-23 15:03:22.291637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:09:26.938 passed 00:09:26.938 Test: dif_sec_4096_md_0_8_error_test ...[2024-07-23 15:03:22.291675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:09:26.938 [2024-07-23 15:03:22.291728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:09:26.938 passed 00:09:26.938 Test: dif_sec_4100_md_128_error_test ...[2024-07-23 15:03:22.291777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:26.938 [2024-07-23 15:03:22.291840] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:26.938 [2024-07-23 15:03:22.291871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:26.938 passed 00:09:26.938 Test: dif_guard_seed_test ...[2024-07-23 15:03:22.291924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:09:26.938 [2024-07-23 15:03:22.291964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:09:26.938 passed 00:09:26.938 Test: dif_guard_value_test ...passed 00:09:26.938 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:09:26.938 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:09:26.938 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:26.938 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:26.938 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:26.938 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:09:26.938 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:09:26.938 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:09:26.938 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:09:26.938 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:26.939 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:09:26.939 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:09:26.939 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:09:26.939 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:09:26.939 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:09:26.939 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:09:26.939 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:26.939 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:26.939 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-23 15:03:22.332824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd48, Actual=fd4c 00:09:26.939 [2024-07-23 15:03:22.335129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fe25, Actual=fe21 00:09:26.939 [2024-07-23 15:03:22.337393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:26.939 [2024-07-23 15:03:22.339656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:26.939 [2024-07-23 15:03:22.341929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:26.939 [2024-07-23 15:03:22.344168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 
00:09:26.939 [2024-07-23 15:03:22.346452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=3ab3 00:09:26.939 [2024-07-23 15:03:22.347977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fe21, Actual=2641 00:09:26.939 [2024-07-23 15:03:22.349470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab353ed, Actual=1ab753ed 00:09:26.939 [2024-07-23 15:03:22.351735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=38534660, Actual=38574660 00:09:26.939 [2024-07-23 15:03:22.354007] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:26.939 [2024-07-23 15:03:22.356280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:26.939 [2024-07-23 15:03:22.358542] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:26.939 [2024-07-23 15:03:22.360858] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:26.939 [2024-07-23 15:03:22.363125] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=4f6a211 00:09:26.939 [2024-07-23 15:03:22.364604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=38574660, Actual=ed86651a 00:09:27.198 [2024-07-23 15:03:22.366105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:09:27.198 [2024-07-23 15:03:22.368342] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=88010a2d4833a266, Actual=88010a2d4837a266 00:09:27.198 [2024-07-23 15:03:22.370577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.372867] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.375150] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:27.198 [2024-07-23 15:03:22.377418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:27.198 [2024-07-23 15:03:22.379699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=874147d07e8b00ac 00:09:27.198 [2024-07-23 15:03:22.381206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=88010a2d4837a266, Actual=66d3db1848257f04 00:09:27.198 passed 00:09:27.198 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-23 15:03:22.381928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:09:27.198 
[2024-07-23 15:03:22.382219] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:09:27.198 [2024-07-23 15:03:22.382512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.382847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.383147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.198 [2024-07-23 15:03:22.383450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.198 [2024-07-23 15:03:22.383814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ab3 00:09:27.198 [2024-07-23 15:03:22.384071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2641 00:09:27.198 [2024-07-23 15:03:22.384329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab353ed, Actual=1ab753ed 00:09:27.198 [2024-07-23 15:03:22.384591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38534660, Actual=38574660 00:09:27.198 [2024-07-23 15:03:22.384856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.385153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.385455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.198 [2024-07-23 15:03:22.385746] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.198 [2024-07-23 15:03:22.386070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=4f6a211 00:09:27.198 [2024-07-23 15:03:22.386331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ed86651a 00:09:27.198 [2024-07-23 15:03:22.386590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:09:27.198 [2024-07-23 15:03:22.386935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4833a266, Actual=88010a2d4837a266 00:09:27.198 [2024-07-23 15:03:22.387236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.387535] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.387884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 
00:09:27.198 [2024-07-23 15:03:22.388212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.198 [2024-07-23 15:03:22.388507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=874147d07e8b00ac 00:09:27.198 [2024-07-23 15:03:22.388772] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=66d3db1848257f04 00:09:27.198 passed 00:09:27.198 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-23 15:03:22.389138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:09:27.198 [2024-07-23 15:03:22.389435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:09:27.198 [2024-07-23 15:03:22.389714] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.390008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.390328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.198 [2024-07-23 15:03:22.390616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.198 [2024-07-23 15:03:22.390950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ab3 00:09:27.198 [2024-07-23 15:03:22.391229] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2641 00:09:27.198 [2024-07-23 15:03:22.391505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab353ed, Actual=1ab753ed 00:09:27.198 [2024-07-23 15:03:22.391793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38534660, Actual=38574660 00:09:27.198 [2024-07-23 15:03:22.392092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.392370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.392654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.198 [2024-07-23 15:03:22.392957] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.198 [2024-07-23 15:03:22.393224] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=4f6a211 00:09:27.198 [2024-07-23 15:03:22.393476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ed86651a 00:09:27.198 [2024-07-23 15:03:22.393734] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:09:27.198 [2024-07-23 15:03:22.394064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4833a266, Actual=88010a2d4837a266 00:09:27.198 [2024-07-23 15:03:22.394358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.394658] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.394978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.198 [2024-07-23 15:03:22.395262] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.198 [2024-07-23 15:03:22.395561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=874147d07e8b00ac 00:09:27.198 [2024-07-23 15:03:22.395870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=66d3db1848257f04 00:09:27.198 passed 00:09:27.198 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-23 15:03:22.396200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:09:27.198 [2024-07-23 15:03:22.396494] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:09:27.198 [2024-07-23 15:03:22.396809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.397087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.198 [2024-07-23 15:03:22.397394] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.198 [2024-07-23 15:03:22.397676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.198 [2024-07-23 15:03:22.397995] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ab3 00:09:27.198 [2024-07-23 15:03:22.398248] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2641 00:09:27.198 [2024-07-23 15:03:22.398518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab353ed, Actual=1ab753ed 00:09:27.198 [2024-07-23 15:03:22.398849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38534660, Actual=38574660 00:09:27.199 [2024-07-23 15:03:22.399155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.399423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.399712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.400004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.400287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=4f6a211 00:09:27.199 [2024-07-23 15:03:22.400537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ed86651a 00:09:27.199 [2024-07-23 15:03:22.400818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:09:27.199 [2024-07-23 15:03:22.401116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4833a266, Actual=88010a2d4837a266 00:09:27.199 [2024-07-23 15:03:22.401421] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.401718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.402040] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.402322] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.402639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=874147d07e8b00ac 00:09:27.199 [2024-07-23 15:03:22.402879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=66d3db1848257f04 00:09:27.199 passed 00:09:27.199 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-23 15:03:22.403119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:09:27.199 [2024-07-23 15:03:22.403347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:09:27.199 [2024-07-23 15:03:22.403576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.403818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.404049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.404266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.404491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ab3 00:09:27.199 [2024-07-23 15:03:22.404685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2641 00:09:27.199 passed 00:09:27.199 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-23 15:03:22.404935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab353ed, Actual=1ab753ed 00:09:27.199 [2024-07-23 15:03:22.405144] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38534660, Actual=38574660 00:09:27.199 [2024-07-23 15:03:22.405371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.405584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.405837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.406052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.406281] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=4f6a211 00:09:27.199 [2024-07-23 15:03:22.406496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ed86651a 00:09:27.199 [2024-07-23 15:03:22.406725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:09:27.199 [2024-07-23 15:03:22.406975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4833a266, Actual=88010a2d4837a266 00:09:27.199 [2024-07-23 15:03:22.407214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.407443] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.407667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.407906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.408126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=874147d07e8b00ac 00:09:27.199 [2024-07-23 15:03:22.408324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=66d3db1848257f04 00:09:27.199 passed 00:09:27.199 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-23 15:03:22.408573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd48, Actual=fd4c 00:09:27.199 [2024-07-23 15:03:22.408802] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe25, Actual=fe21 00:09:27.199 [2024-07-23 15:03:22.409022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.409227] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.409463] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.409677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.409929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ab3 00:09:27.199 [2024-07-23 15:03:22.410119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2641 00:09:27.199 passed 00:09:27.199 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-23 15:03:22.410340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab353ed, Actual=1ab753ed 00:09:27.199 [2024-07-23 15:03:22.410574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38534660, Actual=38574660 00:09:27.199 [2024-07-23 15:03:22.410819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.411045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.411272] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.411485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.411722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=4f6a211 00:09:27.199 [2024-07-23 15:03:22.411934] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ed86651a 00:09:27.199 [2024-07-23 15:03:22.412164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:09:27.199 passed 00:09:27.199 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:09:27.199 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...[2024-07-23 15:03:22.412385] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4833a266, Actual=88010a2d4837a266 00:09:27.199 [2024-07-23 15:03:22.412610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.412826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=88, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.413048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.413260] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40058 00:09:27.199 [2024-07-23 15:03:22.413476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=874147d07e8b00ac 00:09:27.199 [2024-07-23 15:03:22.413661] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=66d3db1848257f04 00:09:27.199 passed 00:09:27.199 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:09:27.199 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:27.199 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:09:27.199 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:09:27.199 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:27.199 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:09:27.199 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:27.199 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-23 15:03:22.446879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd48, Actual=fd4c 00:09:27.199 [2024-07-23 15:03:22.447727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=f2e, Actual=f2a 00:09:27.199 [2024-07-23 15:03:22.448552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.449362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.450180] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:27.199 [2024-07-23 15:03:22.451004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:27.199 [2024-07-23 15:03:22.451842] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=3ab3 00:09:27.199 [2024-07-23 15:03:22.452647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=66db, Actual=bebb 00:09:27.199 [2024-07-23 15:03:22.453472] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab353ed, Actual=1ab753ed 00:09:27.199 [2024-07-23 15:03:22.454286] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=7cc5d636, Actual=7cc1d636 00:09:27.199 [2024-07-23 15:03:22.455106] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.455917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 
00:09:27.199 [2024-07-23 15:03:22.456729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:27.199 [2024-07-23 15:03:22.457540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:27.199 [2024-07-23 15:03:22.458363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=4f6a211 00:09:27.199 [2024-07-23 15:03:22.459179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=798b71ef, Actual=ac5a5295 00:09:27.199 [2024-07-23 15:03:22.460011] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:09:27.199 [2024-07-23 15:03:22.460847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=b3bc5ebfcfb91563, Actual=b3bc5ebfcfbd1563 00:09:27.199 [2024-07-23 15:03:22.461664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.462481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.463346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:27.199 [2024-07-23 15:03:22.464186] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:27.199 [2024-07-23 15:03:22.465051] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=874147d07e8b00ac 00:09:27.199 passed 00:09:27.199 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-23 15:03:22.465879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a8b5748b76783ef5, Actual=4667a5be766ae397 00:09:27.199 [2024-07-23 15:03:22.466134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd48, Actual=fd4c 00:09:27.199 [2024-07-23 15:03:22.466327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fb35, Actual=fb31 00:09:27.199 [2024-07-23 15:03:22.466523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.466749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.466970] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:09:27.199 [2024-07-23 15:03:22.467165] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:09:27.199 [2024-07-23 15:03:22.467361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=3ab3 00:09:27.199 [2024-07-23 
15:03:22.467565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=4aa0 00:09:27.199 [2024-07-23 15:03:22.467769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab353ed, Actual=1ab753ed 00:09:27.199 [2024-07-23 15:03:22.467976] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9ef3f7b4, Actual=9ef7f7b4 00:09:27.199 [2024-07-23 15:03:22.468186] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.468388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.468586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:09:27.199 [2024-07-23 15:03:22.468808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:09:27.199 [2024-07-23 15:03:22.469020] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=4f6a211 00:09:27.199 [2024-07-23 15:03:22.469221] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=4e6c7317 00:09:27.199 [2024-07-23 15:03:22.469423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:09:27.199 [2024-07-23 15:03:22.469611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5321ca5ff0851ef9, Actual=5321ca5ff0811ef9 00:09:27.199 [2024-07-23 15:03:22.469817] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.470006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:27.199 [2024-07-23 15:03:22.470202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:09:27.199 [2024-07-23 15:03:22.470391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:09:27.199 [2024-07-23 15:03:22.470596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=874147d07e8b00ac 00:09:27.199 [2024-07-23 15:03:22.470829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=a6fa315e4956e80d 00:09:27.199 passed 00:09:27.199 Test: dix_sec_0_md_8_error ...passed 00:09:27.199 Test: dix_sec_512_md_0_error ...passed 00:09:27.199 Test: dix_sec_512_md_16_error ...[2024-07-23 15:03:22.470870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 555:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:09:27.199 [2024-07-23 15:03:22.470894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:09:27.199 [2024-07-23 15:03:22.470922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:09:27.199 passed 00:09:27.199 Test: dix_sec_4096_md_0_8_error ...[2024-07-23 15:03:22.470948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:09:27.199 [2024-07-23 15:03:22.470979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:27.199 [2024-07-23 15:03:22.471005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:27.199 [2024-07-23 15:03:22.471029] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:27.199 passed 00:09:27.199 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-23 15:03:22.471051] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:27.199 passed 00:09:27.199 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:27.199 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:09:27.199 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:27.199 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:09:27.199 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:09:27.199 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:27.199 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:09:27.199 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:27.199 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-23 15:03:22.503653] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd48, Actual=fd4c 00:09:27.200 [2024-07-23 15:03:22.504502] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=f2e, Actual=f2a 00:09:27.200 [2024-07-23 15:03:22.505353] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:27.200 [2024-07-23 15:03:22.506189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:27.200 [2024-07-23 15:03:22.507043] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:27.200 [2024-07-23 15:03:22.507900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:27.200 [2024-07-23 15:03:22.508725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=3ab3 00:09:27.200 [2024-07-23 15:03:22.509588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=66db, Actual=bebb 00:09:27.200 [2024-07-23 15:03:22.510433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab353ed, Actual=1ab753ed 00:09:27.200 [2024-07-23 15:03:22.511278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=7cc5d636, Actual=7cc1d636 00:09:27.200 
[2024-07-23 15:03:22.512116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:27.200 [2024-07-23 15:03:22.512953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:27.200 [2024-07-23 15:03:22.513799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:27.200 [2024-07-23 15:03:22.514614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:27.200 [2024-07-23 15:03:22.515479] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=4f6a211 00:09:27.200 [2024-07-23 15:03:22.516300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=798b71ef, Actual=ac5a5295 00:09:27.200 [2024-07-23 15:03:22.517150] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:09:27.200 [2024-07-23 15:03:22.518012] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=b3bc5ebfcfb91563, Actual=b3bc5ebfcfbd1563 00:09:27.200 [2024-07-23 15:03:22.518871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:27.200 [2024-07-23 15:03:22.519716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=8c 00:09:27.200 [2024-07-23 15:03:22.520542] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:27.200 [2024-07-23 15:03:22.521398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4005b 00:09:27.200 [2024-07-23 15:03:22.522252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=874147d07e8b00ac 00:09:27.200 passed 00:09:27.200 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-23 15:03:22.523100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a8b5748b76783ef5, Actual=4667a5be766ae397 00:09:27.200 [2024-07-23 15:03:22.523416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd48, Actual=fd4c 00:09:27.200 [2024-07-23 15:03:22.523625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fb35, Actual=fb31 00:09:27.200 [2024-07-23 15:03:22.523854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:27.200 [2024-07-23 15:03:22.524050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:27.200 [2024-07-23 15:03:22.524246] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:09:27.200 [2024-07-23 15:03:22.524452] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:09:27.200 [2024-07-23 15:03:22.524639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=3ab3 00:09:27.200 [2024-07-23 15:03:22.524860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=4aa0 00:09:27.200 [2024-07-23 15:03:22.525075] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab353ed, Actual=1ab753ed 00:09:27.200 [2024-07-23 15:03:22.525267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9ef3f7b4, Actual=9ef7f7b4 00:09:27.200 [2024-07-23 15:03:22.525475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:27.200 [2024-07-23 15:03:22.525668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:27.200 [2024-07-23 15:03:22.525875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:09:27.200 [2024-07-23 15:03:22.526086] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:09:27.200 [2024-07-23 15:03:22.526293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=4f6a211 00:09:27.200 [2024-07-23 15:03:22.526476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=4e6c7317 00:09:27.200 [2024-07-23 15:03:22.526697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ec820d3, Actual=a576a7728ecc20d3 00:09:27.200 [2024-07-23 15:03:22.526916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5321ca5ff0851ef9, Actual=5321ca5ff0811ef9 00:09:27.200 [2024-07-23 15:03:22.527118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:27.200 [2024-07-23 15:03:22.527308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8c 00:09:27.200 [2024-07-23 15:03:22.527513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:09:27.200 [2024-07-23 15:03:22.527719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40059 00:09:27.200 [2024-07-23 15:03:22.527948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=874147d07e8b00ac 00:09:27.200 [2024-07-23 15:03:22.528140] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=a6fa315e4956e80d 00:09:27.200 passed 00:09:27.200 Test: set_md_interleave_iovs_test ...passed 00:09:27.200 Test: set_md_interleave_iovs_split_test ...passed 00:09:27.200 
Test: dif_generate_stream_pi_16_test ...passed 00:09:27.200 Test: dif_generate_stream_test ...passed 00:09:27.200 Test: set_md_interleave_iovs_alignment_test ...passed 00:09:27.200 Test: dif_generate_split_test ...[2024-07-23 15:03:22.534216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1857:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 00:09:27.200 passed 00:09:27.200 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:09:27.200 Test: dif_verify_split_test ...passed 00:09:27.200 Test: dif_verify_stream_multi_segments_test ...passed 00:09:27.200 Test: update_crc32c_pi_16_test ...passed 00:09:27.200 Test: update_crc32c_test ...passed 00:09:27.200 Test: dif_update_crc32c_split_test ...passed 00:09:27.200 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:09:27.200 Test: get_range_with_md_test ...passed 00:09:27.200 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:09:27.200 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:09:27.200 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:09:27.200 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:09:27.200 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:09:27.200 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:09:27.200 Test: dif_generate_and_verify_unmap_test ...passed 00:09:27.200 Test: dif_pi_format_check_test ...passed 00:09:27.200 Test: dif_type_check_test ...passed 00:09:27.200 00:09:27.200 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.200 suites 1 1 n/a 0 0 00:09:27.200 tests 86 86 86 0 0 00:09:27.200 asserts 3605 3605 3605 0 n/a 00:09:27.200 00:09:27.200 Elapsed time = 0.286 seconds 00:09:27.200 15:03:22 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:09:27.200 00:09:27.200 00:09:27.200 CUnit - A unit testing framework for C - Version 2.1-3 00:09:27.200 http://cunit.sourceforge.net/ 00:09:27.200 00:09:27.200 00:09:27.200 Suite: iov 00:09:27.200 Test: test_single_iov ...passed 00:09:27.200 Test: test_simple_iov ...passed 00:09:27.200 Test: test_complex_iov ...passed 00:09:27.200 Test: test_iovs_to_buf ...passed 00:09:27.200 Test: test_buf_to_iovs ...passed 00:09:27.200 Test: test_memset ...passed 00:09:27.200 Test: test_iov_one ...passed 00:09:27.200 Test: test_iov_xfer ...passed 00:09:27.200 00:09:27.200 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.200 suites 1 1 n/a 0 0 00:09:27.200 tests 8 8 8 0 0 00:09:27.200 asserts 156 156 156 0 n/a 00:09:27.200 00:09:27.200 Elapsed time = 0.000 seconds 00:09:27.200 15:03:22 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:09:27.458 00:09:27.458 00:09:27.458 CUnit - A unit testing framework for C - Version 2.1-3 00:09:27.458 http://cunit.sourceforge.net/ 00:09:27.458 00:09:27.458 00:09:27.458 Suite: math 00:09:27.458 Test: test_serial_number_arithmetic ...passed 00:09:27.458 Suite: erase 00:09:27.458 Test: test_memset_s ...passed 00:09:27.458 00:09:27.458 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.458 suites 2 2 n/a 0 0 00:09:27.458 tests 2 2 2 0 0 00:09:27.458 asserts 18 18 18 0 n/a 00:09:27.458 00:09:27.458 Elapsed time = 0.000 seconds 00:09:27.458 15:03:22 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:09:27.458 00:09:27.458 00:09:27.458 CUnit 
- A unit testing framework for C - Version 2.1-3 00:09:27.458 http://cunit.sourceforge.net/ 00:09:27.458 00:09:27.458 00:09:27.458 Suite: pipe 00:09:27.458 Test: test_create_destroy ...passed 00:09:27.458 Test: test_write_get_buffer ...passed 00:09:27.458 Test: test_write_advance ...passed 00:09:27.458 Test: test_read_get_buffer ...passed 00:09:27.458 Test: test_read_advance ...passed 00:09:27.458 Test: test_data ...passed 00:09:27.458 00:09:27.458 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.458 suites 1 1 n/a 0 0 00:09:27.458 tests 6 6 6 0 0 00:09:27.458 asserts 251 251 251 0 n/a 00:09:27.458 00:09:27.458 Elapsed time = 0.000 seconds 00:09:27.458 15:03:22 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:09:27.458 00:09:27.458 00:09:27.458 CUnit - A unit testing framework for C - Version 2.1-3 00:09:27.458 http://cunit.sourceforge.net/ 00:09:27.458 00:09:27.458 00:09:27.458 Suite: xor 00:09:27.458 Test: test_xor_gen ...passed 00:09:27.458 00:09:27.458 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.459 suites 1 1 n/a 0 0 00:09:27.459 tests 1 1 1 0 0 00:09:27.459 asserts 17 17 17 0 n/a 00:09:27.459 00:09:27.459 Elapsed time = 0.006 seconds 00:09:27.459 00:09:27.459 real 0m0.773s 00:09:27.459 user 0m0.497s 00:09:27.459 sys 0m0.282s 00:09:27.459 15:03:22 unittest.unittest_util -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.459 15:03:22 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:09:27.459 ************************************ 00:09:27.459 END TEST unittest_util 00:09:27.459 ************************************ 00:09:27.459 15:03:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:27.459 15:03:22 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:27.459 15:03:22 unittest -- unit/unittest.sh@285 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:09:27.459 15:03:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:27.459 15:03:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.459 15:03:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:27.459 ************************************ 00:09:27.459 START TEST unittest_vhost 00:09:27.459 ************************************ 00:09:27.459 15:03:22 unittest.unittest_vhost -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:09:27.459 00:09:27.459 00:09:27.459 CUnit - A unit testing framework for C - Version 2.1-3 00:09:27.459 http://cunit.sourceforge.net/ 00:09:27.459 00:09:27.459 00:09:27.459 Suite: vhost_suite 00:09:27.459 Test: desc_to_iov_test ...[2024-07-23 15:03:22.829084] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:09:27.459 passed 00:09:27.459 Test: create_controller_test ...[2024-07-23 15:03:22.836880] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:09:27.459 [2024-07-23 15:03:22.837047] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:09:27.459 [2024-07-23 15:03:22.837240] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 
00:09:27.459 [2024-07-23 15:03:22.837373] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:09:27.459 [2024-07-23 15:03:22.837451] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:09:27.459 [2024-07-23 15:03:22.838293] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for controller is too long: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 00:09:27.459 passed 00:09:27.459 Test: session_find_by_vid_test ...[2024-07-23 15:03:22.840284] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 137:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:09:27.459 passed 00:09:27.459 Test: remove_controller_test ...[2024-07-23 15:03:22.844381] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1866:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:09:27.459 passed 00:09:27.459 Test: vq_avail_ring_get_test ...passed 00:09:27.459 Test: vq_packed_ring_test ...passed 00:09:27.459 Test: vhost_blk_construct_test ...passed 00:09:27.459 00:09:27.459 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.459 suites 1 1 n/a 0 0 00:09:27.459 tests 7 7 7 0 0 00:09:27.459 asserts 147 147 147 0 n/a 00:09:27.459 00:09:27.459 Elapsed time = 0.023 seconds 00:09:27.459 00:09:27.459 real 0m0.076s 00:09:27.459 user 0m0.045s 00:09:27.459 sys 0m0.032s 00:09:27.459 ************************************ 00:09:27.459 END TEST unittest_vhost 00:09:27.459 ************************************ 00:09:27.459 15:03:22 unittest.unittest_vhost -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.459 15:03:22 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:09:27.717 15:03:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:27.717 15:03:22 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:09:27.717 15:03:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:27.717 15:03:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.717 15:03:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:27.717 ************************************ 00:09:27.717 START TEST unittest_dma 00:09:27.717 ************************************ 00:09:27.717 15:03:22 unittest.unittest_dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:09:27.717 00:09:27.717 00:09:27.717 CUnit - A unit testing framework for C - Version 2.1-3 00:09:27.717 http://cunit.sourceforge.net/ 00:09:27.717 00:09:27.717 00:09:27.717 Suite: dma_suite 00:09:27.717 Test: test_dma ...passed 00:09:27.717 00:09:27.717 [2024-07-23 15:03:22.948105] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:09:27.717 Run Summary: Type Total Ran Passed Failed Inactive 
00:09:27.717 suites 1 1 n/a 0 0 00:09:27.717 tests 1 1 1 0 0 00:09:27.717 asserts 54 54 54 0 n/a 00:09:27.717 00:09:27.717 Elapsed time = 0.000 seconds 00:09:27.717 00:09:27.717 real 0m0.033s 00:09:27.717 user 0m0.017s 00:09:27.717 sys 0m0.016s 00:09:27.717 15:03:22 unittest.unittest_dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.717 15:03:22 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:09:27.717 ************************************ 00:09:27.717 END TEST unittest_dma 00:09:27.717 ************************************ 00:09:27.717 15:03:23 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:27.717 15:03:23 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:09:27.717 15:03:23 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:27.717 15:03:23 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.717 15:03:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:27.717 ************************************ 00:09:27.717 START TEST unittest_init 00:09:27.717 ************************************ 00:09:27.717 15:03:23 unittest.unittest_init -- common/autotest_common.sh@1123 -- # unittest_init 00:09:27.717 15:03:23 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:09:27.717 00:09:27.717 00:09:27.717 CUnit - A unit testing framework for C - Version 2.1-3 00:09:27.717 http://cunit.sourceforge.net/ 00:09:27.717 00:09:27.717 00:09:27.717 Suite: subsystem_suite 00:09:27.717 Test: subsystem_sort_test_depends_on_single ...passed 00:09:27.717 Test: subsystem_sort_test_depends_on_multiple ...passed 00:09:27.717 Test: subsystem_sort_test_missing_dependency ...[2024-07-23 15:03:23.039866] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:09:27.717 passed 00:09:27.717 00:09:27.717 [2024-07-23 15:03:23.040160] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:09:27.717 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.717 suites 1 1 n/a 0 0 00:09:27.717 tests 3 3 3 0 0 00:09:27.717 asserts 20 20 20 0 n/a 00:09:27.717 00:09:27.717 Elapsed time = 0.000 seconds 00:09:27.717 00:09:27.717 real 0m0.042s 00:09:27.717 user 0m0.021s 00:09:27.717 sys 0m0.021s 00:09:27.717 15:03:23 unittest.unittest_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.717 15:03:23 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:09:27.717 ************************************ 00:09:27.717 END TEST unittest_init 00:09:27.717 ************************************ 00:09:27.717 15:03:23 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:27.717 15:03:23 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:09:27.717 15:03:23 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:27.717 15:03:23 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.717 15:03:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:27.717 ************************************ 00:09:27.717 START TEST unittest_keyring 00:09:27.717 ************************************ 00:09:27.717 15:03:23 unittest.unittest_keyring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:09:27.717 00:09:27.717 00:09:27.717 
CUnit - A unit testing framework for C - Version 2.1-3 00:09:27.717 http://cunit.sourceforge.net/ 00:09:27.717 00:09:27.717 00:09:27.717 Suite: keyring 00:09:27.717 Test: test_keyring_add_remove ...[2024-07-23 15:03:23.130363] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:09:27.718 [2024-07-23 15:03:23.130711] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:09:27.718 [2024-07-23 15:03:23.130839] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:09:27.718 passed 00:09:27.718 Test: test_keyring_get_put ...passed 00:09:27.718 00:09:27.718 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.718 suites 1 1 n/a 0 0 00:09:27.718 tests 2 2 2 0 0 00:09:27.718 asserts 44 44 44 0 n/a 00:09:27.718 00:09:27.718 Elapsed time = 0.001 seconds 00:09:27.976 00:09:27.976 real 0m0.040s 00:09:27.976 user 0m0.017s 00:09:27.976 sys 0m0.023s 00:09:27.976 15:03:23 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.976 15:03:23 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:09:27.976 ************************************ 00:09:27.976 END TEST unittest_keyring 00:09:27.976 ************************************ 00:09:27.976 15:03:23 unittest -- common/autotest_common.sh@1142 -- # return 0 00:09:27.976 15:03:23 unittest -- unit/unittest.sh@292 -- # '[' yes = yes ']' 00:09:27.976 15:03:23 unittest -- unit/unittest.sh@292 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:09:27.976 15:03:23 unittest -- unit/unittest.sh@293 -- # hostname 00:09:27.976 15:03:23 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:09:28.234 geninfo: WARNING: invalid characters removed from testname! 
00:10:06.964 15:03:58 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:10:07.532 15:04:02 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:10.884 15:04:05 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:13.435 15:04:08 unittest -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:15.964 15:04:10 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:17.867 15:04:13 unittest -- unit/unittest.sh@299 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:20.396 15:04:15 unittest -- unit/unittest.sh@300 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:22.925 15:04:17 unittest -- unit/unittest.sh@301 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:10:22.925 15:04:17 unittest -- unit/unittest.sh@302 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:10:23.183 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 
00:10:23.183 Found 326 entries. 00:10:23.183 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:10:23.183 Writing .css and .png files. 00:10:23.183 Generating output. 00:10:23.183 Processing file include/linux/virtio_ring.h 00:10:23.749 Processing file include/spdk/base64.h 00:10:23.749 Processing file include/spdk/endian.h 00:10:23.749 Processing file include/spdk/histogram_data.h 00:10:23.749 Processing file include/spdk/nvme_spec.h 00:10:23.749 Processing file include/spdk/nvme.h 00:10:23.749 Processing file include/spdk/bdev_module.h 00:10:23.749 Processing file include/spdk/util.h 00:10:23.749 Processing file include/spdk/nvmf_transport.h 00:10:23.749 Processing file include/spdk/trace.h 00:10:23.749 Processing file include/spdk/mmio.h 00:10:23.749 Processing file include/spdk/thread.h 00:10:23.749 Processing file include/spdk_internal/virtio.h 00:10:23.749 Processing file include/spdk_internal/rdma_utils.h 00:10:23.749 Processing file include/spdk_internal/utf.h 00:10:23.749 Processing file include/spdk_internal/nvme_tcp.h 00:10:23.749 Processing file include/spdk_internal/sock.h 00:10:23.749 Processing file include/spdk_internal/sgl.h 00:10:23.749 Processing file lib/accel/accel.c 00:10:23.749 Processing file lib/accel/accel_rpc.c 00:10:23.749 Processing file lib/accel/accel_sw.c 00:10:24.007 Processing file lib/bdev/bdev_rpc.c 00:10:24.007 Processing file lib/bdev/scsi_nvme.c 00:10:24.007 Processing file lib/bdev/bdev.c 00:10:24.007 Processing file lib/bdev/part.c 00:10:24.007 Processing file lib/bdev/bdev_zone.c 00:10:24.266 Processing file lib/blob/blobstore.h 00:10:24.266 Processing file lib/blob/blobstore.c 00:10:24.266 Processing file lib/blob/zeroes.c 00:10:24.266 Processing file lib/blob/blob_bs_dev.c 00:10:24.266 Processing file lib/blob/request.c 00:10:24.266 Processing file lib/blobfs/blobfs.c 00:10:24.266 Processing file lib/blobfs/tree.c 00:10:24.523 Processing file lib/conf/conf.c 00:10:24.523 Processing file lib/dma/dma.c 00:10:24.782 Processing file lib/env_dpdk/pci_dpdk.c 00:10:24.782 Processing file lib/env_dpdk/pci_ioat.c 00:10:24.782 Processing file lib/env_dpdk/memory.c 00:10:24.782 Processing file lib/env_dpdk/pci_vmd.c 00:10:24.782 Processing file lib/env_dpdk/threads.c 00:10:24.782 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:10:24.782 Processing file lib/env_dpdk/sigbus_handler.c 00:10:24.782 Processing file lib/env_dpdk/pci_idxd.c 00:10:24.782 Processing file lib/env_dpdk/pci_virtio.c 00:10:24.782 Processing file lib/env_dpdk/pci.c 00:10:24.782 Processing file lib/env_dpdk/env.c 00:10:24.782 Processing file lib/env_dpdk/pci_event.c 00:10:24.782 Processing file lib/env_dpdk/init.c 00:10:24.782 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:10:24.782 Processing file lib/event/log_rpc.c 00:10:24.782 Processing file lib/event/reactor.c 00:10:24.782 Processing file lib/event/scheduler_static.c 00:10:24.782 Processing file lib/event/app_rpc.c 00:10:24.782 Processing file lib/event/app.c 00:10:25.346 Processing file lib/ftl/ftl_nv_cache.c 00:10:25.346 Processing file lib/ftl/ftl_l2p_cache.c 00:10:25.346 Processing file lib/ftl/ftl_nv_cache_io.h 00:10:25.346 Processing file lib/ftl/ftl_writer.h 00:10:25.346 Processing file lib/ftl/ftl_io.c 00:10:25.346 Processing file lib/ftl/ftl_rq.c 00:10:25.346 Processing file lib/ftl/ftl_l2p_flat.c 00:10:25.346 Processing file lib/ftl/ftl_band_ops.c 00:10:25.346 Processing file lib/ftl/ftl_core.h 00:10:25.346 Processing file lib/ftl/ftl_reloc.c 00:10:25.346 Processing file lib/ftl/ftl_core.c 00:10:25.346 
Processing file lib/ftl/ftl_p2l.c 00:10:25.346 Processing file lib/ftl/ftl_debug.c 00:10:25.346 Processing file lib/ftl/ftl_sb.c 00:10:25.346 Processing file lib/ftl/ftl_l2p.c 00:10:25.346 Processing file lib/ftl/ftl_trace.c 00:10:25.346 Processing file lib/ftl/ftl_layout.c 00:10:25.346 Processing file lib/ftl/ftl_nv_cache.h 00:10:25.346 Processing file lib/ftl/ftl_band.c 00:10:25.346 Processing file lib/ftl/ftl_writer.c 00:10:25.346 Processing file lib/ftl/ftl_debug.h 00:10:25.346 Processing file lib/ftl/ftl_init.c 00:10:25.346 Processing file lib/ftl/ftl_band.h 00:10:25.346 Processing file lib/ftl/ftl_io.h 00:10:25.346 Processing file lib/ftl/base/ftl_base_bdev.c 00:10:25.346 Processing file lib/ftl/base/ftl_base_dev.c 00:10:25.604 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:10:25.604 Processing file lib/ftl/mngt/ftl_mngt.c 00:10:25.604 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:10:25.604 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:10:25.604 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:10:25.604 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:10:25.604 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:10:25.604 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:10:25.604 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:10:25.604 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:10:25.604 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:10:25.604 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:10:25.604 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:10:25.604 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:10:25.604 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:10:25.604 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:10:25.604 Processing file lib/ftl/upgrade/ftl_trim_upgrade.c 00:10:25.604 Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c 00:10:25.604 Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c 00:10:25.604 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:10:25.604 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:10:25.604 Processing file lib/ftl/upgrade/ftl_band_upgrade.c 00:10:25.604 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:10:25.862 Processing file lib/ftl/utils/ftl_property.c 00:10:25.862 Processing file lib/ftl/utils/ftl_property.h 00:10:25.862 Processing file lib/ftl/utils/ftl_df.h 00:10:25.862 Processing file lib/ftl/utils/ftl_bitmap.c 00:10:25.862 Processing file lib/ftl/utils/ftl_mempool.c 00:10:25.862 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:10:25.862 Processing file lib/ftl/utils/ftl_md.c 00:10:25.862 Processing file lib/ftl/utils/ftl_conf.c 00:10:25.862 Processing file lib/ftl/utils/ftl_addr_utils.h 00:10:25.862 Processing file lib/idxd/idxd_internal.h 00:10:25.862 Processing file lib/idxd/idxd_kernel.c 00:10:25.862 Processing file lib/idxd/idxd_user.c 00:10:25.862 Processing file lib/idxd/idxd.c 00:10:26.120 Processing file lib/init/json_config.c 00:10:26.120 Processing file lib/init/subsystem.c 00:10:26.120 Processing file lib/init/subsystem_rpc.c 00:10:26.120 Processing file lib/init/rpc.c 00:10:26.120 Processing file lib/ioat/ioat_internal.h 00:10:26.120 Processing file lib/ioat/ioat.c 00:10:26.377 Processing file lib/iscsi/iscsi.h 00:10:26.377 Processing file lib/iscsi/task.h 00:10:26.377 Processing file lib/iscsi/portal_grp.c 00:10:26.377 Processing file lib/iscsi/init_grp.c 00:10:26.377 Processing file lib/iscsi/tgt_node.c 00:10:26.377 Processing file lib/iscsi/conn.c 00:10:26.377 Processing file lib/iscsi/task.c 00:10:26.377 Processing file lib/iscsi/iscsi.c 00:10:26.377 Processing file 
lib/iscsi/iscsi_rpc.c 00:10:26.377 Processing file lib/iscsi/iscsi_subsystem.c 00:10:26.377 Processing file lib/iscsi/md5.c 00:10:26.377 Processing file lib/iscsi/param.c 00:10:26.377 Processing file lib/json/json_parse.c 00:10:26.377 Processing file lib/json/json_util.c 00:10:26.377 Processing file lib/json/json_write.c 00:10:26.635 Processing file lib/jsonrpc/jsonrpc_server.c 00:10:26.635 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:10:26.635 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:10:26.635 Processing file lib/jsonrpc/jsonrpc_client.c 00:10:26.635 Processing file lib/keyring/keyring_rpc.c 00:10:26.635 Processing file lib/keyring/keyring.c 00:10:26.635 Processing file lib/log/log_deprecated.c 00:10:26.635 Processing file lib/log/log.c 00:10:26.635 Processing file lib/log/log_flags.c 00:10:26.893 Processing file lib/lvol/lvol.c 00:10:26.893 Processing file lib/nbd/nbd.c 00:10:26.893 Processing file lib/nbd/nbd_rpc.c 00:10:26.893 Processing file lib/notify/notify_rpc.c 00:10:26.893 Processing file lib/notify/notify.c 00:10:27.459 Processing file lib/nvme/nvme_ctrlr.c 00:10:27.459 Processing file lib/nvme/nvme_internal.h 00:10:27.459 Processing file lib/nvme/nvme_pcie.c 00:10:27.459 Processing file lib/nvme/nvme_transport.c 00:10:27.459 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:10:27.459 Processing file lib/nvme/nvme_ns.c 00:10:27.459 Processing file lib/nvme/nvme_zns.c 00:10:27.460 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:10:27.460 Processing file lib/nvme/nvme_rdma.c 00:10:27.460 Processing file lib/nvme/nvme_io_msg.c 00:10:27.460 Processing file lib/nvme/nvme_tcp.c 00:10:27.460 Processing file lib/nvme/nvme_cuse.c 00:10:27.460 Processing file lib/nvme/nvme_discovery.c 00:10:27.460 Processing file lib/nvme/nvme_poll_group.c 00:10:27.460 Processing file lib/nvme/nvme_pcie_internal.h 00:10:27.460 Processing file lib/nvme/nvme_auth.c 00:10:27.460 Processing file lib/nvme/nvme_qpair.c 00:10:27.460 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:10:27.460 Processing file lib/nvme/nvme_opal.c 00:10:27.460 Processing file lib/nvme/nvme.c 00:10:27.460 Processing file lib/nvme/nvme_pcie_common.c 00:10:27.460 Processing file lib/nvme/nvme_quirks.c 00:10:27.460 Processing file lib/nvme/nvme_ns_cmd.c 00:10:27.460 Processing file lib/nvme/nvme_fabric.c 00:10:28.026 Processing file lib/nvmf/auth.c 00:10:28.026 Processing file lib/nvmf/nvmf.c 00:10:28.026 Processing file lib/nvmf/nvmf_rpc.c 00:10:28.026 Processing file lib/nvmf/nvmf_internal.h 00:10:28.026 Processing file lib/nvmf/ctrlr_bdev.c 00:10:28.026 Processing file lib/nvmf/tcp.c 00:10:28.026 Processing file lib/nvmf/ctrlr.c 00:10:28.026 Processing file lib/nvmf/transport.c 00:10:28.026 Processing file lib/nvmf/rdma.c 00:10:28.026 Processing file lib/nvmf/ctrlr_discovery.c 00:10:28.026 Processing file lib/nvmf/subsystem.c 00:10:28.026 Processing file lib/rdma_provider/common.c 00:10:28.026 Processing file lib/rdma_provider/rdma_provider_verbs.c 00:10:28.283 Processing file lib/rdma_utils/rdma_utils.c 00:10:28.283 Processing file lib/rpc/rpc.c 00:10:28.283 Processing file lib/scsi/scsi_rpc.c 00:10:28.283 Processing file lib/scsi/lun.c 00:10:28.283 Processing file lib/scsi/scsi.c 00:10:28.283 Processing file lib/scsi/scsi_pr.c 00:10:28.283 Processing file lib/scsi/task.c 00:10:28.283 Processing file lib/scsi/port.c 00:10:28.283 Processing file lib/scsi/dev.c 00:10:28.283 Processing file lib/scsi/scsi_bdev.c 00:10:28.540 Processing file lib/sock/sock.c 00:10:28.540 Processing file lib/sock/sock_rpc.c 00:10:28.540 
Processing file lib/thread/thread.c 00:10:28.540 Processing file lib/thread/iobuf.c 00:10:28.540 Processing file lib/trace/trace_rpc.c 00:10:28.540 Processing file lib/trace/trace.c 00:10:28.540 Processing file lib/trace/trace_flags.c 00:10:28.798 Processing file lib/trace_parser/trace.cpp 00:10:28.798 Processing file lib/ublk/ublk_rpc.c 00:10:28.798 Processing file lib/ublk/ublk.c 00:10:28.799 Processing file lib/ut/ut.c 00:10:28.799 Processing file lib/ut_mock/mock.c 00:10:29.363 Processing file lib/util/fd.c 00:10:29.363 Processing file lib/util/strerror_tls.c 00:10:29.363 Processing file lib/util/iov.c 00:10:29.363 Processing file lib/util/uuid.c 00:10:29.363 Processing file lib/util/crc16.c 00:10:29.363 Processing file lib/util/math.c 00:10:29.363 Processing file lib/util/zipf.c 00:10:29.363 Processing file lib/util/net.c 00:10:29.363 Processing file lib/util/crc64.c 00:10:29.363 Processing file lib/util/file.c 00:10:29.363 Processing file lib/util/cpuset.c 00:10:29.363 Processing file lib/util/hexlify.c 00:10:29.363 Processing file lib/util/base64.c 00:10:29.363 Processing file lib/util/bit_array.c 00:10:29.363 Processing file lib/util/string.c 00:10:29.363 Processing file lib/util/crc32_ieee.c 00:10:29.363 Processing file lib/util/crc32c.c 00:10:29.363 Processing file lib/util/fd_group.c 00:10:29.363 Processing file lib/util/pipe.c 00:10:29.363 Processing file lib/util/dif.c 00:10:29.363 Processing file lib/util/crc32.c 00:10:29.363 Processing file lib/util/xor.c 00:10:29.363 Processing file lib/vfio_user/host/vfio_user_pci.c 00:10:29.363 Processing file lib/vfio_user/host/vfio_user.c 00:10:29.622 Processing file lib/vhost/rte_vhost_user.c 00:10:29.622 Processing file lib/vhost/vhost_internal.h 00:10:29.622 Processing file lib/vhost/vhost_scsi.c 00:10:29.622 Processing file lib/vhost/vhost_rpc.c 00:10:29.622 Processing file lib/vhost/vhost_blk.c 00:10:29.622 Processing file lib/vhost/vhost.c 00:10:29.622 Processing file lib/virtio/virtio_vfio_user.c 00:10:29.622 Processing file lib/virtio/virtio_vhost_user.c 00:10:29.622 Processing file lib/virtio/virtio.c 00:10:29.622 Processing file lib/virtio/virtio_pci.c 00:10:29.879 Processing file lib/vmd/vmd.c 00:10:29.879 Processing file lib/vmd/led.c 00:10:29.879 Processing file module/accel/dsa/accel_dsa_rpc.c 00:10:29.879 Processing file module/accel/dsa/accel_dsa.c 00:10:29.879 Processing file module/accel/error/accel_error_rpc.c 00:10:29.879 Processing file module/accel/error/accel_error.c 00:10:29.879 Processing file module/accel/iaa/accel_iaa_rpc.c 00:10:29.879 Processing file module/accel/iaa/accel_iaa.c 00:10:30.138 Processing file module/accel/ioat/accel_ioat_rpc.c 00:10:30.138 Processing file module/accel/ioat/accel_ioat.c 00:10:30.138 Processing file module/bdev/aio/bdev_aio_rpc.c 00:10:30.138 Processing file module/bdev/aio/bdev_aio.c 00:10:30.138 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:10:30.138 Processing file module/bdev/delay/vbdev_delay.c 00:10:30.138 Processing file module/bdev/error/vbdev_error_rpc.c 00:10:30.138 Processing file module/bdev/error/vbdev_error.c 00:10:30.396 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:10:30.396 Processing file module/bdev/ftl/bdev_ftl.c 00:10:30.396 Processing file module/bdev/gpt/gpt.h 00:10:30.396 Processing file module/bdev/gpt/gpt.c 00:10:30.396 Processing file module/bdev/gpt/vbdev_gpt.c 00:10:30.396 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:10:30.396 Processing file module/bdev/iscsi/bdev_iscsi.c 00:10:30.656 Processing file 
module/bdev/lvol/vbdev_lvol.c 00:10:30.656 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:10:30.656 Processing file module/bdev/malloc/bdev_malloc.c 00:10:30.656 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:10:30.656 Processing file module/bdev/null/bdev_null_rpc.c 00:10:30.656 Processing file module/bdev/null/bdev_null.c 00:10:30.934 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:10:30.934 Processing file module/bdev/nvme/bdev_mdns_client.c 00:10:30.934 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:10:30.934 Processing file module/bdev/nvme/bdev_nvme.c 00:10:30.934 Processing file module/bdev/nvme/vbdev_opal.c 00:10:30.934 Processing file module/bdev/nvme/nvme_rpc.c 00:10:30.934 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:10:30.934 Processing file module/bdev/passthru/vbdev_passthru.c 00:10:30.934 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:10:31.193 Processing file module/bdev/raid/raid5f.c 00:10:31.193 Processing file module/bdev/raid/bdev_raid_sb.c 00:10:31.193 Processing file module/bdev/raid/raid0.c 00:10:31.193 Processing file module/bdev/raid/bdev_raid_rpc.c 00:10:31.193 Processing file module/bdev/raid/concat.c 00:10:31.193 Processing file module/bdev/raid/bdev_raid.h 00:10:31.193 Processing file module/bdev/raid/raid1.c 00:10:31.193 Processing file module/bdev/raid/bdev_raid.c 00:10:31.193 Processing file module/bdev/split/vbdev_split.c 00:10:31.193 Processing file module/bdev/split/vbdev_split_rpc.c 00:10:31.452 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:10:31.452 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:10:31.452 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:10:31.452 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:10:31.452 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:10:31.452 Processing file module/blob/bdev/blob_bdev.c 00:10:31.452 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:10:31.452 Processing file module/blobfs/bdev/blobfs_bdev.c 00:10:31.710 Processing file module/env_dpdk/env_dpdk_rpc.c 00:10:31.710 Processing file module/event/subsystems/accel/accel.c 00:10:31.710 Processing file module/event/subsystems/bdev/bdev.c 00:10:31.710 Processing file module/event/subsystems/iobuf/iobuf.c 00:10:31.710 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:10:31.710 Processing file module/event/subsystems/iscsi/iscsi.c 00:10:31.710 Processing file module/event/subsystems/keyring/keyring.c 00:10:31.968 Processing file module/event/subsystems/nbd/nbd.c 00:10:31.968 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:10:31.968 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:10:31.968 Processing file module/event/subsystems/scheduler/scheduler.c 00:10:31.968 Processing file module/event/subsystems/scsi/scsi.c 00:10:31.968 Processing file module/event/subsystems/sock/sock.c 00:10:31.968 Processing file module/event/subsystems/ublk/ublk.c 00:10:31.968 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:10:32.225 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:10:32.225 Processing file module/event/subsystems/vmd/vmd.c 00:10:32.225 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:10:32.225 Processing file module/keyring/file/keyring_rpc.c 00:10:32.225 Processing file module/keyring/file/keyring.c 00:10:32.225 Processing file module/keyring/linux/keyring.c 00:10:32.225 Processing file module/keyring/linux/keyring_rpc.c 00:10:32.225 Processing file 
module/scheduler/dpdk_governor/dpdk_governor.c 00:10:32.483 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:10:32.483 Processing file module/scheduler/gscheduler/gscheduler.c 00:10:32.483 Processing file module/sock/posix/posix.c 00:10:32.483 Writing directory view page. 00:10:32.483 Overall coverage rate: 00:10:32.483 lines......: 38.2% (41086 of 107433 lines) 00:10:32.483 functions..: 41.8% (3741 of 8940 functions) 00:10:32.483 00:10:32.483 00:10:32.483 ===================== 00:10:32.483 All unit tests passed 00:10:32.483 ===================== 00:10:32.483 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:10:32.483 15:04:27 unittest -- unit/unittest.sh@305 -- # set +x 00:10:32.483 00:10:32.483 00:10:32.483 00:10:32.483 real 4m1.150s 00:10:32.483 user 3m30.549s 00:10:32.483 sys 0m22.845s 00:10:32.483 15:04:27 unittest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.483 15:04:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:10:32.483 ************************************ 00:10:32.483 END TEST unittest 00:10:32.483 ************************************ 00:10:32.483 15:04:27 -- common/autotest_common.sh@1142 -- # return 0 00:10:32.483 15:04:27 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:10:32.483 15:04:27 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:10:32.483 15:04:27 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:10:32.483 15:04:27 -- spdk/autotest.sh@162 -- # timing_enter lib 00:10:32.483 15:04:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:32.483 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:10:32.483 15:04:27 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:10:32.483 15:04:27 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:32.483 15:04:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:32.483 15:04:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.483 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:10:32.741 ************************************ 00:10:32.741 START TEST env 00:10:32.741 ************************************ 00:10:32.741 15:04:27 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:32.741 * Looking for test storage... 
00:10:32.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:32.741 15:04:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:32.741 15:04:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:32.741 15:04:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.741 15:04:28 env -- common/autotest_common.sh@10 -- # set +x 00:10:32.741 ************************************ 00:10:32.741 START TEST env_memory 00:10:32.741 ************************************ 00:10:32.741 15:04:28 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:32.741 00:10:32.741 00:10:32.741 CUnit - A unit testing framework for C - Version 2.1-3 00:10:32.741 http://cunit.sourceforge.net/ 00:10:32.741 00:10:32.741 00:10:32.741 Suite: memory 00:10:32.741 Test: alloc and free memory map ...[2024-07-23 15:04:28.109870] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:32.998 passed 00:10:32.999 Test: mem map translation ...[2024-07-23 15:04:28.185845] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:32.999 [2024-07-23 15:04:28.186156] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:32.999 [2024-07-23 15:04:28.186501] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:32.999 [2024-07-23 15:04:28.186667] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:32.999 passed 00:10:32.999 Test: mem map registration ...[2024-07-23 15:04:28.298495] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:10:32.999 [2024-07-23 15:04:28.298842] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:10:32.999 passed 00:10:32.999 Test: mem map adjacent registrations ...passed 00:10:32.999 00:10:32.999 Run Summary: Type Total Ran Passed Failed Inactive 00:10:32.999 suites 1 1 n/a 0 0 00:10:32.999 tests 4 4 4 0 0 00:10:32.999 asserts 152 152 152 0 n/a 00:10:32.999 00:10:33.256 Elapsed time = 0.372 seconds 00:10:33.256 00:10:33.256 real 0m0.415s 00:10:33.256 user 0m0.380s 00:10:33.256 sys 0m0.032s 00:10:33.256 15:04:28 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.256 ************************************ 00:10:33.256 END TEST env_memory 00:10:33.256 ************************************ 00:10:33.256 15:04:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:33.256 15:04:28 env -- common/autotest_common.sh@1142 -- # return 0 00:10:33.256 15:04:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:33.256 15:04:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:33.256 15:04:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.256 15:04:28 env -- common/autotest_common.sh@10 -- # set +x 00:10:33.256 ************************************ 00:10:33.256 START TEST env_vtophys 
00:10:33.256 ************************************ 00:10:33.256 15:04:28 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:33.256 EAL: lib.eal log level changed from notice to debug 00:10:33.256 EAL: Detected lcore 0 as core 0 on socket 0 00:10:33.256 EAL: Detected lcore 1 as core 0 on socket 0 00:10:33.256 EAL: Detected lcore 2 as core 0 on socket 0 00:10:33.256 EAL: Detected lcore 3 as core 0 on socket 0 00:10:33.256 EAL: Detected lcore 4 as core 0 on socket 0 00:10:33.256 EAL: Detected lcore 5 as core 0 on socket 0 00:10:33.256 EAL: Detected lcore 6 as core 0 on socket 0 00:10:33.256 EAL: Detected lcore 7 as core 0 on socket 0 00:10:33.256 EAL: Detected lcore 8 as core 0 on socket 0 00:10:33.256 EAL: Detected lcore 9 as core 0 on socket 0 00:10:33.256 EAL: Maximum logical cores by configuration: 128 00:10:33.256 EAL: Detected CPU lcores: 10 00:10:33.256 EAL: Detected NUMA nodes: 1 00:10:33.256 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:10:33.256 EAL: Checking presence of .so 'librte_eal.so.23' 00:10:33.256 EAL: Checking presence of .so 'librte_eal.so' 00:10:33.256 EAL: Detected static linkage of DPDK 00:10:33.256 EAL: No shared files mode enabled, IPC will be disabled 00:10:33.256 EAL: Selected IOVA mode 'PA' 00:10:33.256 EAL: Probing VFIO support... 00:10:33.256 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:33.256 EAL: VFIO modules not loaded, skipping VFIO support... 00:10:33.256 EAL: Ask a virtual area of 0x2e000 bytes 00:10:33.256 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:33.256 EAL: Setting up physically contiguous memory... 00:10:33.256 EAL: Setting maximum number of open files to 1048576 00:10:33.256 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:33.256 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:33.256 EAL: Ask a virtual area of 0x61000 bytes 00:10:33.256 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:33.256 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:33.256 EAL: Ask a virtual area of 0x400000000 bytes 00:10:33.257 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:33.257 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:33.257 EAL: Ask a virtual area of 0x61000 bytes 00:10:33.257 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:33.257 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:33.257 EAL: Ask a virtual area of 0x400000000 bytes 00:10:33.257 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:33.257 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:33.257 EAL: Ask a virtual area of 0x61000 bytes 00:10:33.257 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:33.257 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:33.257 EAL: Ask a virtual area of 0x400000000 bytes 00:10:33.257 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:33.257 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:33.257 EAL: Ask a virtual area of 0x61000 bytes 00:10:33.257 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:33.257 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:33.257 EAL: Ask a virtual area of 0x400000000 bytes 00:10:33.257 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:33.257 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:10:33.257 EAL: Hugepages will be freed exactly as allocated. 00:10:33.257 EAL: No shared files mode enabled, IPC is disabled 00:10:33.257 EAL: No shared files mode enabled, IPC is disabled 00:10:33.257 EAL: TSC frequency is ~2100000 KHz 00:10:33.257 EAL: Main lcore 0 is ready (tid=72da0ed16a80;cpuset=[0]) 00:10:33.257 EAL: Trying to obtain current memory policy. 00:10:33.257 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:33.257 EAL: Restoring previous memory policy: 0 00:10:33.257 EAL: request: mp_malloc_sync 00:10:33.257 EAL: No shared files mode enabled, IPC is disabled 00:10:33.257 EAL: Heap on socket 0 was expanded by 2MB 00:10:33.257 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:33.257 EAL: Mem event callback 'spdk:(nil)' registered 00:10:33.257 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:10:33.514 00:10:33.514 00:10:33.514 CUnit - A unit testing framework for C - Version 2.1-3 00:10:33.514 http://cunit.sourceforge.net/ 00:10:33.514 00:10:33.514 00:10:33.514 Suite: components_suite 00:10:33.514 Test: vtophys_malloc_test ...passed 00:10:33.514 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:33.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:33.514 EAL: Restoring previous memory policy: 4 00:10:33.514 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.514 EAL: request: mp_malloc_sync 00:10:33.515 EAL: No shared files mode enabled, IPC is disabled 00:10:33.515 EAL: Heap on socket 0 was expanded by 4MB 00:10:33.515 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.515 EAL: request: mp_malloc_sync 00:10:33.515 EAL: No shared files mode enabled, IPC is disabled 00:10:33.515 EAL: Heap on socket 0 was shrunk by 4MB 00:10:33.515 EAL: Trying to obtain current memory policy. 00:10:33.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:33.515 EAL: Restoring previous memory policy: 4 00:10:33.515 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.515 EAL: request: mp_malloc_sync 00:10:33.515 EAL: No shared files mode enabled, IPC is disabled 00:10:33.515 EAL: Heap on socket 0 was expanded by 6MB 00:10:33.515 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.515 EAL: request: mp_malloc_sync 00:10:33.515 EAL: No shared files mode enabled, IPC is disabled 00:10:33.515 EAL: Heap on socket 0 was shrunk by 6MB 00:10:33.515 EAL: Trying to obtain current memory policy. 00:10:33.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:33.515 EAL: Restoring previous memory policy: 4 00:10:33.515 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.515 EAL: request: mp_malloc_sync 00:10:33.515 EAL: No shared files mode enabled, IPC is disabled 00:10:33.515 EAL: Heap on socket 0 was expanded by 10MB 00:10:33.515 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.515 EAL: request: mp_malloc_sync 00:10:33.515 EAL: No shared files mode enabled, IPC is disabled 00:10:33.515 EAL: Heap on socket 0 was shrunk by 10MB 00:10:33.515 EAL: Trying to obtain current memory policy. 
00:10:33.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:33.515 EAL: Restoring previous memory policy: 4 00:10:33.515 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.515 EAL: request: mp_malloc_sync 00:10:33.515 EAL: No shared files mode enabled, IPC is disabled 00:10:33.515 EAL: Heap on socket 0 was expanded by 18MB 00:10:33.515 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.515 EAL: request: mp_malloc_sync 00:10:33.515 EAL: No shared files mode enabled, IPC is disabled 00:10:33.515 EAL: Heap on socket 0 was shrunk by 18MB 00:10:33.515 EAL: Trying to obtain current memory policy. 00:10:33.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:33.515 EAL: Restoring previous memory policy: 4 00:10:33.515 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.515 EAL: request: mp_malloc_sync 00:10:33.515 EAL: No shared files mode enabled, IPC is disabled 00:10:33.515 EAL: Heap on socket 0 was expanded by 34MB 00:10:33.515 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.515 EAL: request: mp_malloc_sync 00:10:33.515 EAL: No shared files mode enabled, IPC is disabled 00:10:33.515 EAL: Heap on socket 0 was shrunk by 34MB 00:10:33.515 EAL: Trying to obtain current memory policy. 00:10:33.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:33.515 EAL: Restoring previous memory policy: 4 00:10:33.515 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.515 EAL: request: mp_malloc_sync 00:10:33.515 EAL: No shared files mode enabled, IPC is disabled 00:10:33.515 EAL: Heap on socket 0 was expanded by 66MB 00:10:33.515 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.515 EAL: request: mp_malloc_sync 00:10:33.515 EAL: No shared files mode enabled, IPC is disabled 00:10:33.515 EAL: Heap on socket 0 was shrunk by 66MB 00:10:33.515 EAL: Trying to obtain current memory policy. 00:10:33.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:33.515 EAL: Restoring previous memory policy: 4 00:10:33.515 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.515 EAL: request: mp_malloc_sync 00:10:33.515 EAL: No shared files mode enabled, IPC is disabled 00:10:33.515 EAL: Heap on socket 0 was expanded by 130MB 00:10:33.515 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.772 EAL: request: mp_malloc_sync 00:10:33.772 EAL: No shared files mode enabled, IPC is disabled 00:10:33.772 EAL: Heap on socket 0 was shrunk by 130MB 00:10:33.772 EAL: Trying to obtain current memory policy. 00:10:33.772 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:33.772 EAL: Restoring previous memory policy: 4 00:10:33.772 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.772 EAL: request: mp_malloc_sync 00:10:33.772 EAL: No shared files mode enabled, IPC is disabled 00:10:33.772 EAL: Heap on socket 0 was expanded by 258MB 00:10:33.772 EAL: Calling mem event callback 'spdk:(nil)' 00:10:33.772 EAL: request: mp_malloc_sync 00:10:33.772 EAL: No shared files mode enabled, IPC is disabled 00:10:33.772 EAL: Heap on socket 0 was shrunk by 258MB 00:10:33.772 EAL: Trying to obtain current memory policy. 
00:10:33.772 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:34.029 EAL: Restoring previous memory policy: 4 00:10:34.029 EAL: Calling mem event callback 'spdk:(nil)' 00:10:34.029 EAL: request: mp_malloc_sync 00:10:34.029 EAL: No shared files mode enabled, IPC is disabled 00:10:34.029 EAL: Heap on socket 0 was expanded by 514MB 00:10:34.029 EAL: Calling mem event callback 'spdk:(nil)' 00:10:34.029 EAL: request: mp_malloc_sync 00:10:34.029 EAL: No shared files mode enabled, IPC is disabled 00:10:34.029 EAL: Heap on socket 0 was shrunk by 514MB 00:10:34.029 EAL: Trying to obtain current memory policy. 00:10:34.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:34.287 EAL: Restoring previous memory policy: 4 00:10:34.287 EAL: Calling mem event callback 'spdk:(nil)' 00:10:34.287 EAL: request: mp_malloc_sync 00:10:34.287 EAL: No shared files mode enabled, IPC is disabled 00:10:34.287 EAL: Heap on socket 0 was expanded by 1026MB 00:10:34.544 EAL: Calling mem event callback 'spdk:(nil)' 00:10:34.801 EAL: request: mp_malloc_sync 00:10:34.801 EAL: No shared files mode enabled, IPC is disabled 00:10:34.801 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:34.801 passed 00:10:34.801 00:10:34.801 Run Summary: Type Total Ran Passed Failed Inactive 00:10:34.801 suites 1 1 n/a 0 0 00:10:34.801 tests 2 2 2 0 0 00:10:34.801 asserts 5449 5449 5449 0 n/a 00:10:34.801 00:10:34.801 Elapsed time = 1.308 seconds 00:10:34.801 EAL: Calling mem event callback 'spdk:(nil)' 00:10:34.801 EAL: request: mp_malloc_sync 00:10:34.801 EAL: No shared files mode enabled, IPC is disabled 00:10:34.801 EAL: Heap on socket 0 was shrunk by 2MB 00:10:34.801 EAL: No shared files mode enabled, IPC is disabled 00:10:34.801 EAL: No shared files mode enabled, IPC is disabled 00:10:34.801 EAL: No shared files mode enabled, IPC is disabled 00:10:34.801 00:10:34.801 real 0m1.549s 00:10:34.801 user 0m0.804s 00:10:34.801 sys 0m0.625s 00:10:34.801 15:04:30 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.801 15:04:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:34.801 ************************************ 00:10:34.801 END TEST env_vtophys 00:10:34.801 ************************************ 00:10:34.801 15:04:30 env -- common/autotest_common.sh@1142 -- # return 0 00:10:34.801 15:04:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:34.801 15:04:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:34.801 15:04:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.801 15:04:30 env -- common/autotest_common.sh@10 -- # set +x 00:10:34.801 ************************************ 00:10:34.801 START TEST env_pci 00:10:34.801 ************************************ 00:10:34.801 15:04:30 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:34.801 00:10:34.801 00:10:34.801 CUnit - A unit testing framework for C - Version 2.1-3 00:10:34.801 http://cunit.sourceforge.net/ 00:10:34.801 00:10:34.801 00:10:34.801 Suite: pci 00:10:34.801 Test: pci_hook ...[2024-07-23 15:04:30.120064] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 80378 has claimed it 00:10:34.801 passed 00:10:34.801 00:10:34.801 EAL: Cannot find device (10000:00:01.0) 00:10:34.801 EAL: Failed to attach device on primary process 00:10:34.801 Run Summary: Type Total Ran Passed Failed 
Inactive 00:10:34.801 suites 1 1 n/a 0 0 00:10:34.801 tests 1 1 1 0 0 00:10:34.801 asserts 25 25 25 0 n/a 00:10:34.801 00:10:34.801 Elapsed time = 0.007 seconds 00:10:34.801 00:10:34.802 real 0m0.071s 00:10:34.802 user 0m0.026s 00:10:34.802 sys 0m0.045s 00:10:34.802 15:04:30 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.802 15:04:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:10:34.802 ************************************ 00:10:34.802 END TEST env_pci 00:10:34.802 ************************************ 00:10:34.802 15:04:30 env -- common/autotest_common.sh@1142 -- # return 0 00:10:34.802 15:04:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:34.802 15:04:30 env -- env/env.sh@15 -- # uname 00:10:34.802 15:04:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:34.802 15:04:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:10:34.802 15:04:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:34.802 15:04:30 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:34.802 15:04:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.802 15:04:30 env -- common/autotest_common.sh@10 -- # set +x 00:10:34.802 ************************************ 00:10:34.802 START TEST env_dpdk_post_init 00:10:34.802 ************************************ 00:10:34.802 15:04:30 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:35.059 EAL: Detected CPU lcores: 10 00:10:35.059 EAL: Detected NUMA nodes: 1 00:10:35.059 EAL: Detected static linkage of DPDK 00:10:35.059 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:35.059 EAL: Selected IOVA mode 'PA' 00:10:35.059 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:35.059 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:10:35.059 Starting DPDK initialization... 00:10:35.059 Starting SPDK post initialization... 00:10:35.059 SPDK NVMe probe 00:10:35.059 Attaching to 0000:00:10.0 00:10:35.059 Attached to 0000:00:10.0 00:10:35.059 Cleaning up... 
00:10:35.059 00:10:35.059 real 0m0.222s 00:10:35.059 user 0m0.054s 00:10:35.059 sys 0m0.069s 00:10:35.059 15:04:30 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.059 15:04:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:35.059 ************************************ 00:10:35.059 END TEST env_dpdk_post_init 00:10:35.059 ************************************ 00:10:35.059 15:04:30 env -- common/autotest_common.sh@1142 -- # return 0 00:10:35.059 15:04:30 env -- env/env.sh@26 -- # uname 00:10:35.317 15:04:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:35.317 15:04:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:35.317 15:04:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:35.317 15:04:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.317 15:04:30 env -- common/autotest_common.sh@10 -- # set +x 00:10:35.317 ************************************ 00:10:35.317 START TEST env_mem_callbacks 00:10:35.317 ************************************ 00:10:35.317 15:04:30 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:35.317 EAL: Detected CPU lcores: 10 00:10:35.317 EAL: Detected NUMA nodes: 1 00:10:35.317 EAL: Detected static linkage of DPDK 00:10:35.317 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:35.317 EAL: Selected IOVA mode 'PA' 00:10:35.317 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:35.317 00:10:35.317 00:10:35.317 CUnit - A unit testing framework for C - Version 2.1-3 00:10:35.317 http://cunit.sourceforge.net/ 00:10:35.317 00:10:35.317 00:10:35.317 Suite: memory 00:10:35.317 Test: test ... 
00:10:35.317 register 0x200000200000 2097152 00:10:35.317 malloc 3145728 00:10:35.317 register 0x200000400000 4194304 00:10:35.317 buf 0x200000500000 len 3145728 PASSED 00:10:35.317 malloc 64 00:10:35.317 buf 0x2000004fff40 len 64 PASSED 00:10:35.317 malloc 4194304 00:10:35.317 register 0x200000800000 6291456 00:10:35.317 buf 0x200000a00000 len 4194304 PASSED 00:10:35.317 free 0x200000500000 3145728 00:10:35.317 free 0x2000004fff40 64 00:10:35.317 unregister 0x200000400000 4194304 PASSED 00:10:35.317 free 0x200000a00000 4194304 00:10:35.317 unregister 0x200000800000 6291456 PASSED 00:10:35.317 malloc 8388608 00:10:35.317 register 0x200000400000 10485760 00:10:35.317 buf 0x200000600000 len 8388608 PASSED 00:10:35.317 free 0x200000600000 8388608 00:10:35.317 unregister 0x200000400000 10485760 PASSED 00:10:35.317 passed 00:10:35.317 00:10:35.317 Run Summary: Type Total Ran Passed Failed Inactive 00:10:35.317 suites 1 1 n/a 0 0 00:10:35.317 tests 1 1 1 0 0 00:10:35.317 asserts 15 15 15 0 n/a 00:10:35.317 00:10:35.317 Elapsed time = 0.012 seconds 00:10:35.317 00:10:35.317 real 0m0.181s 00:10:35.317 user 0m0.032s 00:10:35.317 sys 0m0.050s 00:10:35.317 15:04:30 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.317 15:04:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:35.317 ************************************ 00:10:35.317 END TEST env_mem_callbacks 00:10:35.317 ************************************ 00:10:35.317 15:04:30 env -- common/autotest_common.sh@1142 -- # return 0 00:10:35.317 00:10:35.317 real 0m2.811s 00:10:35.317 user 0m1.417s 00:10:35.317 sys 0m1.086s 00:10:35.317 15:04:30 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.317 15:04:30 env -- common/autotest_common.sh@10 -- # set +x 00:10:35.317 ************************************ 00:10:35.317 END TEST env 00:10:35.317 ************************************ 00:10:35.575 15:04:30 -- common/autotest_common.sh@1142 -- # return 0 00:10:35.575 15:04:30 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:35.575 15:04:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:35.575 15:04:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.575 15:04:30 -- common/autotest_common.sh@10 -- # set +x 00:10:35.575 ************************************ 00:10:35.575 START TEST rpc 00:10:35.575 ************************************ 00:10:35.575 15:04:30 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:35.575 * Looking for test storage... 00:10:35.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:35.575 15:04:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=80502 00:10:35.575 15:04:30 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:10:35.575 15:04:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:35.575 15:04:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 80502 00:10:35.575 15:04:30 rpc -- common/autotest_common.sh@829 -- # '[' -z 80502 ']' 00:10:35.575 15:04:30 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.575 15:04:30 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:35.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.575 15:04:30 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:35.575 15:04:30 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:35.575 15:04:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.575 [2024-07-23 15:04:30.963520] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:10:35.575 [2024-07-23 15:04:30.963781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80502 ] 00:10:35.831 [2024-07-23 15:04:31.117665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.831 [2024-07-23 15:04:31.178225] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:35.831 [2024-07-23 15:04:31.178343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 80502' to capture a snapshot of events at runtime. 00:10:35.831 [2024-07-23 15:04:31.178366] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.831 [2024-07-23 15:04:31.178389] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.831 [2024-07-23 15:04:31.178416] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid80502 for offline analysis/debug. 00:10:35.831 [2024-07-23 15:04:31.178500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.764 15:04:31 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:36.764 15:04:31 rpc -- common/autotest_common.sh@862 -- # return 0 00:10:36.764 15:04:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:36.764 15:04:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:36.764 15:04:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:36.764 15:04:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:36.764 15:04:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:36.764 15:04:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.764 15:04:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.764 ************************************ 00:10:36.764 START TEST rpc_integrity 00:10:36.764 ************************************ 00:10:36.764 15:04:31 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:10:36.764 15:04:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:36.764 15:04:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.764 15:04:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.764 15:04:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.764 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:36.764 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:36.764 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:36.764 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:36.764 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.764 15:04:32 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.764 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.764 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:36.764 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:36.764 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.764 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.764 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.764 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:36.764 { 00:10:36.764 "name": "Malloc0", 00:10:36.764 "aliases": [ 00:10:36.764 "036b70b6-d27e-4e79-9d12-a42aacdfb357" 00:10:36.764 ], 00:10:36.764 "product_name": "Malloc disk", 00:10:36.764 "block_size": 512, 00:10:36.764 "num_blocks": 16384, 00:10:36.764 "uuid": "036b70b6-d27e-4e79-9d12-a42aacdfb357", 00:10:36.764 "assigned_rate_limits": { 00:10:36.764 "rw_ios_per_sec": 0, 00:10:36.764 "rw_mbytes_per_sec": 0, 00:10:36.764 "r_mbytes_per_sec": 0, 00:10:36.764 "w_mbytes_per_sec": 0 00:10:36.764 }, 00:10:36.764 "claimed": false, 00:10:36.764 "zoned": false, 00:10:36.764 "supported_io_types": { 00:10:36.764 "read": true, 00:10:36.764 "write": true, 00:10:36.764 "unmap": true, 00:10:36.764 "flush": true, 00:10:36.764 "reset": true, 00:10:36.764 "nvme_admin": false, 00:10:36.764 "nvme_io": false, 00:10:36.764 "nvme_io_md": false, 00:10:36.764 "write_zeroes": true, 00:10:36.764 "zcopy": true, 00:10:36.764 "get_zone_info": false, 00:10:36.764 "zone_management": false, 00:10:36.764 "zone_append": false, 00:10:36.764 "compare": false, 00:10:36.764 "compare_and_write": false, 00:10:36.764 "abort": true, 00:10:36.764 "seek_hole": false, 00:10:36.764 "seek_data": false, 00:10:36.764 "copy": true, 00:10:36.764 "nvme_iov_md": false 00:10:36.764 }, 00:10:36.764 "memory_domains": [ 00:10:36.764 { 00:10:36.764 "dma_device_id": "system", 00:10:36.764 "dma_device_type": 1 00:10:36.764 }, 00:10:36.764 { 00:10:36.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.764 "dma_device_type": 2 00:10:36.764 } 00:10:36.764 ], 00:10:36.764 "driver_specific": {} 00:10:36.764 } 00:10:36.764 ]' 00:10:36.764 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:36.764 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:36.764 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:36.764 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.764 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.764 [2024-07-23 15:04:32.069354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:36.764 [2024-07-23 15:04:32.069469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.764 [2024-07-23 15:04:32.069516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006080 00:10:36.764 [2024-07-23 15:04:32.069561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.764 [2024-07-23 15:04:32.072626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.764 [2024-07-23 15:04:32.072689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:36.764 Passthru0 00:10:36.764 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.764 
15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:36.764 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.764 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.764 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.764 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:36.764 { 00:10:36.764 "name": "Malloc0", 00:10:36.764 "aliases": [ 00:10:36.764 "036b70b6-d27e-4e79-9d12-a42aacdfb357" 00:10:36.764 ], 00:10:36.764 "product_name": "Malloc disk", 00:10:36.764 "block_size": 512, 00:10:36.764 "num_blocks": 16384, 00:10:36.764 "uuid": "036b70b6-d27e-4e79-9d12-a42aacdfb357", 00:10:36.764 "assigned_rate_limits": { 00:10:36.764 "rw_ios_per_sec": 0, 00:10:36.764 "rw_mbytes_per_sec": 0, 00:10:36.764 "r_mbytes_per_sec": 0, 00:10:36.764 "w_mbytes_per_sec": 0 00:10:36.764 }, 00:10:36.764 "claimed": true, 00:10:36.764 "claim_type": "exclusive_write", 00:10:36.764 "zoned": false, 00:10:36.764 "supported_io_types": { 00:10:36.764 "read": true, 00:10:36.764 "write": true, 00:10:36.764 "unmap": true, 00:10:36.764 "flush": true, 00:10:36.764 "reset": true, 00:10:36.764 "nvme_admin": false, 00:10:36.764 "nvme_io": false, 00:10:36.764 "nvme_io_md": false, 00:10:36.764 "write_zeroes": true, 00:10:36.764 "zcopy": true, 00:10:36.764 "get_zone_info": false, 00:10:36.764 "zone_management": false, 00:10:36.764 "zone_append": false, 00:10:36.764 "compare": false, 00:10:36.764 "compare_and_write": false, 00:10:36.764 "abort": true, 00:10:36.764 "seek_hole": false, 00:10:36.764 "seek_data": false, 00:10:36.764 "copy": true, 00:10:36.764 "nvme_iov_md": false 00:10:36.764 }, 00:10:36.764 "memory_domains": [ 00:10:36.764 { 00:10:36.764 "dma_device_id": "system", 00:10:36.764 "dma_device_type": 1 00:10:36.764 }, 00:10:36.764 { 00:10:36.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.764 "dma_device_type": 2 00:10:36.764 } 00:10:36.764 ], 00:10:36.764 "driver_specific": {} 00:10:36.764 }, 00:10:36.764 { 00:10:36.764 "name": "Passthru0", 00:10:36.764 "aliases": [ 00:10:36.764 "256f73cc-f707-5ff8-86db-85dea87493ee" 00:10:36.764 ], 00:10:36.764 "product_name": "passthru", 00:10:36.764 "block_size": 512, 00:10:36.764 "num_blocks": 16384, 00:10:36.764 "uuid": "256f73cc-f707-5ff8-86db-85dea87493ee", 00:10:36.764 "assigned_rate_limits": { 00:10:36.764 "rw_ios_per_sec": 0, 00:10:36.764 "rw_mbytes_per_sec": 0, 00:10:36.764 "r_mbytes_per_sec": 0, 00:10:36.764 "w_mbytes_per_sec": 0 00:10:36.764 }, 00:10:36.764 "claimed": false, 00:10:36.764 "zoned": false, 00:10:36.764 "supported_io_types": { 00:10:36.764 "read": true, 00:10:36.764 "write": true, 00:10:36.764 "unmap": true, 00:10:36.764 "flush": true, 00:10:36.764 "reset": true, 00:10:36.764 "nvme_admin": false, 00:10:36.764 "nvme_io": false, 00:10:36.764 "nvme_io_md": false, 00:10:36.764 "write_zeroes": true, 00:10:36.764 "zcopy": true, 00:10:36.764 "get_zone_info": false, 00:10:36.764 "zone_management": false, 00:10:36.764 "zone_append": false, 00:10:36.764 "compare": false, 00:10:36.764 "compare_and_write": false, 00:10:36.764 "abort": true, 00:10:36.764 "seek_hole": false, 00:10:36.764 "seek_data": false, 00:10:36.764 "copy": true, 00:10:36.764 "nvme_iov_md": false 00:10:36.764 }, 00:10:36.765 "memory_domains": [ 00:10:36.765 { 00:10:36.765 "dma_device_id": "system", 00:10:36.765 "dma_device_type": 1 00:10:36.765 }, 00:10:36.765 { 00:10:36.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.765 "dma_device_type": 2 
00:10:36.765 } 00:10:36.765 ], 00:10:36.765 "driver_specific": { 00:10:36.765 "passthru": { 00:10:36.765 "name": "Passthru0", 00:10:36.765 "base_bdev_name": "Malloc0" 00:10:36.765 } 00:10:36.765 } 00:10:36.765 } 00:10:36.765 ]' 00:10:36.765 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:36.765 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:36.765 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:36.765 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.765 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.765 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.765 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:36.765 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.765 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.765 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.765 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:36.765 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.765 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.765 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.765 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:36.765 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:36.765 15:04:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:36.765 00:10:36.765 real 0m0.179s 00:10:36.765 user 0m0.049s 00:10:36.765 sys 0m0.065s 00:10:36.765 ************************************ 00:10:36.765 END TEST rpc_integrity 00:10:36.765 ************************************ 00:10:36.765 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.765 15:04:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:37.023 15:04:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:37.023 15:04:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:37.023 15:04:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:37.023 15:04:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.023 15:04:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.023 ************************************ 00:10:37.023 START TEST rpc_plugins 00:10:37.023 ************************************ 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:10:37.023 15:04:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.023 15:04:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:37.023 15:04:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.023 15:04:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
bdevs='[ 00:10:37.023 { 00:10:37.023 "name": "Malloc1", 00:10:37.023 "aliases": [ 00:10:37.023 "3cf68dcf-b5bd-426d-a6d9-d6bb2a744795" 00:10:37.023 ], 00:10:37.023 "product_name": "Malloc disk", 00:10:37.023 "block_size": 4096, 00:10:37.023 "num_blocks": 256, 00:10:37.023 "uuid": "3cf68dcf-b5bd-426d-a6d9-d6bb2a744795", 00:10:37.023 "assigned_rate_limits": { 00:10:37.023 "rw_ios_per_sec": 0, 00:10:37.023 "rw_mbytes_per_sec": 0, 00:10:37.023 "r_mbytes_per_sec": 0, 00:10:37.023 "w_mbytes_per_sec": 0 00:10:37.023 }, 00:10:37.023 "claimed": false, 00:10:37.023 "zoned": false, 00:10:37.023 "supported_io_types": { 00:10:37.023 "read": true, 00:10:37.023 "write": true, 00:10:37.023 "unmap": true, 00:10:37.023 "flush": true, 00:10:37.023 "reset": true, 00:10:37.023 "nvme_admin": false, 00:10:37.023 "nvme_io": false, 00:10:37.023 "nvme_io_md": false, 00:10:37.023 "write_zeroes": true, 00:10:37.023 "zcopy": true, 00:10:37.023 "get_zone_info": false, 00:10:37.023 "zone_management": false, 00:10:37.023 "zone_append": false, 00:10:37.023 "compare": false, 00:10:37.023 "compare_and_write": false, 00:10:37.023 "abort": true, 00:10:37.023 "seek_hole": false, 00:10:37.023 "seek_data": false, 00:10:37.023 "copy": true, 00:10:37.023 "nvme_iov_md": false 00:10:37.023 }, 00:10:37.023 "memory_domains": [ 00:10:37.023 { 00:10:37.023 "dma_device_id": "system", 00:10:37.023 "dma_device_type": 1 00:10:37.023 }, 00:10:37.023 { 00:10:37.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.023 "dma_device_type": 2 00:10:37.023 } 00:10:37.023 ], 00:10:37.023 "driver_specific": {} 00:10:37.023 } 00:10:37.023 ]' 00:10:37.023 15:04:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:37.023 15:04:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:37.023 15:04:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.023 15:04:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.023 15:04:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:37.023 15:04:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:37.023 15:04:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:37.023 00:10:37.023 real 0m0.086s 00:10:37.023 user 0m0.030s 00:10:37.023 sys 0m0.022s 00:10:37.023 ************************************ 00:10:37.023 END TEST rpc_plugins 00:10:37.023 ************************************ 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:37.023 15:04:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:37.023 15:04:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:37.023 15:04:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:37.023 15:04:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:37.023 15:04:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.023 15:04:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.023 ************************************ 00:10:37.023 
START TEST rpc_trace_cmd_test 00:10:37.023 ************************************ 00:10:37.023 15:04:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:10:37.023 15:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:37.023 15:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:37.023 15:04:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.023 15:04:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.023 15:04:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.023 15:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:37.023 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid80502", 00:10:37.023 "tpoint_group_mask": "0x8", 00:10:37.023 "iscsi_conn": { 00:10:37.023 "mask": "0x2", 00:10:37.023 "tpoint_mask": "0x0" 00:10:37.023 }, 00:10:37.023 "scsi": { 00:10:37.023 "mask": "0x4", 00:10:37.023 "tpoint_mask": "0x0" 00:10:37.023 }, 00:10:37.023 "bdev": { 00:10:37.023 "mask": "0x8", 00:10:37.023 "tpoint_mask": "0xffffffffffffffff" 00:10:37.023 }, 00:10:37.023 "nvmf_rdma": { 00:10:37.023 "mask": "0x10", 00:10:37.023 "tpoint_mask": "0x0" 00:10:37.023 }, 00:10:37.023 "nvmf_tcp": { 00:10:37.023 "mask": "0x20", 00:10:37.023 "tpoint_mask": "0x0" 00:10:37.023 }, 00:10:37.023 "ftl": { 00:10:37.023 "mask": "0x40", 00:10:37.023 "tpoint_mask": "0x0" 00:10:37.023 }, 00:10:37.023 "blobfs": { 00:10:37.023 "mask": "0x80", 00:10:37.023 "tpoint_mask": "0x0" 00:10:37.024 }, 00:10:37.024 "dsa": { 00:10:37.024 "mask": "0x200", 00:10:37.024 "tpoint_mask": "0x0" 00:10:37.024 }, 00:10:37.024 "thread": { 00:10:37.024 "mask": "0x400", 00:10:37.024 "tpoint_mask": "0x0" 00:10:37.024 }, 00:10:37.024 "nvme_pcie": { 00:10:37.024 "mask": "0x800", 00:10:37.024 "tpoint_mask": "0x0" 00:10:37.024 }, 00:10:37.024 "iaa": { 00:10:37.024 "mask": "0x1000", 00:10:37.024 "tpoint_mask": "0x0" 00:10:37.024 }, 00:10:37.024 "nvme_tcp": { 00:10:37.024 "mask": "0x2000", 00:10:37.024 "tpoint_mask": "0x0" 00:10:37.024 }, 00:10:37.024 "bdev_nvme": { 00:10:37.024 "mask": "0x4000", 00:10:37.024 "tpoint_mask": "0x0" 00:10:37.024 }, 00:10:37.024 "sock": { 00:10:37.024 "mask": "0x8000", 00:10:37.024 "tpoint_mask": "0x0" 00:10:37.024 } 00:10:37.024 }' 00:10:37.024 15:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:37.024 15:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:10:37.024 15:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:37.024 15:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:37.024 15:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:37.024 15:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:37.024 15:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:37.024 15:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:37.024 15:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:37.024 15:04:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:37.024 00:10:37.024 real 0m0.076s 00:10:37.024 user 0m0.033s 00:10:37.024 sys 0m0.038s 00:10:37.024 15:04:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:37.024 ************************************ 00:10:37.024 END TEST rpc_trace_cmd_test 00:10:37.024 ************************************ 00:10:37.024 
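The rpc_trace_cmd_test block above verifies the trace RPC against the 'bdev' tracepoint group that was enabled when spdk_tgt (pid 80502) started. A minimal by-hand sketch of the same checks, assuming scripts/rpc.py is pointed at that live target; the repo path, jq filters, and pid come from the log, while the spdk_trace binary location is an assumption:
SPDK_DIR=/home/vagrant/spdk_repo/spdk                       # repo path as it appears in this log
info=$("$SPDK_DIR"/scripts/rpc.py trace_get_info)           # same RPC the test issues via its rpc_cmd wrapper
echo "$info" | jq -r '.tpoint_shm_path'                     # -> /dev/shm/spdk_tgt_trace.pid80502
echo "$info" | jq -r '.bdev.tpoint_mask'                    # -> 0xffffffffffffffff while the 'bdev' group is enabled
"$SPDK_DIR"/build/bin/spdk_trace -s spdk_tgt -p 80502       # offline snapshot, per the startup notice above (binary path assumed)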
15:04:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.282 15:04:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:37.282 15:04:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:37.282 15:04:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:37.282 15:04:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:37.282 15:04:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:37.282 15:04:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.282 15:04:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.282 ************************************ 00:10:37.282 START TEST rpc_daemon_integrity 00:10:37.282 ************************************ 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.282 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:37.282 { 00:10:37.282 "name": "Malloc2", 00:10:37.282 "aliases": [ 00:10:37.282 "31d29fc7-0b11-4b86-85a7-78eaac8595d0" 00:10:37.282 ], 00:10:37.282 "product_name": "Malloc disk", 00:10:37.282 "block_size": 512, 00:10:37.282 "num_blocks": 16384, 00:10:37.282 "uuid": "31d29fc7-0b11-4b86-85a7-78eaac8595d0", 00:10:37.282 "assigned_rate_limits": { 00:10:37.282 "rw_ios_per_sec": 0, 00:10:37.282 "rw_mbytes_per_sec": 0, 00:10:37.282 "r_mbytes_per_sec": 0, 00:10:37.282 "w_mbytes_per_sec": 0 00:10:37.282 }, 00:10:37.282 "claimed": false, 00:10:37.282 "zoned": false, 00:10:37.282 "supported_io_types": { 00:10:37.282 "read": true, 00:10:37.282 "write": true, 00:10:37.282 "unmap": true, 00:10:37.282 "flush": true, 00:10:37.282 "reset": true, 00:10:37.282 "nvme_admin": false, 00:10:37.282 "nvme_io": false, 00:10:37.282 "nvme_io_md": false, 00:10:37.282 "write_zeroes": true, 00:10:37.282 "zcopy": true, 00:10:37.282 "get_zone_info": false, 00:10:37.282 "zone_management": false, 00:10:37.282 "zone_append": false, 00:10:37.282 "compare": false, 00:10:37.282 "compare_and_write": false, 00:10:37.282 "abort": true, 00:10:37.282 "seek_hole": false, 
00:10:37.282 "seek_data": false, 00:10:37.282 "copy": true, 00:10:37.282 "nvme_iov_md": false 00:10:37.282 }, 00:10:37.283 "memory_domains": [ 00:10:37.283 { 00:10:37.283 "dma_device_id": "system", 00:10:37.283 "dma_device_type": 1 00:10:37.283 }, 00:10:37.283 { 00:10:37.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.283 "dma_device_type": 2 00:10:37.283 } 00:10:37.283 ], 00:10:37.283 "driver_specific": {} 00:10:37.283 } 00:10:37.283 ]' 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:37.283 [2024-07-23 15:04:32.563384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:37.283 [2024-07-23 15:04:32.563479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.283 [2024-07-23 15:04:32.563512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:10:37.283 [2024-07-23 15:04:32.563543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.283 [2024-07-23 15:04:32.566476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.283 [2024-07-23 15:04:32.566540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:37.283 Passthru0 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:37.283 { 00:10:37.283 "name": "Malloc2", 00:10:37.283 "aliases": [ 00:10:37.283 "31d29fc7-0b11-4b86-85a7-78eaac8595d0" 00:10:37.283 ], 00:10:37.283 "product_name": "Malloc disk", 00:10:37.283 "block_size": 512, 00:10:37.283 "num_blocks": 16384, 00:10:37.283 "uuid": "31d29fc7-0b11-4b86-85a7-78eaac8595d0", 00:10:37.283 "assigned_rate_limits": { 00:10:37.283 "rw_ios_per_sec": 0, 00:10:37.283 "rw_mbytes_per_sec": 0, 00:10:37.283 "r_mbytes_per_sec": 0, 00:10:37.283 "w_mbytes_per_sec": 0 00:10:37.283 }, 00:10:37.283 "claimed": true, 00:10:37.283 "claim_type": "exclusive_write", 00:10:37.283 "zoned": false, 00:10:37.283 "supported_io_types": { 00:10:37.283 "read": true, 00:10:37.283 "write": true, 00:10:37.283 "unmap": true, 00:10:37.283 "flush": true, 00:10:37.283 "reset": true, 00:10:37.283 "nvme_admin": false, 00:10:37.283 "nvme_io": false, 00:10:37.283 "nvme_io_md": false, 00:10:37.283 "write_zeroes": true, 00:10:37.283 "zcopy": true, 00:10:37.283 "get_zone_info": false, 00:10:37.283 "zone_management": false, 00:10:37.283 "zone_append": false, 00:10:37.283 "compare": false, 00:10:37.283 "compare_and_write": false, 00:10:37.283 "abort": true, 00:10:37.283 "seek_hole": false, 00:10:37.283 "seek_data": false, 00:10:37.283 "copy": true, 00:10:37.283 "nvme_iov_md": false 00:10:37.283 }, 00:10:37.283 
"memory_domains": [ 00:10:37.283 { 00:10:37.283 "dma_device_id": "system", 00:10:37.283 "dma_device_type": 1 00:10:37.283 }, 00:10:37.283 { 00:10:37.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.283 "dma_device_type": 2 00:10:37.283 } 00:10:37.283 ], 00:10:37.283 "driver_specific": {} 00:10:37.283 }, 00:10:37.283 { 00:10:37.283 "name": "Passthru0", 00:10:37.283 "aliases": [ 00:10:37.283 "1e9bc76b-038e-58e7-8d70-dd95b3f35f7f" 00:10:37.283 ], 00:10:37.283 "product_name": "passthru", 00:10:37.283 "block_size": 512, 00:10:37.283 "num_blocks": 16384, 00:10:37.283 "uuid": "1e9bc76b-038e-58e7-8d70-dd95b3f35f7f", 00:10:37.283 "assigned_rate_limits": { 00:10:37.283 "rw_ios_per_sec": 0, 00:10:37.283 "rw_mbytes_per_sec": 0, 00:10:37.283 "r_mbytes_per_sec": 0, 00:10:37.283 "w_mbytes_per_sec": 0 00:10:37.283 }, 00:10:37.283 "claimed": false, 00:10:37.283 "zoned": false, 00:10:37.283 "supported_io_types": { 00:10:37.283 "read": true, 00:10:37.283 "write": true, 00:10:37.283 "unmap": true, 00:10:37.283 "flush": true, 00:10:37.283 "reset": true, 00:10:37.283 "nvme_admin": false, 00:10:37.283 "nvme_io": false, 00:10:37.283 "nvme_io_md": false, 00:10:37.283 "write_zeroes": true, 00:10:37.283 "zcopy": true, 00:10:37.283 "get_zone_info": false, 00:10:37.283 "zone_management": false, 00:10:37.283 "zone_append": false, 00:10:37.283 "compare": false, 00:10:37.283 "compare_and_write": false, 00:10:37.283 "abort": true, 00:10:37.283 "seek_hole": false, 00:10:37.283 "seek_data": false, 00:10:37.283 "copy": true, 00:10:37.283 "nvme_iov_md": false 00:10:37.283 }, 00:10:37.283 "memory_domains": [ 00:10:37.283 { 00:10:37.283 "dma_device_id": "system", 00:10:37.283 "dma_device_type": 1 00:10:37.283 }, 00:10:37.283 { 00:10:37.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.283 "dma_device_type": 2 00:10:37.283 } 00:10:37.283 ], 00:10:37.283 "driver_specific": { 00:10:37.283 "passthru": { 00:10:37.283 "name": "Passthru0", 00:10:37.283 "base_bdev_name": "Malloc2" 00:10:37.283 } 00:10:37.283 } 00:10:37.283 } 00:10:37.283 ]' 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:37.283 
15:04:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:37.283 00:10:37.283 real 0m0.164s 00:10:37.283 user 0m0.058s 00:10:37.283 sys 0m0.043s 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:37.283 15:04:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:37.283 ************************************ 00:10:37.283 END TEST rpc_daemon_integrity 00:10:37.283 ************************************ 00:10:37.283 15:04:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:37.283 15:04:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:37.283 15:04:32 rpc -- rpc/rpc.sh@84 -- # killprocess 80502 00:10:37.283 15:04:32 rpc -- common/autotest_common.sh@948 -- # '[' -z 80502 ']' 00:10:37.283 15:04:32 rpc -- common/autotest_common.sh@952 -- # kill -0 80502 00:10:37.283 15:04:32 rpc -- common/autotest_common.sh@953 -- # uname 00:10:37.283 15:04:32 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:37.283 15:04:32 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80502 00:10:37.541 15:04:32 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:37.541 15:04:32 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:37.541 15:04:32 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80502' 00:10:37.541 killing process with pid 80502 00:10:37.541 15:04:32 rpc -- common/autotest_common.sh@967 -- # kill 80502 00:10:37.541 15:04:32 rpc -- common/autotest_common.sh@972 -- # wait 80502 00:10:37.799 00:10:37.799 real 0m2.379s 00:10:37.799 user 0m2.620s 00:10:37.799 sys 0m0.853s 00:10:37.799 15:04:33 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:37.799 15:04:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.799 ************************************ 00:10:37.799 END TEST rpc 00:10:37.799 ************************************ 00:10:37.799 15:04:33 -- common/autotest_common.sh@1142 -- # return 0 00:10:37.799 15:04:33 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:37.799 15:04:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:37.799 15:04:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.799 15:04:33 -- common/autotest_common.sh@10 -- # set +x 00:10:37.799 ************************************ 00:10:37.799 START TEST skip_rpc 00:10:37.799 ************************************ 00:10:37.799 15:04:33 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:38.057 * Looking for test storage... 
00:10:38.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:38.057 15:04:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:38.057 15:04:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:38.057 15:04:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:38.057 15:04:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:38.057 15:04:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.057 15:04:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.057 ************************************ 00:10:38.057 START TEST skip_rpc 00:10:38.057 ************************************ 00:10:38.057 15:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:10:38.057 15:04:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=80803 00:10:38.057 15:04:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:38.057 15:04:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:38.057 15:04:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:38.057 [2024-07-23 15:04:33.390405] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:10:38.057 [2024-07-23 15:04:33.390612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80803 ] 00:10:38.315 [2024-07-23 15:04:33.546393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.315 [2024-07-23 15:04:33.606669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 80803 
00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 80803 ']' 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 80803 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:10:43.633 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:43.634 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80803 00:10:43.634 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:43.634 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:43.634 killing process with pid 80803 00:10:43.634 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80803' 00:10:43.634 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 80803 00:10:43.634 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 80803 00:10:43.634 00:10:43.634 real 0m5.463s 00:10:43.634 user 0m5.024s 00:10:43.634 sys 0m0.375s 00:10:43.634 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:43.634 15:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.634 ************************************ 00:10:43.634 END TEST skip_rpc 00:10:43.634 ************************************ 00:10:43.634 15:04:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:43.634 15:04:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:43.634 15:04:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:43.634 15:04:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.634 15:04:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.634 ************************************ 00:10:43.634 START TEST skip_rpc_with_json 00:10:43.634 ************************************ 00:10:43.634 15:04:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:10:43.634 15:04:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:43.634 15:04:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=80885 00:10:43.634 15:04:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:43.634 15:04:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 80885 00:10:43.634 15:04:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 80885 ']' 00:10:43.634 15:04:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.634 15:04:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:43.634 15:04:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:43.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.634 15:04:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:43.634 15:04:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:43.634 15:04:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:43.634 [2024-07-23 15:04:38.905216] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:10:43.634 [2024-07-23 15:04:38.905451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80885 ] 00:10:43.892 [2024-07-23 15:04:39.062468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.892 [2024-07-23 15:04:39.122366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.458 15:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:44.458 15:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:10:44.458 15:04:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:44.458 15:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.458 15:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:44.458 [2024-07-23 15:04:39.827719] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:44.458 request: 00:10:44.458 { 00:10:44.458 "trtype": "tcp", 00:10:44.458 "method": "nvmf_get_transports", 00:10:44.458 "req_id": 1 00:10:44.458 } 00:10:44.458 Got JSON-RPC error response 00:10:44.458 response: 00:10:44.458 { 00:10:44.458 "code": -19, 00:10:44.458 "message": "No such device" 00:10:44.458 } 00:10:44.458 15:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:44.458 15:04:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:44.458 15:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.458 15:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:44.458 [2024-07-23 15:04:39.839849] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.458 15:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.458 15:04:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:44.458 15:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.458 15:04:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:44.717 15:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.717 15:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:44.717 { 00:10:44.717 "subsystems": [ 00:10:44.717 { 00:10:44.717 "subsystem": "scheduler", 00:10:44.717 "config": [ 00:10:44.717 { 00:10:44.717 "method": "framework_set_scheduler", 00:10:44.717 "params": { 00:10:44.717 "name": "static" 00:10:44.717 } 00:10:44.717 } 00:10:44.717 ] 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "subsystem": "vmd", 00:10:44.717 "config": [] 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "subsystem": "sock", 00:10:44.717 "config": [ 00:10:44.717 { 00:10:44.717 "method": "sock_set_default_impl", 00:10:44.717 "params": { 00:10:44.717 "impl_name": "posix" 00:10:44.717 } 00:10:44.717 }, 
00:10:44.717 { 00:10:44.717 "method": "sock_impl_set_options", 00:10:44.717 "params": { 00:10:44.717 "impl_name": "ssl", 00:10:44.717 "recv_buf_size": 4096, 00:10:44.717 "send_buf_size": 4096, 00:10:44.717 "enable_recv_pipe": true, 00:10:44.717 "enable_quickack": false, 00:10:44.717 "enable_placement_id": 0, 00:10:44.717 "enable_zerocopy_send_server": true, 00:10:44.717 "enable_zerocopy_send_client": false, 00:10:44.717 "zerocopy_threshold": 0, 00:10:44.717 "tls_version": 0, 00:10:44.717 "enable_ktls": false 00:10:44.717 } 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "method": "sock_impl_set_options", 00:10:44.717 "params": { 00:10:44.717 "impl_name": "posix", 00:10:44.717 "recv_buf_size": 2097152, 00:10:44.717 "send_buf_size": 2097152, 00:10:44.717 "enable_recv_pipe": true, 00:10:44.717 "enable_quickack": false, 00:10:44.717 "enable_placement_id": 0, 00:10:44.717 "enable_zerocopy_send_server": true, 00:10:44.717 "enable_zerocopy_send_client": false, 00:10:44.717 "zerocopy_threshold": 0, 00:10:44.717 "tls_version": 0, 00:10:44.717 "enable_ktls": false 00:10:44.717 } 00:10:44.717 } 00:10:44.717 ] 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "subsystem": "iobuf", 00:10:44.717 "config": [ 00:10:44.717 { 00:10:44.717 "method": "iobuf_set_options", 00:10:44.717 "params": { 00:10:44.717 "small_pool_count": 8192, 00:10:44.717 "large_pool_count": 1024, 00:10:44.717 "small_bufsize": 8192, 00:10:44.717 "large_bufsize": 135168 00:10:44.717 } 00:10:44.717 } 00:10:44.717 ] 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "subsystem": "keyring", 00:10:44.717 "config": [] 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "subsystem": "accel", 00:10:44.717 "config": [ 00:10:44.717 { 00:10:44.717 "method": "accel_set_options", 00:10:44.717 "params": { 00:10:44.717 "small_cache_size": 128, 00:10:44.717 "large_cache_size": 16, 00:10:44.717 "task_count": 2048, 00:10:44.717 "sequence_count": 2048, 00:10:44.717 "buf_count": 2048 00:10:44.717 } 00:10:44.717 } 00:10:44.717 ] 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "subsystem": "bdev", 00:10:44.717 "config": [ 00:10:44.717 { 00:10:44.717 "method": "bdev_set_options", 00:10:44.717 "params": { 00:10:44.717 "bdev_io_pool_size": 65535, 00:10:44.717 "bdev_io_cache_size": 256, 00:10:44.717 "bdev_auto_examine": true, 00:10:44.717 "iobuf_small_cache_size": 128, 00:10:44.717 "iobuf_large_cache_size": 16 00:10:44.717 } 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "method": "bdev_raid_set_options", 00:10:44.717 "params": { 00:10:44.717 "process_window_size_kb": 1024, 00:10:44.717 "process_max_bandwidth_mb_sec": 0 00:10:44.717 } 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "method": "bdev_nvme_set_options", 00:10:44.717 "params": { 00:10:44.717 "action_on_timeout": "none", 00:10:44.717 "timeout_us": 0, 00:10:44.717 "timeout_admin_us": 0, 00:10:44.717 "keep_alive_timeout_ms": 10000, 00:10:44.717 "arbitration_burst": 0, 00:10:44.717 "low_priority_weight": 0, 00:10:44.717 "medium_priority_weight": 0, 00:10:44.717 "high_priority_weight": 0, 00:10:44.717 "nvme_adminq_poll_period_us": 10000, 00:10:44.717 "nvme_ioq_poll_period_us": 0, 00:10:44.717 "io_queue_requests": 0, 00:10:44.717 "delay_cmd_submit": true, 00:10:44.717 "transport_retry_count": 4, 00:10:44.717 "bdev_retry_count": 3, 00:10:44.717 "transport_ack_timeout": 0, 00:10:44.717 "ctrlr_loss_timeout_sec": 0, 00:10:44.717 "reconnect_delay_sec": 0, 00:10:44.717 "fast_io_fail_timeout_sec": 0, 00:10:44.717 "disable_auto_failback": false, 00:10:44.717 "generate_uuids": false, 00:10:44.717 "transport_tos": 0, 00:10:44.717 "nvme_error_stat": 
false, 00:10:44.717 "rdma_srq_size": 0, 00:10:44.717 "io_path_stat": false, 00:10:44.717 "allow_accel_sequence": false, 00:10:44.717 "rdma_max_cq_size": 0, 00:10:44.717 "rdma_cm_event_timeout_ms": 0, 00:10:44.717 "dhchap_digests": [ 00:10:44.717 "sha256", 00:10:44.717 "sha384", 00:10:44.717 "sha512" 00:10:44.717 ], 00:10:44.717 "dhchap_dhgroups": [ 00:10:44.717 "null", 00:10:44.717 "ffdhe2048", 00:10:44.717 "ffdhe3072", 00:10:44.717 "ffdhe4096", 00:10:44.717 "ffdhe6144", 00:10:44.717 "ffdhe8192" 00:10:44.717 ] 00:10:44.717 } 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "method": "bdev_nvme_set_hotplug", 00:10:44.717 "params": { 00:10:44.717 "period_us": 100000, 00:10:44.717 "enable": false 00:10:44.717 } 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "method": "bdev_iscsi_set_options", 00:10:44.717 "params": { 00:10:44.717 "timeout_sec": 30 00:10:44.717 } 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "method": "bdev_wait_for_examine" 00:10:44.717 } 00:10:44.717 ] 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "subsystem": "nvmf", 00:10:44.717 "config": [ 00:10:44.717 { 00:10:44.717 "method": "nvmf_set_config", 00:10:44.717 "params": { 00:10:44.717 "discovery_filter": "match_any", 00:10:44.717 "admin_cmd_passthru": { 00:10:44.717 "identify_ctrlr": false 00:10:44.717 } 00:10:44.717 } 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "method": "nvmf_set_max_subsystems", 00:10:44.717 "params": { 00:10:44.717 "max_subsystems": 1024 00:10:44.717 } 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "method": "nvmf_set_crdt", 00:10:44.717 "params": { 00:10:44.717 "crdt1": 0, 00:10:44.717 "crdt2": 0, 00:10:44.717 "crdt3": 0 00:10:44.717 } 00:10:44.717 }, 00:10:44.717 { 00:10:44.717 "method": "nvmf_create_transport", 00:10:44.717 "params": { 00:10:44.717 "trtype": "TCP", 00:10:44.717 "max_queue_depth": 128, 00:10:44.717 "max_io_qpairs_per_ctrlr": 127, 00:10:44.717 "in_capsule_data_size": 4096, 00:10:44.717 "max_io_size": 131072, 00:10:44.717 "io_unit_size": 131072, 00:10:44.717 "max_aq_depth": 128, 00:10:44.717 "num_shared_buffers": 511, 00:10:44.717 "buf_cache_size": 4294967295, 00:10:44.717 "dif_insert_or_strip": false, 00:10:44.717 "zcopy": false, 00:10:44.718 "c2h_success": true, 00:10:44.718 "sock_priority": 0, 00:10:44.718 "abort_timeout_sec": 1, 00:10:44.718 "ack_timeout": 0, 00:10:44.718 "data_wr_pool_size": 0 00:10:44.718 } 00:10:44.718 } 00:10:44.718 ] 00:10:44.718 }, 00:10:44.718 { 00:10:44.718 "subsystem": "nbd", 00:10:44.718 "config": [] 00:10:44.718 }, 00:10:44.718 { 00:10:44.718 "subsystem": "ublk", 00:10:44.718 "config": [] 00:10:44.718 }, 00:10:44.718 { 00:10:44.718 "subsystem": "vhost_blk", 00:10:44.718 "config": [] 00:10:44.718 }, 00:10:44.718 { 00:10:44.718 "subsystem": "scsi", 00:10:44.718 "config": null 00:10:44.718 }, 00:10:44.718 { 00:10:44.718 "subsystem": "iscsi", 00:10:44.718 "config": [ 00:10:44.718 { 00:10:44.718 "method": "iscsi_set_options", 00:10:44.718 "params": { 00:10:44.718 "node_base": "iqn.2016-06.io.spdk", 00:10:44.718 "max_sessions": 128, 00:10:44.718 "max_connections_per_session": 2, 00:10:44.718 "max_queue_depth": 64, 00:10:44.718 "default_time2wait": 2, 00:10:44.718 "default_time2retain": 20, 00:10:44.718 "first_burst_length": 8192, 00:10:44.718 "immediate_data": true, 00:10:44.718 "allow_duplicated_isid": false, 00:10:44.718 "error_recovery_level": 0, 00:10:44.718 "nop_timeout": 60, 00:10:44.718 "nop_in_interval": 30, 00:10:44.718 "disable_chap": false, 00:10:44.718 "require_chap": false, 00:10:44.718 "mutual_chap": false, 00:10:44.718 "chap_group": 0, 00:10:44.718 
"max_large_datain_per_connection": 64, 00:10:44.718 "max_r2t_per_connection": 4, 00:10:44.718 "pdu_pool_size": 36864, 00:10:44.718 "immediate_data_pool_size": 16384, 00:10:44.718 "data_out_pool_size": 2048 00:10:44.718 } 00:10:44.718 } 00:10:44.718 ] 00:10:44.718 }, 00:10:44.718 { 00:10:44.718 "subsystem": "vhost_scsi", 00:10:44.718 "config": [] 00:10:44.718 } 00:10:44.718 ] 00:10:44.718 } 00:10:44.718 15:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:44.718 15:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 80885 00:10:44.718 15:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 80885 ']' 00:10:44.718 15:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 80885 00:10:44.718 15:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:10:44.718 15:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:44.718 15:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80885 00:10:44.718 15:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:44.718 killing process with pid 80885 00:10:44.718 15:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:44.718 15:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80885' 00:10:44.718 15:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 80885 00:10:44.718 15:04:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 80885 00:10:45.286 15:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=80914 00:10:45.286 15:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:45.286 15:04:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 80914 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 80914 ']' 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 80914 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80914 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:50.555 killing process with pid 80914 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80914' 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 80914 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 80914 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:50.555 00:10:50.555 real 0m7.088s 00:10:50.555 user 0m6.681s 00:10:50.555 sys 0m0.818s 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.555 ************************************ 00:10:50.555 END TEST skip_rpc_with_json 00:10:50.555 ************************************ 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:50.555 15:04:45 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:50.555 15:04:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:50.555 15:04:45 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:50.555 15:04:45 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.555 15:04:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.555 ************************************ 00:10:50.555 START TEST skip_rpc_with_delay 00:10:50.555 ************************************ 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:50.555 15:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:50.813 [2024-07-23 15:04:46.044419] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
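The skip_rpc_with_json run that finishes above is a save/replay cycle: configure the target over RPC, dump the configuration with save_config, stop the target, then restart spdk_tgt with --no-rpc-server and --json and confirm the transport comes back. A by-hand sketch of that cycle under the same assumptions; the /tmp file names and the sleep are placeholders, the flags and the 'TCP Transport Init' marker are taken from the log:
SPDK_DIR=/home/vagrant/spdk_repo/spdk                                          # repo path as seen in the log
"$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t tcp                        # same call the test makes before saving
"$SPDK_DIR"/scripts/rpc.py save_config > /tmp/config.json                      # JSON equivalent of the dump shown above
# stop the RPC-configured target first (the test does killprocess before replaying), then:
"$SPDK_DIR"/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/spdk_replay.log 2>&1 &
sleep 5                                                                         # give the target time to load the config
grep -q 'TCP Transport Init' /tmp/spdk_replay.log && echo 'transport restored from JSON'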
00:10:50.813 [2024-07-23 15:04:46.044642] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:10:50.813 15:04:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:10:50.813 15:04:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:50.813 15:04:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:50.813 15:04:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:50.813 00:10:50.813 real 0m0.157s 00:10:50.813 user 0m0.085s 00:10:50.813 sys 0m0.073s 00:10:50.813 15:04:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.813 15:04:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:50.813 ************************************ 00:10:50.813 END TEST skip_rpc_with_delay 00:10:50.813 ************************************ 00:10:50.813 15:04:46 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:50.813 15:04:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:50.813 15:04:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:50.813 15:04:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:50.813 15:04:46 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:50.813 15:04:46 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.813 15:04:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.813 ************************************ 00:10:50.813 START TEST exit_on_failed_rpc_init 00:10:50.813 ************************************ 00:10:50.813 15:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:10:50.813 15:04:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=81031 00:10:50.813 15:04:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 81031 00:10:50.813 15:04:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:50.813 15:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 81031 ']' 00:10:50.813 15:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.813 15:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.813 15:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.813 15:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.813 15:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:51.071 [2024-07-23 15:04:46.259320] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:10:51.071 [2024-07-23 15:04:46.259601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81031 ] 00:10:51.071 [2024-07-23 15:04:46.410995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.071 [2024-07-23 15:04:46.490433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.011 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.011 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:10:52.011 15:04:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:52.011 15:04:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:52.011 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:10:52.011 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:52.011 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:52.012 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.012 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:52.012 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.012 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:52.012 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.012 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:52.012 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:52.012 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:52.012 [2024-07-23 15:04:47.309040] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:10:52.012 [2024-07-23 15:04:47.309231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81049 ] 00:10:52.269 [2024-07-23 15:04:47.455125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.269 [2024-07-23 15:04:47.546070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.269 [2024-07-23 15:04:47.546271] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:10:52.269 [2024-07-23 15:04:47.546317] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:52.270 [2024-07-23 15:04:47.546368] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:52.270 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:10:52.270 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:52.270 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:10:52.270 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:10:52.270 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:10:52.270 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:52.270 15:04:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:52.270 15:04:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 81031 00:10:52.270 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 81031 ']' 00:10:52.270 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 81031 00:10:52.270 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:10:52.527 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:52.527 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81031 00:10:52.527 killing process with pid 81031 00:10:52.527 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:52.527 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:52.527 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81031' 00:10:52.527 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 81031 00:10:52.527 15:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 81031 00:10:52.784 00:10:52.784 real 0m1.963s 00:10:52.784 user 0m2.242s 00:10:52.784 sys 0m0.570s 00:10:52.784 15:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.784 ************************************ 00:10:52.784 END TEST exit_on_failed_rpc_init 00:10:52.784 ************************************ 00:10:52.784 15:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:52.784 15:04:48 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:52.784 15:04:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:52.784 00:10:52.784 real 0m14.976s 00:10:52.784 user 0m14.137s 00:10:52.784 sys 0m2.041s 00:10:52.784 15:04:48 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.784 15:04:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.784 ************************************ 00:10:52.784 END TEST skip_rpc 00:10:52.784 ************************************ 00:10:53.041 15:04:48 -- common/autotest_common.sh@1142 -- # return 0 00:10:53.041 15:04:48 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:53.041 15:04:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:53.041 
15:04:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.041 15:04:48 -- common/autotest_common.sh@10 -- # set +x 00:10:53.041 ************************************ 00:10:53.041 START TEST rpc_client 00:10:53.041 ************************************ 00:10:53.042 15:04:48 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:53.042 * Looking for test storage... 00:10:53.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:53.042 15:04:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:53.042 OK 00:10:53.042 15:04:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:53.042 00:10:53.042 real 0m0.144s 00:10:53.042 user 0m0.057s 00:10:53.042 sys 0m0.096s 00:10:53.042 15:04:48 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.042 15:04:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:53.042 ************************************ 00:10:53.042 END TEST rpc_client 00:10:53.042 ************************************ 00:10:53.042 15:04:48 -- common/autotest_common.sh@1142 -- # return 0 00:10:53.042 15:04:48 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:53.042 15:04:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:53.042 15:04:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.042 15:04:48 -- common/autotest_common.sh@10 -- # set +x 00:10:53.042 ************************************ 00:10:53.042 START TEST json_config 00:10:53.042 ************************************ 00:10:53.042 15:04:48 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:db4a2233-2afc-4dde-b9ec-9e18d94548e8 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=db4a2233-2afc-4dde-b9ec-9e18d94548e8 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.300 15:04:48 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:53.300 15:04:48 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.300 15:04:48 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.300 15:04:48 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.300 15:04:48 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:53.300 15:04:48 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:53.300 15:04:48 json_config -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:53.300 15:04:48 json_config -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:53.300 15:04:48 json_config -- paths/export.sh@6 -- # export PATH 00:10:53.300 15:04:48 json_config -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@47 -- # : 0 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.300 15:04:48 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:53.300 15:04:48 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:53.300 INFO: JSON configuration test init 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:10:53.300 15:04:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:53.300 15:04:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:10:53.300 15:04:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:53.300 15:04:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:53.300 15:04:48 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:10:53.300 15:04:48 json_config -- json_config/common.sh@9 -- # local app=target 00:10:53.300 15:04:48 json_config -- json_config/common.sh@10 -- # shift 00:10:53.300 15:04:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:53.300 Waiting for target to run... 00:10:53.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:10:53.301 15:04:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:53.301 15:04:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:53.301 15:04:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:53.301 15:04:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:53.301 15:04:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=81170 00:10:53.301 15:04:48 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:10:53.301 15:04:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:53.301 15:04:48 json_config -- json_config/common.sh@25 -- # waitforlisten 81170 /var/tmp/spdk_tgt.sock 00:10:53.301 15:04:48 json_config -- common/autotest_common.sh@829 -- # '[' -z 81170 ']' 00:10:53.301 15:04:48 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:53.301 15:04:48 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:53.301 15:04:48 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:53.301 15:04:48 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:53.301 15:04:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:53.301 [2024-07-23 15:04:48.620231] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:10:53.301 [2024-07-23 15:04:48.620417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81170 ] 00:10:53.866 [2024-07-23 15:04:48.991224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.866 [2024-07-23 15:04:49.035831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.430 00:10:54.430 15:04:49 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:54.430 15:04:49 json_config -- common/autotest_common.sh@862 -- # return 0 00:10:54.430 15:04:49 json_config -- json_config/common.sh@26 -- # echo '' 00:10:54.430 15:04:49 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:10:54.430 15:04:49 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:10:54.430 15:04:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:54.430 15:04:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:54.430 15:04:49 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:10:54.430 15:04:49 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:10:54.430 15:04:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:54.430 15:04:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:54.430 15:04:49 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:10:54.430 15:04:49 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:10:54.430 15:04:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:10:54.687 15:04:50 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 
00:10:54.687 15:04:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:10:54.687 15:04:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:54.687 15:04:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:54.687 15:04:50 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:10:54.687 15:04:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:10:54.687 15:04:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:10:54.687 15:04:50 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:10:54.687 15:04:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:10:54.687 15:04:50 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:10:54.944 15:04:50 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:10:54.944 15:04:50 json_config -- json_config/json_config.sh@48 -- # local get_types 00:10:54.944 15:04:50 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:10:54.944 15:04:50 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:10:54.944 15:04:50 json_config -- json_config/json_config.sh@51 -- # sort 00:10:54.944 15:04:50 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:10:54.944 15:04:50 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:10:54.944 15:04:50 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:10:54.944 15:04:50 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:10:54.944 15:04:50 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:10:54.944 15:04:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:54.944 15:04:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:55.201 15:04:50 json_config -- json_config/json_config.sh@59 -- # return 0 00:10:55.201 15:04:50 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:10:55.201 15:04:50 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:10:55.201 15:04:50 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:10:55.201 15:04:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:55.201 15:04:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:55.201 15:04:50 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:10:55.201 15:04:50 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:10:55.201 15:04:50 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:10:55.201 15:04:50 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:10:55.201 15:04:50 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:10:55.201 15:04:50 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:55.201 15:04:50 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:55.201 15:04:50 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:10:55.201 15:04:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 
00:10:55.201 15:04:50 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:55.458 15:04:50 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:10:55.458 15:04:50 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:55.458 15:04:50 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:55.458 15:04:50 json_config -- json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:10:55.458 15:04:50 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:10:55.458 15:04:50 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:10:55.458 15:04:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:10:55.715 Nvme0n1p0 Nvme0n1p1 00:10:55.715 15:04:50 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:10:55.715 15:04:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:10:55.972 [2024-07-23 15:04:51.167903] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:55.972 [2024-07-23 15:04:51.168003] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:55.972 00:10:55.972 15:04:51 json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:10:55.972 15:04:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:10:56.229 Malloc3 00:10:56.229 15:04:51 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:56.229 15:04:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:56.487 [2024-07-23 15:04:51.704113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:56.487 [2024-07-23 15:04:51.704219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.487 [2024-07-23 15:04:51.704268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:10:56.487 [2024-07-23 15:04:51.704291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.487 [2024-07-23 15:04:51.707254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.487 [2024-07-23 15:04:51.707307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:56.487 PTBdevFromMalloc3 00:10:56.487 15:04:51 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:10:56.487 15:04:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:10:56.744 Null0 00:10:56.744 15:04:52 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:10:56.744 15:04:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:10:57.001 Malloc0 00:10:57.001 15:04:52 json_config -- json_config/json_config.sh@128 -- # tgt_rpc 
bdev_malloc_create 16 4096 --name Malloc1 00:10:57.001 15:04:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:10:57.259 Malloc1 00:10:57.259 15:04:52 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:10:57.259 15:04:52 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:10:57.517 102400+0 records in 00:10:57.517 102400+0 records out 00:10:57.517 104857600 bytes (105 MB, 100 MiB) copied, 0.328983 s, 319 MB/s 00:10:57.517 15:04:52 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:10:57.517 15:04:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:10:57.775 aio_disk 00:10:57.775 15:04:53 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:10:57.775 15:04:53 json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:57.775 15:04:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:58.033 2e557974-2c87-4446-93b0-4895024805ee 00:10:58.033 15:04:53 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:10:58.033 15:04:53 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:10:58.033 15:04:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:10:58.289 15:04:53 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:10:58.289 15:04:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:10:58.546 15:04:53 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:58.546 15:04:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:58.803 15:04:54 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:58.804 15:04:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@161 -- # [[ 0 -eq 1 ]] 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications 
bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:992380c2-4ca1-43b5-bb0c-75e0560a3b32 bdev_register:7eaf25e8-a3ba-4c0d-a964-2a217b2d7d53 bdev_register:9cebdd6b-38e3-454c-8a5e-870b90c51296 bdev_register:d2d7a367-bdb4-44a3-8d39-be89e26ef626 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@75 -- # sort 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:992380c2-4ca1-43b5-bb0c-75e0560a3b32 bdev_register:7eaf25e8-a3ba-4c0d-a964-2a217b2d7d53 bdev_register:9cebdd6b-38e3-454c-8a5e-870b90c51296 bdev_register:d2d7a367-bdb4-44a3-8d39-be89e26ef626 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@76 -- # sort 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:10:59.062 15:04:54 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:59.062 15:04:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:59.320 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:10:59.320 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.320 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.320 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:10:59.320 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.320 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.320 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:10:59.320 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.320 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.320 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.321 15:04:54 
json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:992380c2-4ca1-43b5-bb0c-75e0560a3b32 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:7eaf25e8-a3ba-4c0d-a964-2a217b2d7d53 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:9cebdd6b-38e3-454c-8a5e-870b90c51296 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:d2d7a367-bdb4-44a3-8d39-be89e26ef626 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@78 -- # 
[[ bdev_register:7eaf25e8-a3ba-4c0d-a964-2a217b2d7d53 bdev_register:992380c2-4ca1-43b5-bb0c-75e0560a3b32 bdev_register:9cebdd6b-38e3-454c-8a5e-870b90c51296 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:d2d7a367-bdb4-44a3-8d39-be89e26ef626 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\e\a\f\2\5\e\8\-\a\3\b\a\-\4\c\0\d\-\a\9\6\4\-\2\a\2\1\7\b\2\d\7\d\5\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\9\2\3\8\0\c\2\-\4\c\a\1\-\4\3\b\5\-\b\b\0\c\-\7\5\e\0\5\6\0\a\3\b\3\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\c\e\b\d\d\6\b\-\3\8\e\3\-\4\5\4\c\-\8\a\5\e\-\8\7\0\b\9\0\c\5\1\2\9\6\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\2\d\7\a\3\6\7\-\b\d\b\4\-\4\4\a\3\-\8\d\3\9\-\b\e\8\9\e\2\6\e\f\6\2\6 ]] 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@90 -- # cat 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:7eaf25e8-a3ba-4c0d-a964-2a217b2d7d53 bdev_register:992380c2-4ca1-43b5-bb0c-75e0560a3b32 bdev_register:9cebdd6b-38e3-454c-8a5e-870b90c51296 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:d2d7a367-bdb4-44a3-8d39-be89e26ef626 00:10:59.321 Expected events matched: 00:10:59.321 bdev_register:7eaf25e8-a3ba-4c0d-a964-2a217b2d7d53 00:10:59.321 bdev_register:992380c2-4ca1-43b5-bb0c-75e0560a3b32 00:10:59.321 bdev_register:9cebdd6b-38e3-454c-8a5e-870b90c51296 00:10:59.321 bdev_register:Malloc0 00:10:59.321 bdev_register:Malloc0p0 00:10:59.321 bdev_register:Malloc0p1 00:10:59.321 bdev_register:Malloc0p2 00:10:59.321 bdev_register:Malloc1 00:10:59.321 bdev_register:Malloc3 00:10:59.321 bdev_register:Null0 00:10:59.321 bdev_register:Nvme0n1 00:10:59.321 bdev_register:Nvme0n1p0 00:10:59.321 bdev_register:Nvme0n1p1 00:10:59.321 bdev_register:PTBdevFromMalloc3 00:10:59.321 bdev_register:aio_disk 00:10:59.321 bdev_register:d2d7a367-bdb4-44a3-8d39-be89e26ef626 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config 00:10:59.321 15:04:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:59.321 15:04:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:10:59.321 15:04:54 json_config -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:10:59.321 15:04:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:10:59.321 15:04:54 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:59.321 15:04:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:59.579 MallocBdevForConfigChangeCheck 00:10:59.579 15:04:54 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:10:59.579 15:04:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:59.579 15:04:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:59.579 15:04:54 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:10:59.579 15:04:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:00.172 INFO: shutting down applications... 00:11:00.172 15:04:55 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:11:00.172 15:04:55 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:11:00.172 15:04:55 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:11:00.172 15:04:55 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:11:00.172 15:04:55 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:11:00.172 [2024-07-23 15:04:55.586654] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:11:00.430 Calling clear_vhost_scsi_subsystem 00:11:00.430 Calling clear_iscsi_subsystem 00:11:00.430 Calling clear_vhost_blk_subsystem 00:11:00.430 Calling clear_ublk_subsystem 00:11:00.430 Calling clear_nbd_subsystem 00:11:00.430 Calling clear_nvmf_subsystem 00:11:00.430 Calling clear_bdev_subsystem 00:11:00.430 15:04:55 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:11:00.430 15:04:55 json_config -- json_config/json_config.sh@347 -- # count=100 00:11:00.430 15:04:55 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:11:00.430 15:04:55 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:00.430 15:04:55 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:11:00.430 15:04:55 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:11:00.997 15:04:56 json_config -- json_config/json_config.sh@349 -- # break 00:11:00.997 15:04:56 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:11:00.997 15:04:56 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:11:00.997 15:04:56 json_config -- json_config/common.sh@31 -- # local app=target 00:11:00.997 15:04:56 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:00.997 15:04:56 json_config -- json_config/common.sh@35 -- # [[ -n 81170 ]] 00:11:00.997 15:04:56 json_config -- 
json_config/common.sh@38 -- # kill -SIGINT 81170 00:11:00.997 15:04:56 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:00.997 15:04:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:00.997 15:04:56 json_config -- json_config/common.sh@41 -- # kill -0 81170 00:11:00.997 15:04:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:11:01.255 15:04:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:11:01.255 15:04:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:01.255 15:04:56 json_config -- json_config/common.sh@41 -- # kill -0 81170 00:11:01.255 SPDK target shutdown done 00:11:01.255 15:04:56 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:01.255 15:04:56 json_config -- json_config/common.sh@43 -- # break 00:11:01.255 15:04:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:01.255 15:04:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:01.255 INFO: relaunching applications... 00:11:01.255 15:04:56 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:11:01.255 15:04:56 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:01.255 15:04:56 json_config -- json_config/common.sh@9 -- # local app=target 00:11:01.255 15:04:56 json_config -- json_config/common.sh@10 -- # shift 00:11:01.255 15:04:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:01.255 15:04:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:01.255 15:04:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:11:01.255 15:04:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:01.255 15:04:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:01.255 Waiting for target to run... 00:11:01.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:01.255 15:04:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=81414 00:11:01.255 15:04:56 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:01.255 15:04:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:01.255 15:04:56 json_config -- json_config/common.sh@25 -- # waitforlisten 81414 /var/tmp/spdk_tgt.sock 00:11:01.255 15:04:56 json_config -- common/autotest_common.sh@829 -- # '[' -z 81414 ']' 00:11:01.255 15:04:56 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:01.255 15:04:56 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:01.255 15:04:56 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:01.255 15:04:56 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:01.255 15:04:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:01.512 [2024-07-23 15:04:56.721371] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:11:01.512 [2024-07-23 15:04:56.721767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81414 ] 00:11:01.771 [2024-07-23 15:04:57.099088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.771 [2024-07-23 15:04:57.130845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.030 [2024-07-23 15:04:57.285344] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:11:02.030 [2024-07-23 15:04:57.285653] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:11:02.030 [2024-07-23 15:04:57.293298] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:11:02.030 [2024-07-23 15:04:57.293508] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:11:02.030 [2024-07-23 15:04:57.301334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:02.030 [2024-07-23 15:04:57.301577] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:02.030 [2024-07-23 15:04:57.301699] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:02.030 [2024-07-23 15:04:57.386978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:02.030 [2024-07-23 15:04:57.387487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.030 [2024-07-23 15:04:57.387534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:11:02.030 [2024-07-23 15:04:57.387560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.030 [2024-07-23 15:04:57.388148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.030 [2024-07-23 15:04:57.388177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:11:02.288 15:04:57 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.288 15:04:57 json_config -- common/autotest_common.sh@862 -- # return 0 00:11:02.288 15:04:57 json_config -- json_config/common.sh@26 -- # echo '' 00:11:02.288 00:11:02.288 INFO: Checking if target configuration is the same... 00:11:02.288 15:04:57 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:11:02.288 15:04:57 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:11:02.288 15:04:57 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:02.288 15:04:57 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:11:02.288 15:04:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:02.288 + '[' 2 -ne 2 ']' 00:11:02.288 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:02.288 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:11:02.288 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:02.288 +++ basename /dev/fd/62 00:11:02.288 ++ mktemp /tmp/62.XXX 00:11:02.288 + tmp_file_1=/tmp/62.LF2 00:11:02.288 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:02.288 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:02.288 + tmp_file_2=/tmp/spdk_tgt_config.json.dsZ 00:11:02.288 + ret=0 00:11:02.288 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:02.854 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:02.854 + diff -u /tmp/62.LF2 /tmp/spdk_tgt_config.json.dsZ 00:11:02.854 INFO: JSON config files are the same 00:11:02.854 + echo 'INFO: JSON config files are the same' 00:11:02.854 + rm /tmp/62.LF2 /tmp/spdk_tgt_config.json.dsZ 00:11:02.854 + exit 0 00:11:02.854 INFO: changing configuration and checking if this can be detected... 00:11:02.854 15:04:58 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:11:02.854 15:04:58 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:11:02.854 15:04:58 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:02.854 15:04:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:02.854 15:04:58 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:11:02.854 15:04:58 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:02.854 15:04:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:02.854 + '[' 2 -ne 2 ']' 00:11:02.854 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:02.854 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:11:02.854 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:02.854 +++ basename /dev/fd/62 00:11:02.854 ++ mktemp /tmp/62.XXX 00:11:02.854 + tmp_file_1=/tmp/62.F17 00:11:02.854 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:02.854 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:03.112 + tmp_file_2=/tmp/spdk_tgt_config.json.XnB 00:11:03.112 + ret=0 00:11:03.112 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:03.370 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:03.370 + diff -u /tmp/62.F17 /tmp/spdk_tgt_config.json.XnB 00:11:03.370 + ret=1 00:11:03.370 + echo '=== Start of file: /tmp/62.F17 ===' 00:11:03.370 + cat /tmp/62.F17 00:11:03.370 + echo '=== End of file: /tmp/62.F17 ===' 00:11:03.370 + echo '' 00:11:03.370 + echo '=== Start of file: /tmp/spdk_tgt_config.json.XnB ===' 00:11:03.370 + cat /tmp/spdk_tgt_config.json.XnB 00:11:03.370 + echo '=== End of file: /tmp/spdk_tgt_config.json.XnB ===' 00:11:03.370 + echo '' 00:11:03.370 + rm /tmp/62.F17 /tmp/spdk_tgt_config.json.XnB 00:11:03.370 + exit 1 00:11:03.370 INFO: configuration change detected. 00:11:03.370 15:04:58 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 
00:11:03.370 15:04:58 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:11:03.370 15:04:58 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:11:03.370 15:04:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:03.370 15:04:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:03.370 15:04:58 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:11:03.370 15:04:58 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:11:03.370 15:04:58 json_config -- json_config/json_config.sh@321 -- # [[ -n 81414 ]] 00:11:03.370 15:04:58 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:11:03.370 15:04:58 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:11:03.370 15:04:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:03.370 15:04:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:03.370 15:04:58 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]] 00:11:03.370 15:04:58 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:11:03.370 15:04:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:11:03.628 15:04:58 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:11:03.628 15:04:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:11:03.886 15:04:59 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:11:03.886 15:04:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:11:04.145 15:04:59 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:11:04.145 15:04:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:11:04.403 15:04:59 json_config -- json_config/json_config.sh@197 -- # uname -s 00:11:04.403 15:04:59 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:11:04.403 15:04:59 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:11:04.403 15:04:59 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:11:04.403 15:04:59 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:11:04.403 15:04:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:04.403 15:04:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:04.403 15:04:59 json_config -- json_config/json_config.sh@327 -- # killprocess 81414 00:11:04.403 15:04:59 json_config -- common/autotest_common.sh@948 -- # '[' -z 81414 ']' 00:11:04.403 15:04:59 json_config -- common/autotest_common.sh@952 -- # kill -0 81414 00:11:04.403 15:04:59 json_config -- common/autotest_common.sh@953 -- # uname 00:11:04.403 15:04:59 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:04.403 15:04:59 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81414 00:11:04.403 15:04:59 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:04.403 killing process with pid 
81414 00:11:04.403 15:04:59 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:04.403 15:04:59 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81414' 00:11:04.403 15:04:59 json_config -- common/autotest_common.sh@967 -- # kill 81414 00:11:04.403 15:04:59 json_config -- common/autotest_common.sh@972 -- # wait 81414 00:11:04.661 15:05:00 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:04.662 15:05:00 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:11:04.662 15:05:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:04.662 15:05:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:04.662 INFO: Success 00:11:04.662 15:05:00 json_config -- json_config/json_config.sh@332 -- # return 0 00:11:04.662 15:05:00 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:11:04.662 00:11:04.662 real 0m11.651s 00:11:04.662 user 0m17.582s 00:11:04.662 sys 0m2.648s 00:11:04.662 15:05:00 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:04.662 ************************************ 00:11:04.662 END TEST json_config 00:11:04.662 ************************************ 00:11:04.662 15:05:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:04.920 15:05:00 -- common/autotest_common.sh@1142 -- # return 0 00:11:04.920 15:05:00 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:04.920 15:05:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:04.920 15:05:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.920 15:05:00 -- common/autotest_common.sh@10 -- # set +x 00:11:04.920 ************************************ 00:11:04.920 START TEST json_config_extra_key 00:11:04.920 ************************************ 00:11:04.920 15:05:00 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:04.920 15:05:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:04.920 15:05:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:db4a2233-2afc-4dde-b9ec-9e18d94548e8 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@18 -- # 
NVME_HOSTID=db4a2233-2afc-4dde-b9ec-9e18d94548e8 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:04.921 15:05:00 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.921 15:05:00 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.921 15:05:00 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.921 15:05:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:04.921 15:05:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:04.921 15:05:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:04.921 15:05:00 json_config_extra_key -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:04.921 15:05:00 json_config_extra_key -- paths/export.sh@6 -- # export PATH 00:11:04.921 15:05:00 json_config_extra_key -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:04.921 15:05:00 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:04.921 15:05:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:04.921 15:05:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:04.921 15:05:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:04.921 15:05:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:04.921 15:05:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:04.921 15:05:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:04.921 15:05:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:04.921 15:05:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:04.921 15:05:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:04.921 INFO: launching applications... 00:11:04.921 15:05:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:04.921 15:05:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
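The json_config/common.sh setup traced here keeps per-application state in a set of associative arrays keyed by application name ('target' in this run): its PID, its RPC socket, its extra spdk_tgt parameters, and the JSON config it is started with. A small sketch of that bookkeeping pattern, with the array names and values taken from the trace (the echo at the end is purely illustrative):

    #!/usr/bin/env bash
    # Per-app bookkeeping in the style of test/json_config/common.sh.
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    app=target
    echo "app=$app socket=${app_socket[$app]} params=${app_params[$app]} config=${configs_path[$app]}"

Keying everything by app name lets the same helpers start, query and stop either a 'target' or an 'initiator' instance without duplicated code.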
00:11:04.921 15:05:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:04.921 15:05:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:11:04.921 15:05:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:11:04.921 15:05:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:04.921 15:05:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:04.921 15:05:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:11:04.921 15:05:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:04.921 15:05:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:04.921 15:05:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=81565 00:11:04.921 Waiting for target to run... 00:11:04.921 15:05:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:04.921 15:05:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 81565 /var/tmp/spdk_tgt.sock 00:11:04.921 15:05:00 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 81565 ']' 00:11:04.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:04.921 15:05:00 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:04.921 15:05:00 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.921 15:05:00 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:04.921 15:05:00 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.921 15:05:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:04.921 15:05:00 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:04.921 [2024-07-23 15:05:00.292849] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:11:04.921 [2024-07-23 15:05:00.293015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81565 ] 00:11:05.487 [2024-07-23 15:05:00.690885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.487 [2024-07-23 15:05:00.742948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.746 15:05:01 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:05.746 00:11:05.746 INFO: shutting down applications... 00:11:05.746 15:05:01 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:11:05.746 15:05:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:11:05.746 15:05:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
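json_config_test_start_app, traced above, launches spdk_tgt in the background with the extra_key.json config and then blocks in waitforlisten until the target answers on its UNIX-domain RPC socket. A hedged sketch of that launch-and-wait flow (the retry count and the use of rpc_get_methods as the readiness probe are illustrative; the real helper lives in common/autotest_common.sh):

    #!/usr/bin/env bash
    set -euo pipefail
    rootdir=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk_tgt.sock

    # Start the target in the background and remember its PID.
    "$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
        --json "$rootdir/test/json_config/extra_key.json" &
    tgt_pid=$!

    # Poll until the RPC socket answers, or give up after ~10 seconds.
    for ((i = 0; i < 100; i++)); do
        if "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &> /dev/null; then
            echo "target is listening on $sock (pid $tgt_pid)"
            break
        fi
        sleep 0.1
    done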
00:11:05.746 15:05:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:05.746 15:05:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:11:05.746 15:05:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:05.746 15:05:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 81565 ]] 00:11:05.746 15:05:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 81565 00:11:05.746 15:05:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:05.746 15:05:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:05.746 15:05:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 81565 00:11:05.746 15:05:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:06.313 15:05:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:06.313 15:05:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:06.313 15:05:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 81565 00:11:06.313 15:05:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:06.313 15:05:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:06.313 SPDK target shutdown done 00:11:06.313 15:05:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:06.313 15:05:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:06.313 Success 00:11:06.313 15:05:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:06.313 00:11:06.313 real 0m1.525s 00:11:06.313 user 0m1.309s 00:11:06.313 sys 0m0.482s 00:11:06.313 15:05:01 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.313 ************************************ 00:11:06.313 END TEST json_config_extra_key 00:11:06.313 ************************************ 00:11:06.313 15:05:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:06.313 15:05:01 -- common/autotest_common.sh@1142 -- # return 0 00:11:06.313 15:05:01 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:06.313 15:05:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:06.313 15:05:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.313 15:05:01 -- common/autotest_common.sh@10 -- # set +x 00:11:06.313 ************************************ 00:11:06.313 START TEST alias_rpc 00:11:06.313 ************************************ 00:11:06.313 15:05:01 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:06.571 * Looking for test storage... 
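The shutdown path traced just above mirrors the startup one: common.sh sends SIGINT to the recorded PID and then polls kill -0 for up to 30 half-second intervals before declaring the target gone. A minimal sketch of that bounded-wait pattern (tgt_pid stands in for app_pid["target"] from the trace):

    #!/usr/bin/env bash
    # Graceful shutdown with a bounded wait, after json_config/common.sh.
    tgt_pid=$1                       # PID of the running spdk_tgt

    kill -SIGINT "$tgt_pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$tgt_pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            exit 0
        fi
        sleep 0.5
    done
    echo "target $tgt_pid did not exit in time" >&2
    exit 1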
00:11:06.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:06.571 15:05:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:06.571 15:05:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=81638 00:11:06.571 15:05:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:06.571 15:05:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 81638 00:11:06.571 15:05:01 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 81638 ']' 00:11:06.571 15:05:01 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.571 15:05:01 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.571 15:05:01 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.571 15:05:01 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.571 15:05:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.571 [2024-07-23 15:05:01.902565] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:11:06.571 [2024-07-23 15:05:01.902867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81638 ] 00:11:06.829 [2024-07-23 15:05:02.049067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.829 [2024-07-23 15:05:02.103431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.394 15:05:02 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:07.394 15:05:02 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:07.394 15:05:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:07.651 15:05:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 81638 00:11:07.651 15:05:03 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 81638 ']' 00:11:07.651 15:05:03 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 81638 00:11:07.651 15:05:03 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:11:07.651 15:05:03 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:07.651 15:05:03 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81638 00:11:07.651 15:05:03 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:07.651 killing process with pid 81638 00:11:07.651 15:05:03 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:07.651 15:05:03 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81638' 00:11:07.651 15:05:03 alias_rpc -- common/autotest_common.sh@967 -- # kill 81638 00:11:07.651 15:05:03 alias_rpc -- common/autotest_common.sh@972 -- # wait 81638 00:11:08.217 00:11:08.217 real 0m1.779s 00:11:08.217 user 0m1.908s 00:11:08.217 sys 0m0.519s 00:11:08.217 15:05:03 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:08.217 ************************************ 00:11:08.217 END TEST alias_rpc 00:11:08.217 ************************************ 00:11:08.217 15:05:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.217 
15:05:03 -- common/autotest_common.sh@1142 -- # return 0 00:11:08.217 15:05:03 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:11:08.217 15:05:03 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:08.217 15:05:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:08.217 15:05:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:08.217 15:05:03 -- common/autotest_common.sh@10 -- # set +x 00:11:08.217 ************************************ 00:11:08.217 START TEST spdkcli_tcp 00:11:08.217 ************************************ 00:11:08.217 15:05:03 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:08.217 * Looking for test storage... 00:11:08.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:11:08.217 15:05:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:11:08.217 15:05:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:11:08.217 15:05:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:11:08.217 15:05:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:11:08.217 15:05:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:11:08.217 15:05:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:08.217 15:05:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:11:08.217 15:05:03 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:08.217 15:05:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:08.217 15:05:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=81710 00:11:08.217 15:05:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 81710 00:11:08.217 15:05:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:11:08.217 15:05:03 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 81710 ']' 00:11:08.217 15:05:03 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.217 15:05:03 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.217 15:05:03 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.217 15:05:03 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.217 15:05:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:08.475 [2024-07-23 15:05:03.723612] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
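Each of these tests ends by calling killprocess from common/autotest_common.sh (pids 81414, 81638 and 81710 in this run): it checks the process's comm name with ps, logs which PID it is about to kill, then kills it and waits for it to be reaped. A hedged sketch of that helper; the sudo branch is simplified and the real helper also special-cases FreeBSD via uname:

    #!/usr/bin/env bash
    # Simplified killprocess, after common/autotest_common.sh.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                       # nothing to do if it is already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # Linux probe; FreeBSD differs
        echo "killing process with pid $pid"
        if [[ $process_name == sudo ]]; then
            sudo kill "$pid"                             # kill through sudo if started that way
        else
            kill "$pid"
        fi
        wait "$pid" 2> /dev/null || true                 # reap it if it is our child
    }

    killprocess "$1"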
00:11:08.475 [2024-07-23 15:05:03.723928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81710 ] 00:11:08.475 [2024-07-23 15:05:03.880969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:08.733 [2024-07-23 15:05:03.943466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.733 [2024-07-23 15:05:03.943521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.299 15:05:04 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.299 15:05:04 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:11:09.299 15:05:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=81727 00:11:09.299 15:05:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:11:09.299 15:05:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:11:09.557 [ 00:11:09.558 "spdk_get_version", 00:11:09.558 "rpc_get_methods", 00:11:09.558 "keyring_get_keys", 00:11:09.558 "trace_get_info", 00:11:09.558 "trace_get_tpoint_group_mask", 00:11:09.558 "trace_disable_tpoint_group", 00:11:09.558 "trace_enable_tpoint_group", 00:11:09.558 "trace_clear_tpoint_mask", 00:11:09.558 "trace_set_tpoint_mask", 00:11:09.558 "framework_get_pci_devices", 00:11:09.558 "framework_get_config", 00:11:09.558 "framework_get_subsystems", 00:11:09.558 "iobuf_get_stats", 00:11:09.558 "iobuf_set_options", 00:11:09.558 "sock_get_default_impl", 00:11:09.558 "sock_set_default_impl", 00:11:09.558 "sock_impl_set_options", 00:11:09.558 "sock_impl_get_options", 00:11:09.558 "vmd_rescan", 00:11:09.558 "vmd_remove_device", 00:11:09.558 "vmd_enable", 00:11:09.558 "accel_get_stats", 00:11:09.558 "accel_set_options", 00:11:09.558 "accel_set_driver", 00:11:09.558 "accel_crypto_key_destroy", 00:11:09.558 "accel_crypto_keys_get", 00:11:09.558 "accel_crypto_key_create", 00:11:09.558 "accel_assign_opc", 00:11:09.558 "accel_get_module_info", 00:11:09.558 "accel_get_opc_assignments", 00:11:09.558 "notify_get_notifications", 00:11:09.558 "notify_get_types", 00:11:09.558 "bdev_get_histogram", 00:11:09.558 "bdev_enable_histogram", 00:11:09.558 "bdev_set_qos_limit", 00:11:09.558 "bdev_set_qd_sampling_period", 00:11:09.558 "bdev_get_bdevs", 00:11:09.558 "bdev_reset_iostat", 00:11:09.558 "bdev_get_iostat", 00:11:09.558 "bdev_examine", 00:11:09.558 "bdev_wait_for_examine", 00:11:09.558 "bdev_set_options", 00:11:09.558 "scsi_get_devices", 00:11:09.558 "thread_set_cpumask", 00:11:09.558 "framework_get_governor", 00:11:09.558 "framework_get_scheduler", 00:11:09.558 "framework_set_scheduler", 00:11:09.558 "framework_get_reactors", 00:11:09.558 "thread_get_io_channels", 00:11:09.558 "thread_get_pollers", 00:11:09.558 "thread_get_stats", 00:11:09.558 "framework_monitor_context_switch", 00:11:09.558 "spdk_kill_instance", 00:11:09.558 "log_enable_timestamps", 00:11:09.558 "log_get_flags", 00:11:09.558 "log_clear_flag", 00:11:09.558 "log_set_flag", 00:11:09.558 "log_get_level", 00:11:09.558 "log_set_level", 00:11:09.558 "log_get_print_level", 00:11:09.558 "log_set_print_level", 00:11:09.558 "framework_enable_cpumask_locks", 00:11:09.558 "framework_disable_cpumask_locks", 00:11:09.558 "framework_wait_init", 00:11:09.558 "framework_start_init", 00:11:09.558 
"virtio_blk_create_transport", 00:11:09.558 "virtio_blk_get_transports", 00:11:09.558 "vhost_controller_set_coalescing", 00:11:09.558 "vhost_get_controllers", 00:11:09.558 "vhost_delete_controller", 00:11:09.558 "vhost_create_blk_controller", 00:11:09.558 "vhost_scsi_controller_remove_target", 00:11:09.558 "vhost_scsi_controller_add_target", 00:11:09.558 "vhost_start_scsi_controller", 00:11:09.558 "vhost_create_scsi_controller", 00:11:09.558 "ublk_recover_disk", 00:11:09.558 "ublk_get_disks", 00:11:09.558 "ublk_stop_disk", 00:11:09.558 "ublk_start_disk", 00:11:09.558 "ublk_destroy_target", 00:11:09.558 "ublk_create_target", 00:11:09.558 "nbd_get_disks", 00:11:09.558 "nbd_stop_disk", 00:11:09.558 "nbd_start_disk", 00:11:09.558 "env_dpdk_get_mem_stats", 00:11:09.558 "nvmf_stop_mdns_prr", 00:11:09.558 "nvmf_publish_mdns_prr", 00:11:09.558 "nvmf_subsystem_get_listeners", 00:11:09.558 "nvmf_subsystem_get_qpairs", 00:11:09.558 "nvmf_subsystem_get_controllers", 00:11:09.558 "nvmf_get_stats", 00:11:09.558 "nvmf_get_transports", 00:11:09.558 "nvmf_create_transport", 00:11:09.558 "nvmf_get_targets", 00:11:09.558 "nvmf_delete_target", 00:11:09.558 "nvmf_create_target", 00:11:09.558 "nvmf_subsystem_allow_any_host", 00:11:09.558 "nvmf_subsystem_remove_host", 00:11:09.558 "nvmf_subsystem_add_host", 00:11:09.558 "nvmf_ns_remove_host", 00:11:09.558 "nvmf_ns_add_host", 00:11:09.558 "nvmf_subsystem_remove_ns", 00:11:09.558 "nvmf_subsystem_add_ns", 00:11:09.558 "nvmf_subsystem_listener_set_ana_state", 00:11:09.558 "nvmf_discovery_get_referrals", 00:11:09.558 "nvmf_discovery_remove_referral", 00:11:09.558 "nvmf_discovery_add_referral", 00:11:09.558 "nvmf_subsystem_remove_listener", 00:11:09.558 "nvmf_subsystem_add_listener", 00:11:09.558 "nvmf_delete_subsystem", 00:11:09.558 "nvmf_create_subsystem", 00:11:09.558 "nvmf_get_subsystems", 00:11:09.558 "nvmf_set_crdt", 00:11:09.558 "nvmf_set_config", 00:11:09.558 "nvmf_set_max_subsystems", 00:11:09.558 "iscsi_get_histogram", 00:11:09.558 "iscsi_enable_histogram", 00:11:09.558 "iscsi_set_options", 00:11:09.558 "iscsi_get_auth_groups", 00:11:09.558 "iscsi_auth_group_remove_secret", 00:11:09.558 "iscsi_auth_group_add_secret", 00:11:09.558 "iscsi_delete_auth_group", 00:11:09.558 "iscsi_create_auth_group", 00:11:09.558 "iscsi_set_discovery_auth", 00:11:09.558 "iscsi_get_options", 00:11:09.558 "iscsi_target_node_request_logout", 00:11:09.558 "iscsi_target_node_set_redirect", 00:11:09.558 "iscsi_target_node_set_auth", 00:11:09.558 "iscsi_target_node_add_lun", 00:11:09.558 "iscsi_get_stats", 00:11:09.558 "iscsi_get_connections", 00:11:09.558 "iscsi_portal_group_set_auth", 00:11:09.558 "iscsi_start_portal_group", 00:11:09.558 "iscsi_delete_portal_group", 00:11:09.558 "iscsi_create_portal_group", 00:11:09.558 "iscsi_get_portal_groups", 00:11:09.558 "iscsi_delete_target_node", 00:11:09.558 "iscsi_target_node_remove_pg_ig_maps", 00:11:09.558 "iscsi_target_node_add_pg_ig_maps", 00:11:09.558 "iscsi_create_target_node", 00:11:09.558 "iscsi_get_target_nodes", 00:11:09.558 "iscsi_delete_initiator_group", 00:11:09.558 "iscsi_initiator_group_remove_initiators", 00:11:09.558 "iscsi_initiator_group_add_initiators", 00:11:09.558 "iscsi_create_initiator_group", 00:11:09.558 "iscsi_get_initiator_groups", 00:11:09.558 "keyring_linux_set_options", 00:11:09.558 "keyring_file_remove_key", 00:11:09.558 "keyring_file_add_key", 00:11:09.558 "iaa_scan_accel_module", 00:11:09.558 "dsa_scan_accel_module", 00:11:09.558 "ioat_scan_accel_module", 00:11:09.558 "accel_error_inject_error", 00:11:09.558 
"bdev_iscsi_delete", 00:11:09.558 "bdev_iscsi_create", 00:11:09.558 "bdev_iscsi_set_options", 00:11:09.558 "bdev_virtio_attach_controller", 00:11:09.558 "bdev_virtio_scsi_get_devices", 00:11:09.558 "bdev_virtio_detach_controller", 00:11:09.558 "bdev_virtio_blk_set_hotplug", 00:11:09.558 "bdev_ftl_set_property", 00:11:09.558 "bdev_ftl_get_properties", 00:11:09.558 "bdev_ftl_get_stats", 00:11:09.558 "bdev_ftl_unmap", 00:11:09.558 "bdev_ftl_unload", 00:11:09.558 "bdev_ftl_delete", 00:11:09.558 "bdev_ftl_load", 00:11:09.558 "bdev_ftl_create", 00:11:09.558 "bdev_aio_delete", 00:11:09.558 "bdev_aio_rescan", 00:11:09.558 "bdev_aio_create", 00:11:09.558 "blobfs_create", 00:11:09.558 "blobfs_detect", 00:11:09.558 "blobfs_set_cache_size", 00:11:09.558 "bdev_zone_block_delete", 00:11:09.558 "bdev_zone_block_create", 00:11:09.558 "bdev_delay_delete", 00:11:09.558 "bdev_delay_create", 00:11:09.558 "bdev_delay_update_latency", 00:11:09.558 "bdev_split_delete", 00:11:09.558 "bdev_split_create", 00:11:09.558 "bdev_error_inject_error", 00:11:09.558 "bdev_error_delete", 00:11:09.558 "bdev_error_create", 00:11:09.558 "bdev_raid_set_options", 00:11:09.558 "bdev_raid_remove_base_bdev", 00:11:09.558 "bdev_raid_add_base_bdev", 00:11:09.558 "bdev_raid_delete", 00:11:09.558 "bdev_raid_create", 00:11:09.558 "bdev_raid_get_bdevs", 00:11:09.558 "bdev_lvol_set_parent_bdev", 00:11:09.558 "bdev_lvol_set_parent", 00:11:09.558 "bdev_lvol_check_shallow_copy", 00:11:09.558 "bdev_lvol_start_shallow_copy", 00:11:09.558 "bdev_lvol_grow_lvstore", 00:11:09.558 "bdev_lvol_get_lvols", 00:11:09.558 "bdev_lvol_get_lvstores", 00:11:09.558 "bdev_lvol_delete", 00:11:09.558 "bdev_lvol_set_read_only", 00:11:09.558 "bdev_lvol_resize", 00:11:09.558 "bdev_lvol_decouple_parent", 00:11:09.558 "bdev_lvol_inflate", 00:11:09.558 "bdev_lvol_rename", 00:11:09.558 "bdev_lvol_clone_bdev", 00:11:09.558 "bdev_lvol_clone", 00:11:09.558 "bdev_lvol_snapshot", 00:11:09.558 "bdev_lvol_create", 00:11:09.558 "bdev_lvol_delete_lvstore", 00:11:09.558 "bdev_lvol_rename_lvstore", 00:11:09.558 "bdev_lvol_create_lvstore", 00:11:09.558 "bdev_passthru_delete", 00:11:09.558 "bdev_passthru_create", 00:11:09.558 "bdev_nvme_cuse_unregister", 00:11:09.558 "bdev_nvme_cuse_register", 00:11:09.558 "bdev_opal_new_user", 00:11:09.558 "bdev_opal_set_lock_state", 00:11:09.558 "bdev_opal_delete", 00:11:09.558 "bdev_opal_get_info", 00:11:09.558 "bdev_opal_create", 00:11:09.558 "bdev_nvme_opal_revert", 00:11:09.558 "bdev_nvme_opal_init", 00:11:09.558 "bdev_nvme_send_cmd", 00:11:09.558 "bdev_nvme_get_path_iostat", 00:11:09.558 "bdev_nvme_get_mdns_discovery_info", 00:11:09.558 "bdev_nvme_stop_mdns_discovery", 00:11:09.558 "bdev_nvme_start_mdns_discovery", 00:11:09.558 "bdev_nvme_set_multipath_policy", 00:11:09.558 "bdev_nvme_set_preferred_path", 00:11:09.558 "bdev_nvme_get_io_paths", 00:11:09.558 "bdev_nvme_remove_error_injection", 00:11:09.558 "bdev_nvme_add_error_injection", 00:11:09.558 "bdev_nvme_get_discovery_info", 00:11:09.558 "bdev_nvme_stop_discovery", 00:11:09.558 "bdev_nvme_start_discovery", 00:11:09.559 "bdev_nvme_get_controller_health_info", 00:11:09.559 "bdev_nvme_disable_controller", 00:11:09.559 "bdev_nvme_enable_controller", 00:11:09.559 "bdev_nvme_reset_controller", 00:11:09.559 "bdev_nvme_get_transport_statistics", 00:11:09.559 "bdev_nvme_apply_firmware", 00:11:09.559 "bdev_nvme_detach_controller", 00:11:09.559 "bdev_nvme_get_controllers", 00:11:09.559 "bdev_nvme_attach_controller", 00:11:09.559 "bdev_nvme_set_hotplug", 00:11:09.559 "bdev_nvme_set_options", 
00:11:09.559 "bdev_null_resize", 00:11:09.559 "bdev_null_delete", 00:11:09.559 "bdev_null_create", 00:11:09.559 "bdev_malloc_delete", 00:11:09.559 "bdev_malloc_create" 00:11:09.559 ] 00:11:09.559 15:05:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:11:09.559 15:05:04 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:09.559 15:05:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.559 15:05:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:09.559 15:05:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 81710 00:11:09.559 15:05:04 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 81710 ']' 00:11:09.559 15:05:04 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 81710 00:11:09.559 15:05:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:11:09.559 15:05:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:09.559 15:05:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81710 00:11:09.559 15:05:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:09.559 15:05:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:09.559 killing process with pid 81710 00:11:09.559 15:05:04 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81710' 00:11:09.559 15:05:04 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 81710 00:11:09.559 15:05:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 81710 00:11:10.125 00:11:10.125 real 0m1.836s 00:11:10.125 user 0m3.244s 00:11:10.125 sys 0m0.582s 00:11:10.125 15:05:05 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:10.125 15:05:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.125 ************************************ 00:11:10.125 END TEST spdkcli_tcp 00:11:10.125 ************************************ 00:11:10.125 15:05:05 -- common/autotest_common.sh@1142 -- # return 0 00:11:10.125 15:05:05 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:10.125 15:05:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:10.125 15:05:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.125 15:05:05 -- common/autotest_common.sh@10 -- # set +x 00:11:10.125 ************************************ 00:11:10.125 START TEST dpdk_mem_utility 00:11:10.125 ************************************ 00:11:10.125 15:05:05 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:10.125 * Looking for test storage... 
00:11:10.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:10.125 15:05:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:10.125 15:05:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=81796 00:11:10.125 15:05:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 81796 00:11:10.125 15:05:05 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 81796 ']' 00:11:10.125 15:05:05 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.125 15:05:05 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:10.125 15:05:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:10.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.125 15:05:05 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.125 15:05:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:10.125 15:05:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:10.383 [2024-07-23 15:05:05.568595] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:11:10.383 [2024-07-23 15:05:05.569308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81796 ] 00:11:10.383 [2024-07-23 15:05:05.717592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.383 [2024-07-23 15:05:05.771607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.317 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.317 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:11:11.317 15:05:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:11.317 15:05:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:11.317 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.317 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:11.317 { 00:11:11.317 "filename": "/tmp/spdk_mem_dump.txt" 00:11:11.317 } 00:11:11.317 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.317 15:05:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:11.317 DPDK memory size 814.000000 MiB in 1 heap(s) 00:11:11.317 1 heaps totaling size 814.000000 MiB 00:11:11.317 size: 814.000000 MiB heap id: 0 00:11:11.317 end heaps---------- 00:11:11.317 8 mempools totaling size 598.116089 MiB 00:11:11.317 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:11.317 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:11.317 size: 84.521057 MiB name: bdev_io_81796 00:11:11.317 size: 51.011292 MiB name: evtpool_81796 00:11:11.317 size: 50.003479 MiB name: msgpool_81796 00:11:11.317 size: 21.763794 MiB name: PDU_Pool 00:11:11.317 size: 19.513306 MiB name: SCSI_TASK_Pool 
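With this target up, the dpdk_mem_utility test asks it to dump its DPDK memory state (env_dpdk_get_mem_stats, whose reply names /tmp/spdk_mem_dump.txt) and then runs dpdk_mem_info.py over that dump; the heap, mempool, memzone and per-element listing that follows is that script's output. A hedged sketch of those two steps, using only the script paths and flags visible in the trace:

    #!/usr/bin/env bash
    set -euo pipefail
    rootdir=/home/vagrant/spdk_repo/spdk

    # Ask the running target to dump DPDK memory statistics; the RPC replies
    # with the file it wrote (/tmp/spdk_mem_dump.txt in this run).
    "$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats

    # Summarize the dump: heaps, mempools and memzones...
    "$rootdir/scripts/dpdk_mem_info.py"

    # ...and the per-malloc-element view for heap 0, as the test does next.
    "$rootdir/scripts/dpdk_mem_info.py" -m 0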
00:11:11.317 size: 0.026123 MiB name: Session_Pool 00:11:11.317 end mempools------- 00:11:11.317 6 memzones totaling size 4.142822 MiB 00:11:11.317 size: 1.000366 MiB name: RG_ring_0_81796 00:11:11.317 size: 1.000366 MiB name: RG_ring_1_81796 00:11:11.317 size: 1.000366 MiB name: RG_ring_4_81796 00:11:11.317 size: 1.000366 MiB name: RG_ring_5_81796 00:11:11.317 size: 0.125366 MiB name: RG_ring_2_81796 00:11:11.317 size: 0.015991 MiB name: RG_ring_3_81796 00:11:11.317 end memzones------- 00:11:11.317 15:05:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:11.317 heap id: 0 total size: 814.000000 MiB number of busy elements: 306 number of free elements: 15 00:11:11.317 list of free elements. size: 12.470825 MiB 00:11:11.317 element at address: 0x200000400000 with size: 1.999512 MiB 00:11:11.317 element at address: 0x200018e00000 with size: 0.999878 MiB 00:11:11.317 element at address: 0x200019000000 with size: 0.999878 MiB 00:11:11.317 element at address: 0x200003e00000 with size: 0.996277 MiB 00:11:11.317 element at address: 0x200031c00000 with size: 0.994446 MiB 00:11:11.317 element at address: 0x200013800000 with size: 0.978699 MiB 00:11:11.317 element at address: 0x200007000000 with size: 0.959839 MiB 00:11:11.317 element at address: 0x200019200000 with size: 0.936584 MiB 00:11:11.317 element at address: 0x200000200000 with size: 0.833191 MiB 00:11:11.317 element at address: 0x20001aa00000 with size: 0.567505 MiB 00:11:11.317 element at address: 0x20000b200000 with size: 0.489624 MiB 00:11:11.317 element at address: 0x200000800000 with size: 0.486145 MiB 00:11:11.317 element at address: 0x200019400000 with size: 0.485657 MiB 00:11:11.317 element at address: 0x200027e00000 with size: 0.395752 MiB 00:11:11.317 element at address: 0x200003a00000 with size: 0.347839 MiB 00:11:11.317 list of standard malloc elements. 
size: 199.266602 MiB 00:11:11.317 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:11:11.317 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:11:11.317 element at address: 0x200018efff80 with size: 1.000122 MiB 00:11:11.317 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:11:11.317 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:11:11.317 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:11:11.317 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:11:11.317 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:11:11.317 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:11:11.317 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:11:11.317 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:11:11.318 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000087c740 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000087c800 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000087c980 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59180 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59240 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59300 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59480 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59540 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59600 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59780 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59840 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59900 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:11:11.318 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003adb300 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003adb500 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003affa80 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003affb40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:11:11.318 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa91480 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa91c00 
with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:11:11.318 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa940c0 with size: 0.000183 MiB 
00:11:11.319 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:11:11.319 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e65500 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:11:11.319 element at 
address: 0x200027e6d2c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6f780 
with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:11:11.319 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:11:11.319 list of memzone associated elements. size: 602.262573 MiB 00:11:11.319 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:11:11.319 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:11.319 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:11:11.319 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:11.319 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:11:11.319 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_81796_0 00:11:11.319 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:11:11.319 associated memzone info: size: 48.002930 MiB name: MP_evtpool_81796_0 00:11:11.319 element at address: 0x200003fff380 with size: 48.003052 MiB 00:11:11.319 associated memzone info: size: 48.002930 MiB name: MP_msgpool_81796_0 00:11:11.319 element at address: 0x2000195be940 with size: 20.255554 MiB 00:11:11.319 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:11.319 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:11:11.319 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:11.319 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:11:11.319 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_81796 00:11:11.319 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:11:11.320 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_81796 00:11:11.320 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:11:11.320 associated memzone info: size: 1.007996 MiB name: MP_evtpool_81796 00:11:11.320 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:11:11.320 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:11.320 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:11:11.320 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:11.320 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:11:11.320 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:11.320 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:11:11.320 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:11.320 element at address: 0x200003eff180 with size: 1.000488 MiB 00:11:11.320 associated memzone info: size: 1.000366 MiB name: RG_ring_0_81796 00:11:11.320 element at address: 0x200003affc00 with size: 1.000488 MiB 00:11:11.320 associated memzone info: size: 1.000366 MiB name: RG_ring_1_81796 00:11:11.320 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:11:11.320 associated memzone info: size: 1.000366 MiB name: RG_ring_4_81796 00:11:11.320 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:11:11.320 associated 
memzone info: size: 1.000366 MiB name: RG_ring_5_81796 00:11:11.320 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:11:11.320 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_81796 00:11:11.320 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:11:11.320 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:11.320 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:11:11.320 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:11.320 element at address: 0x20001947c540 with size: 0.250488 MiB 00:11:11.320 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:11:11.320 element at address: 0x200003adf880 with size: 0.125488 MiB 00:11:11.320 associated memzone info: size: 0.125366 MiB name: RG_ring_2_81796 00:11:11.320 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:11:11.320 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:11.320 element at address: 0x200027e65680 with size: 0.023743 MiB 00:11:11.320 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:11.320 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:11:11.320 associated memzone info: size: 0.015991 MiB name: RG_ring_3_81796 00:11:11.320 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:11:11.320 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:11.320 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:11:11.320 associated memzone info: size: 0.000183 MiB name: MP_msgpool_81796 00:11:11.320 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:11:11.320 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_81796 00:11:11.320 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:11:11.320 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:11.320 15:05:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:11.320 15:05:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 81796 00:11:11.320 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 81796 ']' 00:11:11.320 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 81796 00:11:11.320 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:11:11.320 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:11.320 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81796 00:11:11.320 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:11.320 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:11.320 killing process with pid 81796 00:11:11.320 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81796' 00:11:11.320 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 81796 00:11:11.320 15:05:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 81796 00:11:11.887 00:11:11.887 real 0m1.671s 00:11:11.887 user 0m1.767s 00:11:11.887 sys 0m0.475s 00:11:11.887 15:05:07 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.887 15:05:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:11.887 ************************************ 00:11:11.887 END TEST dpdk_mem_utility 00:11:11.887 ************************************ 
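The element dump above is easier to audit in aggregate than entry by entry. A minimal sketch of summing it offline, assuming the console output was saved to mem_info.log (a hypothetical filename); only the "with size:" figures are totalled, so the memzone summary lines are not double-counted:

    # Count the dumped elements and total their reported sizes.
    # mem_info.log is a placeholder for wherever this console output was captured.
    awk '{
           for (i = 3; i <= NF; i++)
             if ($(i-2) == "with" && $(i-1) == "size:") { total += $i; count++ }
         }
         END { printf "elements: %d, total: %.6f MiB\n", count, total }' mem_info.log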
00:11:11.887 15:05:07 -- common/autotest_common.sh@1142 -- # return 0 00:11:11.887 15:05:07 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:11.887 15:05:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:11.887 15:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.887 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:11:11.887 ************************************ 00:11:11.887 START TEST event 00:11:11.887 ************************************ 00:11:11.887 15:05:07 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:11.887 * Looking for test storage... 00:11:11.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:11.887 15:05:07 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:11.887 15:05:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:11:11.887 15:05:07 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:11.887 15:05:07 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:11.887 15:05:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.887 15:05:07 event -- common/autotest_common.sh@10 -- # set +x 00:11:11.887 ************************************ 00:11:11.887 START TEST event_perf 00:11:11.887 ************************************ 00:11:11.887 15:05:07 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:11.887 Running I/O for 1 seconds...[2024-07-23 15:05:07.281174] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:11:11.887 [2024-07-23 15:05:07.281493] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81874 ] 00:11:12.147 [2024-07-23 15:05:07.439592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.147 [2024-07-23 15:05:07.498443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.147 [2024-07-23 15:05:07.498335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.147 [2024-07-23 15:05:07.498336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.147 Running I/O for 1 seconds...[2024-07-23 15:05:07.498621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.520 00:11:13.520 lcore 0: 172711 00:11:13.520 lcore 1: 172709 00:11:13.520 lcore 2: 172711 00:11:13.520 lcore 3: 172712 00:11:13.520 done. 
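The lcore counters printed above come from a one-second run of the event_perf binary on a 0xF core mask. A minimal sketch of repeating the run and totalling the per-core counts, assuming a built SPDK tree at $SPDK_DIR (placeholder) and that the tool prints "lcore N: count" lines as it does here:

    # Re-run the one-second, four-core event_perf benchmark and sum the per-lcore counters.
    # SPDK_DIR is a placeholder for a built SPDK source tree.
    "$SPDK_DIR"/test/event/event_perf/event_perf -m 0xF -t 1 | tee event_perf.out
    awk '$1 == "lcore" { total += $3 } END { print "total events:", total }' event_perf.out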
00:11:13.520 00:11:13.520 real 0m1.358s 00:11:13.520 user 0m4.137s 00:11:13.520 sys 0m0.121s 00:11:13.520 15:05:08 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:13.520 15:05:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:11:13.520 ************************************ 00:11:13.520 END TEST event_perf 00:11:13.520 ************************************ 00:11:13.520 15:05:08 event -- common/autotest_common.sh@1142 -- # return 0 00:11:13.520 15:05:08 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:13.520 15:05:08 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:13.520 15:05:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.520 15:05:08 event -- common/autotest_common.sh@10 -- # set +x 00:11:13.520 ************************************ 00:11:13.520 START TEST event_reactor 00:11:13.520 ************************************ 00:11:13.520 15:05:08 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:13.520 [2024-07-23 15:05:08.680086] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:11:13.520 [2024-07-23 15:05:08.680306] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81914 ] 00:11:13.520 [2024-07-23 15:05:08.826588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.520 [2024-07-23 15:05:08.881053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.896 test_start 00:11:14.896 oneshot 00:11:14.896 tick 100 00:11:14.896 tick 100 00:11:14.896 tick 250 00:11:14.896 tick 100 00:11:14.896 tick 100 00:11:14.896 tick 100 00:11:14.896 tick 250 00:11:14.896 tick 500 00:11:14.896 tick 100 00:11:14.896 tick 100 00:11:14.896 tick 250 00:11:14.896 tick 100 00:11:14.896 tick 100 00:11:14.896 test_end 00:11:14.896 00:11:14.896 real 0m1.331s 00:11:14.896 user 0m1.137s 00:11:14.896 sys 0m0.093s 00:11:14.896 15:05:09 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.896 15:05:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:11:14.896 ************************************ 00:11:14.896 END TEST event_reactor 00:11:14.896 ************************************ 00:11:14.896 15:05:10 event -- common/autotest_common.sh@1142 -- # return 0 00:11:14.896 15:05:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:14.896 15:05:10 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:14.896 15:05:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.896 15:05:10 event -- common/autotest_common.sh@10 -- # set +x 00:11:14.896 ************************************ 00:11:14.896 START TEST event_reactor_perf 00:11:14.896 ************************************ 00:11:14.896 15:05:10 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:14.896 [2024-07-23 15:05:10.072922] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:11:14.896 [2024-07-23 15:05:10.073084] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81945 ] 00:11:14.896 [2024-07-23 15:05:10.219469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.896 [2024-07-23 15:05:10.272394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.269 test_start 00:11:16.269 test_end 00:11:16.269 Performance: 323594 events per second 00:11:16.269 00:11:16.269 real 0m1.334s 00:11:16.269 user 0m1.140s 00:11:16.269 sys 0m0.093s 00:11:16.269 15:05:11 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:16.269 15:05:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:11:16.269 ************************************ 00:11:16.269 END TEST event_reactor_perf 00:11:16.269 ************************************ 00:11:16.269 15:05:11 event -- common/autotest_common.sh@1142 -- # return 0 00:11:16.269 15:05:11 event -- event/event.sh@49 -- # uname -s 00:11:16.269 15:05:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:16.269 15:05:11 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:16.269 15:05:11 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:16.269 15:05:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.269 15:05:11 event -- common/autotest_common.sh@10 -- # set +x 00:11:16.269 ************************************ 00:11:16.269 START TEST event_scheduler 00:11:16.269 ************************************ 00:11:16.269 15:05:11 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:16.269 * Looking for test storage... 00:11:16.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:16.269 15:05:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:16.269 15:05:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=82006 00:11:16.269 15:05:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:16.269 15:05:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:16.269 15:05:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 82006 00:11:16.269 15:05:11 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 82006 ']' 00:11:16.269 15:05:11 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.269 15:05:11 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:16.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.269 15:05:11 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.269 15:05:11 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:16.269 15:05:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:16.269 [2024-07-23 15:05:11.596458] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:11:16.269 [2024-07-23 15:05:11.596625] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82006 ] 00:11:16.527 [2024-07-23 15:05:11.750307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.527 [2024-07-23 15:05:11.844241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.527 [2024-07-23 15:05:11.844511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.527 [2024-07-23 15:05:11.844545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.527 [2024-07-23 15:05:11.844583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.461 15:05:12 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.461 15:05:12 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:11:17.461 15:05:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:17.461 15:05:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.461 15:05:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:17.461 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:17.461 POWER: Cannot set governor of lcore 0 to userspace 00:11:17.461 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:17.461 POWER: Cannot set governor of lcore 0 to performance 00:11:17.461 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:17.461 POWER: Cannot set governor of lcore 0 to userspace 00:11:17.461 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:17.461 POWER: Unable to set Power Management Environment for lcore 0 00:11:17.461 [2024-07-23 15:05:12.636236] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:11:17.461 [2024-07-23 15:05:12.636294] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:11:17.461 [2024-07-23 15:05:12.636339] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:11:17.461 [2024-07-23 15:05:12.636384] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:11:17.461 [2024-07-23 15:05:12.636418] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:11:17.461 [2024-07-23 15:05:12.636439] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:11:17.461 15:05:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.461 15:05:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:17.461 15:05:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.461 15:05:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:17.461 [2024-07-23 15:05:12.714095] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:11:17.461 15:05:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.462 15:05:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:17.462 15:05:12 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:17.462 15:05:12 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.462 15:05:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:17.462 ************************************ 00:11:17.462 START TEST scheduler_create_thread 00:11:17.462 ************************************ 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.462 2 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.462 3 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.462 4 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.462 5 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.462 6 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.462 7 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.462 8 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.462 9 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.462 10 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.462 15:05:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:18.864 15:05:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.864 15:05:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:18.864 15:05:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:18.864 15:05:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.864 15:05:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:20.238 15:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.238 00:11:20.238 real 0m2.613s 00:11:20.238 user 0m0.017s 00:11:20.238 sys 0m0.007s 00:11:20.238 15:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.238 ************************************ 00:11:20.238 END TEST scheduler_create_thread 00:11:20.238 ************************************ 00:11:20.238 15:05:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:20.238 15:05:15 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:11:20.238 15:05:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:20.238 15:05:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 82006 00:11:20.238 15:05:15 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 82006 ']' 00:11:20.238 15:05:15 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 82006 00:11:20.238 15:05:15 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:11:20.238 15:05:15 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:20.238 15:05:15 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82006 00:11:20.238 15:05:15 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:11:20.238 killing process with pid 82006 00:11:20.238 15:05:15 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:11:20.238 15:05:15 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82006' 00:11:20.238 15:05:15 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 82006 00:11:20.238 15:05:15 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 82006 00:11:20.496 [2024-07-23 15:05:15.819316] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
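The scheduler_create_thread test drives the scheduler through the test app's own plugin RPCs: it creates pinned busy and idle threads on each core, then a thread it throttles to 50% active and one it deletes outright. A condensed sketch of that last part, assuming the scheduler test application from the log is still running and that rpc_cmd is the harness wrapper around scripts/rpc.py:

    # The scheduler_plugin RPCs are registered by the scheduler test app, not by a stock target.
    # scheduler_thread_create prints the new thread id, which the follow-up calls consume.
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tid"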
00:11:20.753 00:11:20.753 real 0m4.668s 00:11:20.753 user 0m8.879s 00:11:20.754 sys 0m0.493s 00:11:20.754 15:05:16 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.754 15:05:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:20.754 ************************************ 00:11:20.754 END TEST event_scheduler 00:11:20.754 ************************************ 00:11:20.754 15:05:16 event -- common/autotest_common.sh@1142 -- # return 0 00:11:20.754 15:05:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:20.754 15:05:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:20.754 15:05:16 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:20.754 15:05:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.754 15:05:16 event -- common/autotest_common.sh@10 -- # set +x 00:11:20.754 ************************************ 00:11:20.754 START TEST app_repeat 00:11:20.754 ************************************ 00:11:20.754 15:05:16 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:11:20.754 15:05:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:20.754 15:05:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:20.754 15:05:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:20.754 15:05:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:20.754 15:05:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:20.754 15:05:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:20.754 15:05:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:20.754 15:05:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=82108 00:11:20.754 15:05:16 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:20.754 15:05:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:20.754 Process app_repeat pid: 82108 00:11:20.754 15:05:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 82108' 00:11:20.754 15:05:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:20.754 spdk_app_start Round 0 00:11:20.754 15:05:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:20.754 15:05:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 82108 /var/tmp/spdk-nbd.sock 00:11:20.754 15:05:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 82108 ']' 00:11:20.754 15:05:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:20.754 15:05:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:20.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:20.754 15:05:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:20.754 15:05:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:20.754 15:05:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:21.012 [2024-07-23 15:05:16.224126] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:11:21.012 [2024-07-23 15:05:16.225078] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82108 ] 00:11:21.012 [2024-07-23 15:05:16.387509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:21.270 [2024-07-23 15:05:16.448325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.270 [2024-07-23 15:05:16.448400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.836 15:05:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.836 15:05:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:11:21.836 15:05:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:22.094 Malloc0 00:11:22.094 15:05:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:22.353 Malloc1 00:11:22.353 15:05:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:22.353 15:05:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:22.611 /dev/nbd0 00:11:22.611 15:05:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:22.611 15:05:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:22.611 15:05:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:22.611 15:05:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:11:22.611 15:05:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:22.611 15:05:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:22.611 15:05:17 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:22.611 15:05:17 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:11:22.611 15:05:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:22.611 15:05:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:22.611 15:05:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:22.611 1+0 records in 00:11:22.611 1+0 records out 00:11:22.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329801 s, 12.4 MB/s 00:11:22.611 15:05:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:22.611 15:05:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:11:22.611 15:05:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:22.611 15:05:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:22.611 15:05:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:11:22.611 15:05:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:22.611 15:05:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:22.611 15:05:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:22.868 /dev/nbd1 00:11:22.868 15:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:22.868 15:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:22.868 15:05:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:22.868 15:05:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:11:22.868 15:05:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:22.868 15:05:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:22.868 15:05:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:22.868 15:05:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:11:22.868 15:05:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:22.868 15:05:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:22.868 15:05:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:22.868 1+0 records in 00:11:22.868 1+0 records out 00:11:22.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292954 s, 14.0 MB/s 00:11:22.868 15:05:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:22.868 15:05:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:11:22.868 15:05:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:22.868 15:05:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:22.868 15:05:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:11:22.868 15:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:22.869 15:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:22.869 15:05:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:22.869 15:05:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:11:22.869 15:05:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:23.126 { 00:11:23.126 "nbd_device": "/dev/nbd0", 00:11:23.126 "bdev_name": "Malloc0" 00:11:23.126 }, 00:11:23.126 { 00:11:23.126 "nbd_device": "/dev/nbd1", 00:11:23.126 "bdev_name": "Malloc1" 00:11:23.126 } 00:11:23.126 ]' 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:23.126 { 00:11:23.126 "nbd_device": "/dev/nbd0", 00:11:23.126 "bdev_name": "Malloc0" 00:11:23.126 }, 00:11:23.126 { 00:11:23.126 "nbd_device": "/dev/nbd1", 00:11:23.126 "bdev_name": "Malloc1" 00:11:23.126 } 00:11:23.126 ]' 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:23.126 /dev/nbd1' 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:23.126 /dev/nbd1' 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:23.126 256+0 records in 00:11:23.126 256+0 records out 00:11:23.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526623 s, 199 MB/s 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:23.126 256+0 records in 00:11:23.126 256+0 records out 00:11:23.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290015 s, 36.2 MB/s 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:23.126 256+0 records in 00:11:23.126 256+0 records out 00:11:23.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0325815 s, 32.2 MB/s 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:23.126 15:05:18 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.126 15:05:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:23.384 15:05:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:23.384 15:05:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:23.384 15:05:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:23.384 15:05:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.384 15:05:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.384 15:05:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:23.384 15:05:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:23.384 15:05:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.384 15:05:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.384 15:05:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:23.642 15:05:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:23.642 15:05:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:23.642 15:05:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:23.642 15:05:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.642 15:05:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.642 15:05:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:23.642 15:05:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:23.642 15:05:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.642 15:05:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:23.642 15:05:18 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.642 15:05:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:23.900 15:05:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:23.900 15:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:23.900 15:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:23.900 15:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:23.900 15:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:23.900 15:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:23.900 15:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:23.900 15:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:23.900 15:05:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:23.900 15:05:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:23.900 15:05:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:23.900 15:05:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:23.900 15:05:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:24.159 15:05:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:24.417 [2024-07-23 15:05:19.728696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:24.417 [2024-07-23 15:05:19.782555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.417 [2024-07-23 15:05:19.782555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.417 [2024-07-23 15:05:19.829065] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:24.417 [2024-07-23 15:05:19.829178] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:27.699 15:05:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:27.700 spdk_app_start Round 1 00:11:27.700 15:05:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:27.700 15:05:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 82108 /var/tmp/spdk-nbd.sock 00:11:27.700 15:05:22 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 82108 ']' 00:11:27.700 15:05:22 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:27.700 15:05:22 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:27.700 15:05:22 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
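Each app_repeat round repeats the NBD round-trip shown above: create two 64 MiB malloc bdevs over the /var/tmp/spdk-nbd.sock socket, export them as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through each, and compare it back before tearing the disks down. A minimal single-disk sketch of that flow, assuming root privileges, a loaded nbd kernel module, and $SPDK_DIR as a placeholder for the repository:

    # One app_repeat-style NBD write/verify round for a single malloc bdev.
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096            # 64 MiB bdev, 4 KiB blocks; prints its name (Malloc0)
    $RPC nbd_start_disk Malloc0 /dev/nbd0      # export the bdev as an NBD block device
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0    # data read back must match what was written
    $RPC nbd_stop_disk /dev/nbd0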
00:11:27.700 15:05:22 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.700 15:05:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:27.700 15:05:22 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.700 15:05:22 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:11:27.700 15:05:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:27.700 Malloc0 00:11:27.700 15:05:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:27.958 Malloc1 00:11:27.958 15:05:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:27.958 /dev/nbd0 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:27.958 15:05:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:27.958 15:05:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:11:27.958 15:05:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:27.958 15:05:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:27.958 15:05:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:27.958 15:05:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:11:27.958 15:05:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:27.958 15:05:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:27.958 15:05:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:27.958 1+0 records in 00:11:27.958 1+0 records out 
00:11:27.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315408 s, 13.0 MB/s 00:11:27.958 15:05:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:27.958 15:05:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:11:27.958 15:05:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:27.958 15:05:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:27.958 15:05:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:27.958 15:05:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:28.215 /dev/nbd1 00:11:28.215 15:05:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:28.215 15:05:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:28.215 15:05:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:28.215 15:05:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:11:28.215 15:05:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:28.215 15:05:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:28.215 15:05:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:28.215 15:05:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:11:28.215 15:05:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:28.215 15:05:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:28.215 15:05:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:28.215 1+0 records in 00:11:28.215 1+0 records out 00:11:28.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265854 s, 15.4 MB/s 00:11:28.215 15:05:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:28.215 15:05:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:11:28.215 15:05:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:28.215 15:05:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:28.215 15:05:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:11:28.215 15:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.215 15:05:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:28.215 15:05:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:28.215 15:05:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.215 15:05:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:28.472 { 00:11:28.472 "nbd_device": "/dev/nbd0", 00:11:28.472 "bdev_name": "Malloc0" 00:11:28.472 }, 00:11:28.472 { 00:11:28.472 "nbd_device": "/dev/nbd1", 00:11:28.472 "bdev_name": "Malloc1" 00:11:28.472 } 
00:11:28.472 ]' 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:28.472 { 00:11:28.472 "nbd_device": "/dev/nbd0", 00:11:28.472 "bdev_name": "Malloc0" 00:11:28.472 }, 00:11:28.472 { 00:11:28.472 "nbd_device": "/dev/nbd1", 00:11:28.472 "bdev_name": "Malloc1" 00:11:28.472 } 00:11:28.472 ]' 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:28.472 /dev/nbd1' 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:28.472 /dev/nbd1' 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:28.472 256+0 records in 00:11:28.472 256+0 records out 00:11:28.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00537469 s, 195 MB/s 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:28.472 256+0 records in 00:11:28.472 256+0 records out 00:11:28.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296288 s, 35.4 MB/s 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:28.472 15:05:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:28.729 256+0 records in 00:11:28.729 256+0 records out 00:11:28.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022541 s, 46.5 MB/s 00:11:28.729 15:05:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:28.729 15:05:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:28.729 15:05:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:28.729 15:05:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:28.729 15:05:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:28.729 15:05:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:28.729 15:05:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:28.729 15:05:23 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:11:28.729 15:05:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:28.729 15:05:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:28.729 15:05:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:28.729 15:05:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:28.730 15:05:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:28.730 15:05:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.730 15:05:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:28.730 15:05:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:28.730 15:05:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:28.730 15:05:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.730 15:05:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:28.730 15:05:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:28.730 15:05:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:28.730 15:05:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:28.730 15:05:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.730 15:05:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.730 15:05:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:28.730 15:05:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:28.730 15:05:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.730 15:05:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.730 15:05:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:28.987 15:05:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:28.987 15:05:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:28.987 15:05:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:28.987 15:05:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.987 15:05:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.987 15:05:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:28.987 15:05:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:28.987 15:05:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.987 15:05:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:28.987 15:05:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.987 15:05:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:29.244 15:05:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:29.244 15:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:29.244 15:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:11:29.244 15:05:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:29.244 15:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:29.244 15:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:29.244 15:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:29.244 15:05:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:29.244 15:05:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:29.244 15:05:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:29.244 15:05:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:29.244 15:05:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:29.244 15:05:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:29.513 15:05:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:29.812 [2024-07-23 15:05:25.081488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:29.812 [2024-07-23 15:05:25.127900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.812 [2024-07-23 15:05:25.127909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.812 [2024-07-23 15:05:25.171737] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:29.812 [2024-07-23 15:05:25.171822] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:33.095 spdk_app_start Round 2 00:11:33.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:33.096 15:05:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:33.096 15:05:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:33.096 15:05:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 82108 /var/tmp/spdk-nbd.sock 00:11:33.096 15:05:27 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 82108 ']' 00:11:33.096 15:05:27 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:33.096 15:05:27 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:33.096 15:05:27 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
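The nbd_dd_data_verify steps traced above (and repeated in the next round) fill a 1 MiB scratch file from /dev/urandom, write it onto each exported NBD device with direct I/O, then byte-compare the first 1 MiB of every device against the scratch file before deleting it. Roughly, with the paths and device names used in this log:

    # Write a random 1 MiB pattern to both NBD devices and verify it back.
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 256 x 4 KiB = 1 MiB
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write pass
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                             # verify pass; any mismatch fails
    done
    rm "$tmp_file"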
00:11:33.096 15:05:27 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:33.096 15:05:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:33.096 15:05:28 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:33.096 15:05:28 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:11:33.096 15:05:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:33.096 Malloc0 00:11:33.096 15:05:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:33.354 Malloc1 00:11:33.354 15:05:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:33.354 15:05:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:33.612 /dev/nbd0 00:11:33.612 15:05:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:33.612 15:05:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:33.612 15:05:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:33.612 15:05:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:11:33.612 15:05:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:33.612 15:05:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:33.612 15:05:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:33.612 15:05:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:11:33.612 15:05:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:33.612 15:05:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:33.612 15:05:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:33.612 1+0 records in 00:11:33.612 1+0 records out 
00:11:33.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00267541 s, 1.5 MB/s 00:11:33.612 15:05:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:33.613 15:05:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:11:33.613 15:05:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:33.613 15:05:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:33.613 15:05:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:11:33.613 15:05:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.613 15:05:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:33.613 15:05:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:33.871 /dev/nbd1 00:11:33.871 15:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:33.871 15:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:33.871 15:05:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:33.871 15:05:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:11:33.871 15:05:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:33.871 15:05:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:33.871 15:05:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:33.871 15:05:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:11:33.871 15:05:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:33.871 15:05:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:33.871 15:05:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:33.871 1+0 records in 00:11:33.871 1+0 records out 00:11:33.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028579 s, 14.3 MB/s 00:11:33.871 15:05:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:33.871 15:05:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:11:33.871 15:05:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:33.871 15:05:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:33.871 15:05:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:11:33.871 15:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.871 15:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:33.871 15:05:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:33.871 15:05:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:33.871 15:05:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:34.130 { 00:11:34.130 "nbd_device": "/dev/nbd0", 00:11:34.130 "bdev_name": "Malloc0" 00:11:34.130 }, 00:11:34.130 { 00:11:34.130 "nbd_device": "/dev/nbd1", 00:11:34.130 "bdev_name": "Malloc1" 00:11:34.130 } 00:11:34.130 
]' 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:34.130 { 00:11:34.130 "nbd_device": "/dev/nbd0", 00:11:34.130 "bdev_name": "Malloc0" 00:11:34.130 }, 00:11:34.130 { 00:11:34.130 "nbd_device": "/dev/nbd1", 00:11:34.130 "bdev_name": "Malloc1" 00:11:34.130 } 00:11:34.130 ]' 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:34.130 /dev/nbd1' 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:34.130 /dev/nbd1' 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:34.130 256+0 records in 00:11:34.130 256+0 records out 00:11:34.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00889157 s, 118 MB/s 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:34.130 256+0 records in 00:11:34.130 256+0 records out 00:11:34.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229654 s, 45.7 MB/s 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:34.130 15:05:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:34.389 256+0 records in 00:11:34.389 256+0 records out 00:11:34.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0330077 s, 31.8 MB/s 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.389 15:05:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:34.647 15:05:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:34.647 15:05:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:34.647 15:05:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:34.647 15:05:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.647 15:05:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.647 15:05:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:34.647 15:05:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:34.647 15:05:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.647 15:05:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.647 15:05:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:34.906 15:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:34.906 15:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:34.906 15:05:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:34.907 15:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.907 15:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.907 15:05:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:34.907 15:05:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:34.907 15:05:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.907 15:05:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:34.907 15:05:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:34.907 15:05:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:35.165 15:05:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:35.166 15:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:35.166 15:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 
00:11:35.166 15:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:35.166 15:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:35.166 15:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:35.166 15:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:35.166 15:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:35.166 15:05:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:35.166 15:05:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:35.166 15:05:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:35.166 15:05:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:35.166 15:05:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:35.424 15:05:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:35.683 [2024-07-23 15:05:30.872781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:35.683 [2024-07-23 15:05:30.928048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.683 [2024-07-23 15:05:30.928049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.683 [2024-07-23 15:05:30.974483] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:35.683 [2024-07-23 15:05:30.974579] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:38.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:38.970 15:05:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 82108 /var/tmp/spdk-nbd.sock 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 82108 ']' 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
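waitfornbd, seen after each nbd_start_disk call above, polls /proc/partitions until the kernel registers the new nbd device (the trace shows a retry bound of 20), then issues a single 4 KiB direct read and checks that something actually landed in the probe file. A sketch of that helper; the sleep between retries is an assumption, since the trace only shows the partition check and the loop bound:

    # Wait for /dev/<nbd_name> to become usable, then probe it with one direct read.
    waitfornbd_sketch() {
        local nbd_name=$1 i size
        local probe=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off; not visible in the trace
        done
        dd if=/dev/"$nbd_name" of="$probe" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$probe")
        rm -f "$probe"
        [ "$size" != 0 ]    # a zero-byte read would mean the device is not ready
    }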
00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:11:38.970 15:05:33 event.app_repeat -- event/event.sh@39 -- # killprocess 82108 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 82108 ']' 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 82108 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82108 00:11:38.970 killing process with pid 82108 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82108' 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@967 -- # kill 82108 00:11:38.970 15:05:33 event.app_repeat -- common/autotest_common.sh@972 -- # wait 82108 00:11:38.970 spdk_app_start is called in Round 0. 00:11:38.970 Shutdown signal received, stop current app iteration 00:11:38.970 Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 reinitialization... 00:11:38.970 spdk_app_start is called in Round 1. 00:11:38.970 Shutdown signal received, stop current app iteration 00:11:38.970 Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 reinitialization... 00:11:38.970 spdk_app_start is called in Round 2. 00:11:38.970 Shutdown signal received, stop current app iteration 00:11:38.970 Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 reinitialization... 00:11:38.970 spdk_app_start is called in Round 3. 
00:11:38.970 Shutdown signal received, stop current app iteration 00:11:38.970 15:05:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:38.970 15:05:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:38.970 00:11:38.970 real 0m18.015s 00:11:38.970 user 0m39.723s 00:11:38.970 sys 0m3.149s 00:11:38.970 15:05:34 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:38.970 15:05:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:38.970 ************************************ 00:11:38.970 END TEST app_repeat 00:11:38.970 ************************************ 00:11:38.970 15:05:34 event -- common/autotest_common.sh@1142 -- # return 0 00:11:38.970 15:05:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:38.970 15:05:34 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:38.970 15:05:34 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:38.970 15:05:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.970 15:05:34 event -- common/autotest_common.sh@10 -- # set +x 00:11:38.970 ************************************ 00:11:38.970 START TEST cpu_locks 00:11:38.970 ************************************ 00:11:38.970 15:05:34 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:38.970 * Looking for test storage... 00:11:38.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:38.970 15:05:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:38.970 15:05:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:38.970 15:05:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:38.970 15:05:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:38.970 15:05:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:38.970 15:05:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.970 15:05:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:38.970 ************************************ 00:11:38.970 START TEST default_locks 00:11:38.970 ************************************ 00:11:38.970 15:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:11:38.970 15:05:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=82575 00:11:38.970 15:05:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 82575 00:11:38.970 15:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 82575 ']' 00:11:38.970 15:05:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:38.970 15:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.970 15:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.970 15:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
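killprocess, used above to bring down the app_repeat instance (pid 82108) and reused throughout the cpu_locks tests that follow, is deliberately defensive: it first verifies the pid is still alive with kill -0, reads the process name with ps so a recycled pid is not killed by mistake, then sends SIGTERM and reaps the pid. A simplified sketch of what the trace shows (the real helper also special-cases processes launched via sudo):

    killprocess_sketch() {
        local pid=$1 process_name
        kill -0 "$pid"                                   # fail fast if the pid is already gone
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for an SPDK target
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # reap it; tolerate the SIGTERM exit status
    }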
00:11:38.970 15:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.970 15:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:39.229 [2024-07-23 15:05:34.423115] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:11:39.229 [2024-07-23 15:05:34.423336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82575 ] 00:11:39.229 [2024-07-23 15:05:34.576639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.229 [2024-07-23 15:05:34.627755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.162 15:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.162 15:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:11:40.162 15:05:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 82575 00:11:40.162 15:05:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 82575 00:11:40.162 15:05:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:40.421 15:05:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 82575 00:11:40.421 15:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 82575 ']' 00:11:40.421 15:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 82575 00:11:40.421 15:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:11:40.421 15:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:40.421 15:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82575 00:11:40.421 15:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:40.421 15:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:40.421 killing process with pid 82575 00:11:40.421 15:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82575' 00:11:40.421 15:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 82575 00:11:40.421 15:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 82575 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 82575 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 82575 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 82575 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 82575 ']' 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:40.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:40.987 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (82575) - No such process 00:11:40.987 ERROR: process (pid: 82575) is no longer running 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:40.987 00:11:40.987 real 0m1.899s 00:11:40.987 user 0m2.021s 00:11:40.987 sys 0m0.625s 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.987 15:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:40.987 ************************************ 00:11:40.987 END TEST default_locks 00:11:40.987 ************************************ 00:11:40.987 15:05:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:40.987 15:05:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:40.987 15:05:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:40.987 15:05:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.987 15:05:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:40.987 ************************************ 00:11:40.987 START TEST default_locks_via_rpc 00:11:40.987 ************************************ 00:11:40.988 15:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:11:40.988 15:05:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=82622 00:11:40.988 15:05:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 82622 00:11:40.988 15:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 82622 ']' 00:11:40.988 15:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:11:40.988 15:05:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:40.988 15:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:40.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.988 15:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.988 15:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:40.988 15:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.988 [2024-07-23 15:05:36.376586] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:11:40.988 [2024-07-23 15:05:36.377374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82622 ] 00:11:41.246 [2024-07-23 15:05:36.529933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.246 [2024-07-23 15:05:36.580084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.181 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.181 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:42.181 15:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:42.181 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.181 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.181 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.181 15:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:42.181 15:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:42.182 15:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:42.182 15:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:42.182 15:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:42.182 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.182 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.182 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.182 15:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 82622 00:11:42.182 15:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 82622 00:11:42.182 15:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:42.749 15:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 82622 00:11:42.749 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 82622 ']' 00:11:42.749 
15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 82622 00:11:42.749 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:11:42.749 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:42.749 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82622 00:11:42.749 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:42.749 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:42.749 killing process with pid 82622 00:11:42.749 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82622' 00:11:42.749 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 82622 00:11:42.749 15:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 82622 00:11:43.008 00:11:43.008 real 0m2.033s 00:11:43.008 user 0m2.163s 00:11:43.008 sys 0m0.711s 00:11:43.008 15:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.008 15:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.008 ************************************ 00:11:43.008 END TEST default_locks_via_rpc 00:11:43.008 ************************************ 00:11:43.008 15:05:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:43.008 15:05:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:43.008 15:05:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:43.008 15:05:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.008 15:05:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:43.008 ************************************ 00:11:43.008 START TEST non_locking_app_on_locked_coremask 00:11:43.008 ************************************ 00:11:43.008 15:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:11:43.009 15:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=82680 00:11:43.009 15:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:43.009 15:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 82680 /var/tmp/spdk.sock 00:11:43.009 15:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 82680 ']' 00:11:43.009 15:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.009 15:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.009 15:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
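Both default_locks variants above rest on the locks_exist check: an spdk_tgt started with -m 0x1 takes a per-core lock, and lslocks run against its pid is expected to list an spdk_cpu_lock entry. The check is essentially a one-liner; the exact lock file path is not shown in the trace, only the spdk_cpu_lock string it greps for:

    # Assert that the target identified by $1 holds its CPU-core lock.
    locks_exist_sketch() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist_sketch 82575    # pid of the spdk_tgt started with -m 0x1 above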
00:11:43.009 15:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.009 15:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:43.267 [2024-07-23 15:05:38.477748] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:11:43.267 [2024-07-23 15:05:38.478032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82680 ] 00:11:43.267 [2024-07-23 15:05:38.630873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.267 [2024-07-23 15:05:38.679895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.240 15:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.240 15:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:44.240 15:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:44.240 15:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=82691 00:11:44.240 15:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 82691 /var/tmp/spdk2.sock 00:11:44.240 15:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 82691 ']' 00:11:44.240 15:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:44.240 15:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:44.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:44.240 15:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:44.240 15:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:44.240 15:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:44.240 [2024-07-23 15:05:39.436308] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:11:44.240 [2024-07-23 15:05:39.437008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82691 ] 00:11:44.240 [2024-07-23 15:05:39.584663] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
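non_locking_app_on_locked_coremask then exercises how those locks interact across two targets: the first spdk_tgt claims core 0 and its lock, and a second instance can still start on the same core mask only because it is launched with --disable-cpumask-locks and pointed at a separate RPC socket, as in the command lines above. In outline:

    # Two targets sharing core 0; only the second opts out of core locking.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                                              # holds the core 0 lock
    pid1=$!
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                           # no lock taken, separate socket
    # ...wait for both sockets, run the checks, then killprocess both pids...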
00:11:44.240 [2024-07-23 15:05:39.584736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.499 [2024-07-23 15:05:39.684576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.065 15:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.065 15:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:45.065 15:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 82680 00:11:45.065 15:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 82680 00:11:45.065 15:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:46.438 15:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 82680 00:11:46.438 15:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 82680 ']' 00:11:46.438 15:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 82680 00:11:46.438 15:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:46.438 15:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:46.438 15:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82680 00:11:46.438 15:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:46.438 15:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:46.438 killing process with pid 82680 00:11:46.438 15:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82680' 00:11:46.438 15:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 82680 00:11:46.438 15:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 82680 00:11:47.005 15:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 82691 00:11:47.005 15:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 82691 ']' 00:11:47.005 15:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 82691 00:11:47.005 15:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:47.005 15:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:47.005 15:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82691 00:11:47.005 15:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:47.005 killing process with pid 82691 00:11:47.005 15:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:47.005 15:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82691' 00:11:47.005 15:05:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 82691 00:11:47.005 15:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 82691 00:11:47.572 00:11:47.572 real 0m4.331s 00:11:47.572 user 0m4.683s 00:11:47.572 sys 0m1.423s 00:11:47.572 15:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:47.572 ************************************ 00:11:47.572 END TEST non_locking_app_on_locked_coremask 00:11:47.572 ************************************ 00:11:47.572 15:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:47.572 15:05:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:47.572 15:05:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:47.572 15:05:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:47.572 15:05:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.572 15:05:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:47.572 ************************************ 00:11:47.572 START TEST locking_app_on_unlocked_coremask 00:11:47.573 ************************************ 00:11:47.573 15:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:11:47.573 15:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=82765 00:11:47.573 15:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 82765 /var/tmp/spdk.sock 00:11:47.573 15:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 82765 ']' 00:11:47.573 15:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.573 15:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.573 15:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.573 15:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.573 15:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:47.573 15:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:47.573 [2024-07-23 15:05:42.845950] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:11:47.573 [2024-07-23 15:05:42.846124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82765 ] 00:11:47.573 [2024-07-23 15:05:42.986392] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:47.573 [2024-07-23 15:05:42.986468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.831 [2024-07-23 15:05:43.036539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.397 15:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.397 15:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:48.397 15:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=82781 00:11:48.397 15:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 82781 /var/tmp/spdk2.sock 00:11:48.397 15:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:48.397 15:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 82781 ']' 00:11:48.397 15:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:48.397 15:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:48.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:48.397 15:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:48.397 15:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:48.397 15:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:48.656 [2024-07-23 15:05:43.905073] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:11:48.656 [2024-07-23 15:05:43.905270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82781 ] 00:11:48.656 [2024-07-23 15:05:44.060477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.914 [2024-07-23 15:05:44.160909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.481 15:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:49.481 15:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:49.481 15:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 82781 00:11:49.481 15:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 82781 00:11:49.481 15:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:50.424 15:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 82765 00:11:50.424 15:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 82765 ']' 00:11:50.424 15:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 82765 00:11:50.424 15:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:50.424 15:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:50.424 15:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82765 00:11:50.424 15:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:50.424 15:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:50.424 killing process with pid 82765 00:11:50.424 15:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82765' 00:11:50.424 15:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 82765 00:11:50.424 15:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 82765 00:11:51.374 15:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 82781 00:11:51.374 15:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 82781 ']' 00:11:51.374 15:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 82781 00:11:51.374 15:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:51.374 15:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:51.374 15:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82781 00:11:51.374 killing process with pid 82781 00:11:51.374 15:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:51.374 15:05:46 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:51.374 15:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82781' 00:11:51.374 15:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 82781 00:11:51.374 15:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 82781 00:11:51.632 00:11:51.632 real 0m4.125s 00:11:51.632 user 0m4.592s 00:11:51.632 sys 0m1.238s 00:11:51.632 15:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:51.632 15:05:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:51.632 ************************************ 00:11:51.632 END TEST locking_app_on_unlocked_coremask 00:11:51.632 ************************************ 00:11:51.632 15:05:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:51.632 15:05:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:51.632 15:05:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:51.632 15:05:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.632 15:05:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:51.632 ************************************ 00:11:51.632 START TEST locking_app_on_locked_coremask 00:11:51.632 ************************************ 00:11:51.632 15:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:11:51.632 15:05:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=82850 00:11:51.632 15:05:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 82850 /var/tmp/spdk.sock 00:11:51.632 15:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 82850 ']' 00:11:51.632 15:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.632 15:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.632 15:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.632 15:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.632 15:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:51.632 15:05:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:51.632 [2024-07-23 15:05:47.050196] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:11:51.632 [2024-07-23 15:05:47.050389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82850 ] 00:11:51.891 [2024-07-23 15:05:47.203929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.891 [2024-07-23 15:05:47.260015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=82866 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 82866 /var/tmp/spdk2.sock 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 82866 /var/tmp/spdk2.sock 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 82866 /var/tmp/spdk2.sock 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 82866 ']' 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:52.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.826 15:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:52.826 [2024-07-23 15:05:48.036489] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:11:52.826 [2024-07-23 15:05:48.036645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82866 ] 00:11:52.826 [2024-07-23 15:05:48.185002] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 82850 has claimed it. 00:11:52.826 [2024-07-23 15:05:48.185083] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:53.393 ERROR: process (pid: 82866) is no longer running 00:11:53.393 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (82866) - No such process 00:11:53.393 15:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.393 15:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:11:53.393 15:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:53.393 15:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:53.393 15:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:53.393 15:05:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:53.393 15:05:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 82850 00:11:53.393 15:05:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 82850 00:11:53.393 15:05:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:53.960 15:05:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 82850 00:11:53.960 15:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 82850 ']' 00:11:53.960 15:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 82850 00:11:53.960 15:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:53.960 15:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:53.960 15:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82850 00:11:53.960 killing process with pid 82850 00:11:53.960 15:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:53.960 15:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:53.960 15:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82850' 00:11:53.960 15:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 82850 00:11:53.960 15:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 82850 00:11:54.526 00:11:54.526 real 0m2.686s 00:11:54.526 user 0m2.915s 00:11:54.526 sys 0m0.881s 00:11:54.526 15:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.526 ************************************ 00:11:54.526 
15:05:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:54.526 END TEST locking_app_on_locked_coremask 00:11:54.526 ************************************ 00:11:54.526 15:05:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:54.526 15:05:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:54.526 15:05:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:54.526 15:05:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.526 15:05:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:54.526 ************************************ 00:11:54.526 START TEST locking_overlapped_coremask 00:11:54.526 ************************************ 00:11:54.526 15:05:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:11:54.526 15:05:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=82914 00:11:54.526 15:05:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 82914 /var/tmp/spdk.sock 00:11:54.526 15:05:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:54.526 15:05:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 82914 ']' 00:11:54.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.526 15:05:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.526 15:05:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:54.527 15:05:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.527 15:05:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:54.527 15:05:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:54.527 [2024-07-23 15:05:49.794902] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:11:54.527 [2024-07-23 15:05:49.795101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82914 ] 00:11:54.784 [2024-07-23 15:05:49.954353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:54.784 [2024-07-23 15:05:50.007583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.784 [2024-07-23 15:05:50.007700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.784 [2024-07-23 15:05:50.007586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=82932 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 82932 /var/tmp/spdk2.sock 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 82932 /var/tmp/spdk2.sock 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 82932 /var/tmp/spdk2.sock 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 82932 ']' 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:55.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:55.351 15:05:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:55.609 [2024-07-23 15:05:50.818752] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:11:55.609 [2024-07-23 15:05:50.819281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82932 ] 00:11:55.609 [2024-07-23 15:05:50.978609] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 82914 has claimed it. 00:11:55.609 [2024-07-23 15:05:50.978709] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:56.186 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (82932) - No such process 00:11:56.186 ERROR: process (pid: 82932) is no longer running 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 82914 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 82914 ']' 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 82914 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82914 00:11:56.186 killing process with pid 82914 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82914' 00:11:56.186 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 82914 00:11:56.186 15:05:51 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 82914 00:11:56.789 ************************************ 00:11:56.789 END TEST locking_overlapped_coremask 00:11:56.789 ************************************ 00:11:56.789 00:11:56.789 real 0m2.244s 00:11:56.789 user 0m6.043s 00:11:56.789 sys 0m0.630s 00:11:56.789 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:56.789 15:05:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:56.789 15:05:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:56.789 15:05:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:56.789 15:05:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:56.789 15:05:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:56.789 15:05:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:56.789 ************************************ 00:11:56.789 START TEST locking_overlapped_coremask_via_rpc 00:11:56.789 ************************************ 00:11:56.789 15:05:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:11:56.789 15:05:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=82976 00:11:56.789 15:05:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 82976 /var/tmp/spdk.sock 00:11:56.789 15:05:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:56.789 15:05:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 82976 ']' 00:11:56.789 15:05:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.789 15:05:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:56.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.789 15:05:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.789 15:05:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:56.789 15:05:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.789 [2024-07-23 15:05:52.098845] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:11:56.789 [2024-07-23 15:05:52.099073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82976 ] 00:11:57.048 [2024-07-23 15:05:52.248569] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:57.048 [2024-07-23 15:05:52.248629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:57.048 [2024-07-23 15:05:52.301528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.048 [2024-07-23 15:05:52.301625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.048 [2024-07-23 15:05:52.301724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.613 15:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:57.613 15:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:57.613 15:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=82994 00:11:57.613 15:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:57.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:57.613 15:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 82994 /var/tmp/spdk2.sock 00:11:57.613 15:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 82994 ']' 00:11:57.613 15:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:57.613 15:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.613 15:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:57.613 15:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.613 15:05:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.872 [2024-07-23 15:05:53.066139] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:11:57.872 [2024-07-23 15:05:53.066287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82994 ] 00:11:57.872 [2024-07-23 15:05:53.218456] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:57.872 [2024-07-23 15:05:53.218526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:58.133 [2024-07-23 15:05:53.331165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.133 [2024-07-23 15:05:53.331256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.133 [2024-07-23 15:05:53.331346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.700 [2024-07-23 15:05:54.049076] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 82976 has claimed it. 00:11:58.700 request: 00:11:58.700 { 00:11:58.700 "method": "framework_enable_cpumask_locks", 00:11:58.700 "req_id": 1 00:11:58.700 } 00:11:58.700 Got JSON-RPC error response 00:11:58.700 response: 00:11:58.700 { 00:11:58.700 "code": -32603, 00:11:58.700 "message": "Failed to claim CPU core: 2" 00:11:58.700 } 00:11:58.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 82976 /var/tmp/spdk.sock 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 82976 ']' 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:58.700 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.958 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.958 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:58.958 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 82994 /var/tmp/spdk2.sock 00:11:58.958 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 82994 ']' 00:11:58.958 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:58.958 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:58.958 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:58.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:58.958 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:58.958 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.216 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:59.216 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:59.216 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:59.216 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:59.216 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:59.216 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:59.216 00:11:59.216 real 0m2.585s 00:11:59.216 user 0m1.293s 00:11:59.216 sys 0m0.231s 00:11:59.216 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:59.216 15:05:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.216 ************************************ 00:11:59.216 END TEST locking_overlapped_coremask_via_rpc 00:11:59.216 ************************************ 00:11:59.216 15:05:54 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:59.216 15:05:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:59.216 15:05:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 82976 ]] 00:11:59.216 15:05:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 82976 00:11:59.216 15:05:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 82976 ']' 00:11:59.216 15:05:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 82976 00:11:59.217 15:05:54 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:11:59.475 15:05:54 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:59.475 15:05:54 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82976 00:11:59.475 killing process with pid 82976 00:11:59.475 15:05:54 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:59.475 15:05:54 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:59.475 15:05:54 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82976' 00:11:59.475 15:05:54 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 82976 00:11:59.475 15:05:54 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 82976 00:11:59.733 15:05:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 82994 ]] 00:11:59.733 15:05:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 82994 00:11:59.733 15:05:55 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 82994 ']' 00:11:59.733 15:05:55 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 82994 00:11:59.733 15:05:55 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:11:59.733 15:05:55 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:59.733 15:05:55 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82994 00:11:59.733 killing process with pid 82994 00:11:59.733 15:05:55 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:11:59.733 15:05:55 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:11:59.733 15:05:55 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82994' 00:11:59.733 15:05:55 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 82994 00:11:59.733 15:05:55 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 82994 00:12:00.299 15:05:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:00.299 15:05:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:00.299 15:05:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 82976 ]] 00:12:00.299 15:05:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 82976 00:12:00.299 15:05:55 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 82976 ']' 00:12:00.299 15:05:55 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 82976 00:12:00.299 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (82976) - No such process 00:12:00.299 Process with pid 82976 is not found 00:12:00.299 15:05:55 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 82976 is not found' 00:12:00.299 15:05:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 82994 ]] 00:12:00.299 15:05:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 82994 00:12:00.299 15:05:55 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 82994 ']' 00:12:00.299 15:05:55 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 82994 00:12:00.299 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (82994) - No such process 00:12:00.299 Process with pid 82994 is not found 00:12:00.299 15:05:55 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 82994 is not found' 00:12:00.299 15:05:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:00.299 00:12:00.299 real 0m21.295s 00:12:00.299 user 0m36.240s 00:12:00.299 sys 0m6.806s 00:12:00.299 15:05:55 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:00.299 ************************************ 00:12:00.299 END TEST cpu_locks 00:12:00.300 ************************************ 00:12:00.300 15:05:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:00.300 15:05:55 event -- common/autotest_common.sh@1142 -- # return 0 00:12:00.300 ************************************ 00:12:00.300 END TEST event 00:12:00.300 ************************************ 00:12:00.300 00:12:00.300 real 0m48.444s 00:12:00.300 user 1m31.403s 00:12:00.300 sys 0m11.058s 00:12:00.300 15:05:55 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:00.300 15:05:55 event -- common/autotest_common.sh@10 -- # set +x 00:12:00.300 15:05:55 -- common/autotest_common.sh@1142 -- # return 0 00:12:00.300 15:05:55 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:00.300 15:05:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:00.300 15:05:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:00.300 15:05:55 -- common/autotest_common.sh@10 -- # set +x 00:12:00.300 ************************************ 00:12:00.300 START TEST thread 
00:12:00.300 ************************************ 00:12:00.300 15:05:55 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:00.300 * Looking for test storage... 00:12:00.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:00.558 15:05:55 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:00.558 15:05:55 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:12:00.558 15:05:55 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:00.558 15:05:55 thread -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 ************************************ 00:12:00.558 START TEST thread_poller_perf 00:12:00.558 ************************************ 00:12:00.559 15:05:55 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:00.559 [2024-07-23 15:05:55.772710] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:12:00.559 [2024-07-23 15:05:55.773497] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83118 ] 00:12:00.559 [2024-07-23 15:05:55.917037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.559 [2024-07-23 15:05:55.966692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.559 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:01.932 ====================================== 00:12:01.932 busy:2110199462 (cyc) 00:12:01.932 total_run_count: 362000 00:12:01.932 tsc_hz: 2100000000 (cyc) 00:12:01.932 ====================================== 00:12:01.932 poller_cost: 5829 (cyc), 2775 (nsec) 00:12:01.932 00:12:01.932 real 0m1.325s 00:12:01.932 user 0m1.144s 00:12:01.932 sys 0m0.079s 00:12:01.932 15:05:57 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:01.932 15:05:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:01.932 ************************************ 00:12:01.932 END TEST thread_poller_perf 00:12:01.932 ************************************ 00:12:01.932 15:05:57 thread -- common/autotest_common.sh@1142 -- # return 0 00:12:01.932 15:05:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:01.932 15:05:57 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:12:01.932 15:05:57 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.932 15:05:57 thread -- common/autotest_common.sh@10 -- # set +x 00:12:01.932 ************************************ 00:12:01.932 START TEST thread_poller_perf 00:12:01.932 ************************************ 00:12:01.932 15:05:57 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:01.932 [2024-07-23 15:05:57.167539] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:12:01.932 [2024-07-23 15:05:57.167828] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83155 ] 00:12:01.932 [2024-07-23 15:05:57.326222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.190 [2024-07-23 15:05:57.376944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.190 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:12:03.125 ====================================== 00:12:03.125 busy:2103820356 (cyc) 00:12:03.125 total_run_count: 4777000 00:12:03.125 tsc_hz: 2100000000 (cyc) 00:12:03.125 ====================================== 00:12:03.125 poller_cost: 440 (cyc), 209 (nsec) 00:12:03.125 00:12:03.125 real 0m1.346s 00:12:03.125 user 0m1.139s 00:12:03.125 sys 0m0.106s 00:12:03.125 ************************************ 00:12:03.125 END TEST thread_poller_perf 00:12:03.125 ************************************ 00:12:03.125 15:05:58 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:03.125 15:05:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:03.125 15:05:58 thread -- common/autotest_common.sh@1142 -- # return 0 00:12:03.125 15:05:58 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:12:03.125 15:05:58 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:12:03.125 15:05:58 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:03.125 15:05:58 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.125 15:05:58 thread -- common/autotest_common.sh@10 -- # set +x 00:12:03.125 ************************************ 00:12:03.125 START TEST thread_spdk_lock 00:12:03.125 ************************************ 00:12:03.125 15:05:58 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:12:03.383 [2024-07-23 15:05:58.563515] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:12:03.383 [2024-07-23 15:05:58.563661] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83186 ] 00:12:03.383 [2024-07-23 15:05:58.702762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:03.383 [2024-07-23 15:05:58.753986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.383 [2024-07-23 15:05:58.754094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.949 [2024-07-23 15:05:59.285064] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:03.949 [2024-07-23 15:05:59.285150] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:12:03.949 [2024-07-23 15:05:59.285172] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x5c5eb7b5d5c0 00:12:03.949 [2024-07-23 15:05:59.286639] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:03.949 [2024-07-23 15:05:59.286748] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:03.949 [2024-07-23 15:05:59.286785] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:03.949 Starting test contend 00:12:03.949 Worker Delay Wait us Hold us Total us 00:12:03.949 0 3 122046 193590 315636 00:12:03.949 1 5 48280 296920 345200 00:12:03.949 PASS test contend 00:12:03.949 Starting test hold_by_poller 00:12:03.949 PASS test hold_by_poller 00:12:03.949 Starting test hold_by_message 00:12:03.950 PASS test hold_by_message 00:12:03.950 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:12:03.950 100014 assertions passed 00:12:03.950 0 assertions failed 00:12:04.208 00:12:04.208 real 0m0.842s 00:12:04.208 user 0m1.179s 00:12:04.208 sys 0m0.096s 00:12:04.208 15:05:59 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:04.208 15:05:59 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:12:04.208 ************************************ 00:12:04.208 END TEST thread_spdk_lock 00:12:04.208 ************************************ 00:12:04.208 15:05:59 thread -- common/autotest_common.sh@1142 -- # return 0 00:12:04.208 00:12:04.209 real 0m3.791s 00:12:04.209 user 0m3.559s 00:12:04.209 sys 0m0.466s 00:12:04.209 15:05:59 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:04.209 15:05:59 thread -- common/autotest_common.sh@10 -- # set +x 00:12:04.209 ************************************ 00:12:04.209 END TEST thread 00:12:04.209 ************************************ 00:12:04.209 15:05:59 -- common/autotest_common.sh@1142 -- # return 0 00:12:04.209 15:05:59 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:04.209 15:05:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:12:04.209 15:05:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:04.209 15:05:59 -- common/autotest_common.sh@10 -- # set +x 00:12:04.209 ************************************ 00:12:04.209 START TEST accel 00:12:04.209 ************************************ 00:12:04.209 15:05:59 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:04.209 * Looking for test storage... 00:12:04.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:04.209 15:05:59 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:12:04.209 15:05:59 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:12:04.209 15:05:59 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:04.209 15:05:59 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=83257 00:12:04.209 15:05:59 accel -- accel/accel.sh@63 -- # waitforlisten 83257 00:12:04.209 15:05:59 accel -- common/autotest_common.sh@829 -- # '[' -z 83257 ']' 00:12:04.209 15:05:59 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.209 15:05:59 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:04.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.209 15:05:59 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.209 15:05:59 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:04.209 15:05:59 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:12:04.209 15:05:59 accel -- common/autotest_common.sh@10 -- # set +x 00:12:04.209 15:05:59 accel -- accel/accel.sh@61 -- # build_accel_config 00:12:04.209 15:05:59 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:04.209 15:05:59 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:04.209 15:05:59 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:04.209 15:05:59 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:04.209 15:05:59 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:04.209 15:05:59 accel -- accel/accel.sh@40 -- # local IFS=, 00:12:04.209 15:05:59 accel -- accel/accel.sh@41 -- # jq -r . 00:12:04.467 [2024-07-23 15:05:59.662779] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:12:04.467 [2024-07-23 15:05:59.663009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83257 ] 00:12:04.467 [2024-07-23 15:05:59.817916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.467 [2024-07-23 15:05:59.867234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@862 -- # return 0 00:12:05.402 15:06:00 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:12:05.402 15:06:00 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:12:05.402 15:06:00 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:12:05.402 15:06:00 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:12:05.402 15:06:00 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:12:05.402 15:06:00 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:12:05.402 15:06:00 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@10 -- # set +x 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.402 15:06:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # IFS== 00:12:05.402 15:06:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:05.402 15:06:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:05.402 15:06:00 accel -- accel/accel.sh@75 -- # killprocess 83257 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@948 -- # '[' -z 83257 ']' 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@952 -- # kill -0 83257 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@953 -- # uname 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83257 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:05.402 killing process with pid 83257 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83257' 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@967 -- # kill 83257 00:12:05.402 15:06:00 accel -- common/autotest_common.sh@972 -- # wait 83257 00:12:05.660 15:06:01 accel -- accel/accel.sh@76 -- # trap - ERR 00:12:05.660 15:06:01 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:12:05.660 15:06:01 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:05.660 15:06:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.660 15:06:01 accel -- common/autotest_common.sh@10 -- # set +x 00:12:05.660 15:06:01 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:12:05.660 15:06:01 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:12:05.660 15:06:01 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:12:05.660 15:06:01 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:05.660 15:06:01 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:05.660 15:06:01 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:05.660 15:06:01 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:05.660 15:06:01 accel.accel_help -- accel/accel.sh@36 -- # [[ 
-n '' ]] 00:12:05.660 15:06:01 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:12:05.660 15:06:01 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:12:05.919 15:06:01 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.919 15:06:01 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:12:05.919 15:06:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:05.919 15:06:01 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:12:05.919 15:06:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:05.919 15:06:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.919 15:06:01 accel -- common/autotest_common.sh@10 -- # set +x 00:12:05.919 ************************************ 00:12:05.919 START TEST accel_missing_filename 00:12:05.919 ************************************ 00:12:05.919 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:12:05.919 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:12:05.919 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:12:05.919 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:05.920 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.920 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:05.920 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.920 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:12:05.920 15:06:01 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:12:05.920 15:06:01 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:12:05.920 15:06:01 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:05.920 15:06:01 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:05.920 15:06:01 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:05.920 15:06:01 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:05.920 15:06:01 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:05.920 15:06:01 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:12:05.920 15:06:01 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:12:05.920 [2024-07-23 15:06:01.210991] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
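The accel_missing_filename case set up above is driven through the harness's NOT wrapper: accel_perf is launched with -w compress but no -l input file, and the test only passes if that invocation fails. A minimal sketch of that negation pattern, with a hypothetical helper name (the real wrapper is defined in common/autotest_common.sh, not shown here):

  expect_failure() {
      # Run the given command; succeed only if it exits non-zero.
      if "$@"; then
          echo "ERROR: '$*' succeeded but was expected to fail" >&2
          return 1
      fi
      return 0
  }

  # compress with no -l input file should make accel_perf exit non-zero
  expect_failure /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
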
00:12:05.920 [2024-07-23 15:06:01.211195] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83316 ] 00:12:06.178 [2024-07-23 15:06:01.365477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.178 [2024-07-23 15:06:01.415052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.178 [2024-07-23 15:06:01.462834] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:06.178 [2024-07-23 15:06:01.535736] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:12:06.437 A filename is required. 00:12:06.437 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:12:06.437 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:06.437 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:12:06.437 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:12:06.437 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:12:06.437 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:06.437 00:12:06.437 real 0m0.471s 00:12:06.437 user 0m0.206s 00:12:06.437 sys 0m0.176s 00:12:06.437 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:06.437 15:06:01 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:12:06.437 ************************************ 00:12:06.437 END TEST accel_missing_filename 00:12:06.437 ************************************ 00:12:06.437 15:06:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:06.437 15:06:01 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:06.437 15:06:01 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:12:06.437 15:06:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.437 15:06:01 accel -- common/autotest_common.sh@10 -- # set +x 00:12:06.437 ************************************ 00:12:06.437 START TEST accel_compress_verify 00:12:06.437 ************************************ 00:12:06.437 15:06:01 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:06.437 15:06:01 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:12:06.437 15:06:01 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:06.437 15:06:01 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:06.437 15:06:01 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.437 15:06:01 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:06.437 15:06:01 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.437 15:06:01 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:06.437 15:06:01 accel.accel_compress_verify -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:06.437 15:06:01 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:06.437 15:06:01 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:06.437 15:06:01 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:06.437 15:06:01 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:06.437 15:06:01 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:06.437 15:06:01 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:06.437 15:06:01 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:06.437 15:06:01 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:12:06.437 [2024-07-23 15:06:01.743538] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:12:06.437 [2024-07-23 15:06:01.743758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83336 ] 00:12:06.695 [2024-07-23 15:06:01.899360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.695 [2024-07-23 15:06:01.952582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.695 [2024-07-23 15:06:02.004403] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:06.695 [2024-07-23 15:06:02.080779] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:12:06.955 00:12:06.955 Compression does not support the verify option, aborting. 
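The abort above is the expected outcome: per the error message, the compress workload does not accept the -y verify switch, and the preceding accel_missing_filename case showed it also requires -l. A sketch of a compress invocation that satisfies both constraints, reusing the input file this test supplied (actual throughput output will vary):

  # -l names the uncompressed input file; -y is omitted because verify is
  # unsupported for compress; -t 1 runs the workload for one second.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
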
00:12:06.955 15:06:02 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:12:06.955 15:06:02 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:06.955 15:06:02 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:12:06.955 15:06:02 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:12:06.955 15:06:02 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:12:06.955 15:06:02 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:06.955 00:12:06.955 real 0m0.478s 00:12:06.955 user 0m0.232s 00:12:06.955 sys 0m0.153s 00:12:06.955 15:06:02 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:06.955 15:06:02 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:12:06.955 ************************************ 00:12:06.955 END TEST accel_compress_verify 00:12:06.955 ************************************ 00:12:06.955 15:06:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:06.955 15:06:02 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:12:06.955 15:06:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:06.955 15:06:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.955 15:06:02 accel -- common/autotest_common.sh@10 -- # set +x 00:12:06.955 ************************************ 00:12:06.955 START TEST accel_wrong_workload 00:12:06.955 ************************************ 00:12:06.955 15:06:02 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:12:06.955 15:06:02 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:12:06.955 15:06:02 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:12:06.955 15:06:02 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:06.955 15:06:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.955 15:06:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:06.955 15:06:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.955 15:06:02 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:12:06.955 15:06:02 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:12:06.955 15:06:02 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:12:06.955 15:06:02 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:06.955 15:06:02 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:06.955 15:06:02 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:06.955 15:06:02 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:06.955 15:06:02 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:06.955 15:06:02 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:12:06.955 15:06:02 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
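accel_wrong_workload, which starts here, checks the other simple failure path: an unknown -w value must be rejected by the option parser rather than silently accepted. A sketch of the same check run by hand; the rejection message it should trigger appears in the log just below:

  if /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar; then
      echo "unexpected: unknown workload was accepted" >&2
  else
      echo "unknown workload rejected, as expected"
  fi
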
00:12:06.955 Unsupported workload type: foobar 00:12:06.955 [2024-07-23 15:06:02.260248] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:12:06.955 accel_perf options: 00:12:06.955 [-h help message] 00:12:06.955 [-q queue depth per core] 00:12:06.955 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:06.955 [-T number of threads per core 00:12:06.955 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:06.955 [-t time in seconds] 00:12:06.955 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:06.955 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:12:06.955 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:06.955 [-l for compress/decompress workloads, name of uncompressed input file 00:12:06.955 [-S for crc32c workload, use this seed value (default 0) 00:12:06.955 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:06.955 [-f for fill workload, use this BYTE value (default 255) 00:12:06.955 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:06.955 [-y verify result if this switch is on] 00:12:06.955 [-a tasks to allocate per core (default: same value as -q)] 00:12:06.955 Can be used to spread operations across a wider range of memory. 00:12:06.955 15:06:02 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:12:06.955 15:06:02 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:06.955 15:06:02 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:06.955 15:06:02 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:06.955 00:12:06.955 real 0m0.060s 00:12:06.955 user 0m0.037s 00:12:06.955 sys 0m0.035s 00:12:06.955 15:06:02 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:06.955 15:06:02 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:12:06.955 ************************************ 00:12:06.956 END TEST accel_wrong_workload 00:12:06.956 ************************************ 00:12:06.956 15:06:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:06.956 15:06:02 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:12:06.956 15:06:02 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:12:06.956 15:06:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.956 15:06:02 accel -- common/autotest_common.sh@10 -- # set +x 00:12:06.956 ************************************ 00:12:06.956 START TEST accel_negative_buffers 00:12:06.956 ************************************ 00:12:06.956 15:06:02 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:12:06.956 15:06:02 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:12:06.956 15:06:02 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:12:06.956 15:06:02 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:06.956 15:06:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.956 15:06:02 
accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:06.956 15:06:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.956 15:06:02 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:12:06.956 15:06:02 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:12:06.956 15:06:02 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:12:06.956 15:06:02 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:06.956 15:06:02 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:06.956 15:06:02 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:06.956 15:06:02 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:06.956 15:06:02 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:06.956 15:06:02 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:12:06.956 15:06:02 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:12:06.956 -x option must be non-negative. 00:12:06.956 [2024-07-23 15:06:02.381404] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:12:07.215 accel_perf options: 00:12:07.215 [-h help message] 00:12:07.215 [-q queue depth per core] 00:12:07.215 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:07.215 [-T number of threads per core 00:12:07.215 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:07.215 [-t time in seconds] 00:12:07.215 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:07.215 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:12:07.215 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:07.215 [-l for compress/decompress workloads, name of uncompressed input file 00:12:07.215 [-S for crc32c workload, use this seed value (default 0) 00:12:07.215 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:07.215 [-f for fill workload, use this BYTE value (default 255) 00:12:07.215 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:07.215 [-y verify result if this switch is on] 00:12:07.215 [-a tasks to allocate per core (default: same value as -q)] 00:12:07.215 Can be used to spread operations across a wider range of memory. 
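The option listing printed above (once per failed invocation) is the full accel_perf usage text, so it doubles as a reference for the positive tests that follow. An illustrative invocation combining several of those options; the values are arbitrary examples, not ones used by this run:

  # -t run time in seconds, -q queue depth per core, -o transfer size in bytes,
  # -w workload type, -S crc32c seed value, -y verify the results.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -q 32 -o 4096 -w crc32c -S 32 -y
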
00:12:07.215 15:06:02 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:12:07.215 15:06:02 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:07.215 15:06:02 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:07.215 15:06:02 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:07.215 00:12:07.215 real 0m0.071s 00:12:07.215 user 0m0.035s 00:12:07.215 sys 0m0.047s 00:12:07.215 15:06:02 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.215 15:06:02 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:12:07.215 ************************************ 00:12:07.215 END TEST accel_negative_buffers 00:12:07.215 ************************************ 00:12:07.215 15:06:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:07.215 15:06:02 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:12:07.215 15:06:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:07.215 15:06:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.215 15:06:02 accel -- common/autotest_common.sh@10 -- # set +x 00:12:07.215 ************************************ 00:12:07.215 START TEST accel_crc32c 00:12:07.215 ************************************ 00:12:07.215 15:06:02 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:12:07.215 15:06:02 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:07.215 15:06:02 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:07.215 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.215 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.215 15:06:02 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:12:07.215 15:06:02 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:12:07.215 15:06:02 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:07.215 15:06:02 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:07.215 15:06:02 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:07.215 15:06:02 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:07.215 15:06:02 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:07.215 15:06:02 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:07.215 15:06:02 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:07.215 15:06:02 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:07.215 [2024-07-23 15:06:02.501029] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
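accel_crc32c is the first positive test; note the -c /dev/fd/62 argument in the invocation above, through which the harness hands accel_perf a JSON accel configuration over a file descriptor instead of a file on disk (this run's config is effectively empty, since accel_json_cfg=() and no module options were set). A rough sketch of that pattern using process substitution; the JSON body here is a placeholder assumption, not the exact document the harness generates:

  # <(...) expands to a /dev/fd path, which accel_perf reads as its config file.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -c <(printf '{"subsystems": []}') -t 1 -w crc32c -S 32 -y
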
00:12:07.215 [2024-07-23 15:06:02.501175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83403 ] 00:12:07.215 [2024-07-23 15:06:02.641483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.474 [2024-07-23 15:06:02.690311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:12:07.474 15:06:02 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.474 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.475 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:07.475 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.475 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.475 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.475 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:07.475 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.475 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.475 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:07.475 15:06:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:07.475 15:06:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:07.475 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:07.475 15:06:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" 
in 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:08.852 15:06:03 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:08.852 00:12:08.852 real 0m1.441s 00:12:08.852 user 0m1.225s 00:12:08.852 sys 0m0.136s 00:12:08.852 15:06:03 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:08.852 15:06:03 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:08.852 ************************************ 00:12:08.852 END TEST accel_crc32c 00:12:08.852 ************************************ 00:12:08.852 15:06:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:08.852 15:06:03 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:08.852 15:06:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:08.852 15:06:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:08.852 15:06:03 accel -- common/autotest_common.sh@10 -- # set +x 00:12:08.852 ************************************ 00:12:08.852 START TEST accel_crc32c_C2 00:12:08.852 ************************************ 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:08.852 15:06:03 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:08.852 [2024-07-23 15:06:04.010418] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
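accel_crc32c_C2 repeats the CRC workload with -C 2, which per the usage text above sets the io vector size to test, so the data is presented to the engine split across two iovecs rather than one. Sketch of the equivalent standalone invocation:

  # -C 2: io vector size of two; -y verifies the computed CRC.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2
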
00:12:08.852 [2024-07-23 15:06:04.010650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83433 ] 00:12:08.852 [2024-07-23 15:06:04.165295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.852 [2024-07-23 15:06:04.212707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.852 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:08.852 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.852 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.852 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.852 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:08.852 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.852 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.852 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.852 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:08.852 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.852 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.852 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.853 15:06:04 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:08.853 15:06:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:10.228 00:12:10.228 real 0m1.462s 00:12:10.228 user 0m1.230s 00:12:10.228 sys 0m0.153s 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:10.228 ************************************ 00:12:10.228 END TEST accel_crc32c_C2 00:12:10.228 ************************************ 00:12:10.228 15:06:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:10.228 15:06:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:10.228 15:06:05 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:10.228 15:06:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:10.228 15:06:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.228 15:06:05 accel -- common/autotest_common.sh@10 -- # set +x 00:12:10.228 ************************************ 00:12:10.228 START TEST accel_copy 00:12:10.228 ************************************ 00:12:10.228 15:06:05 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:12:10.228 15:06:05 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:10.228 15:06:05 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:12:10.228 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.228 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.228 15:06:05 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:10.228 15:06:05 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:10.228 15:06:05 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:10.229 15:06:05 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:10.229 15:06:05 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:10.229 15:06:05 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:10.229 15:06:05 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:10.229 15:06:05 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:10.229 15:06:05 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:10.229 15:06:05 
accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:12:10.229 [2024-07-23 15:06:05.515747] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:12:10.229 [2024-07-23 15:06:05.515936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83473 ] 00:12:10.229 [2024-07-23 15:06:05.651643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.488 [2024-07-23 15:06:05.698672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:10.488 15:06:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:11.864 15:06:06 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:11.864 00:12:11.864 real 0m1.437s 00:12:11.864 user 0m1.214s 00:12:11.864 sys 0m0.141s 00:12:11.864 15:06:06 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:11.864 15:06:06 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:12:11.864 ************************************ 00:12:11.864 END TEST accel_copy 00:12:11.864 ************************************ 00:12:11.864 15:06:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:11.864 15:06:06 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:11.864 15:06:06 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:11.864 15:06:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.864 15:06:06 accel -- common/autotest_common.sh@10 -- # set +x 00:12:11.864 ************************************ 00:12:11.864 START TEST accel_fill 00:12:11.864 ************************************ 00:12:11.864 15:06:06 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:11.864 15:06:06 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:12:11.864 15:06:06 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:12:11.864 15:06:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.864 15:06:06 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:11.864 15:06:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.864 15:06:06 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:11.864 15:06:06 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:12:11.864 15:06:06 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:11.864 15:06:06 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:11.864 15:06:06 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:11.864 15:06:06 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:11.864 15:06:06 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:11.864 15:06:06 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:12:11.864 15:06:06 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:12:11.864 [2024-07-23 15:06:07.015070] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
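accel_fill exercises more of the tuning flags than the earlier cases: per the usage text, -f 128 sets the byte value used to fill the buffers, -q 64 the queue depth per core, and -a 64 the number of tasks allocated per core. The same invocation, runnable standalone:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
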
00:12:11.864 [2024-07-23 15:06:07.015291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83504 ] 00:12:11.864 [2024-07-23 15:06:07.170693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.864 [2024-07-23 15:06:07.218115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- 
accel/accel.sh@22 -- # accel_module=software 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:11.865 15:06:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.240 15:06:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:13.240 15:06:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.240 15:06:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.240 15:06:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.240 15:06:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:13.240 15:06:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.240 15:06:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.240 15:06:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.240 15:06:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:13.240 15:06:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.240 15:06:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.240 15:06:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.240 15:06:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:13.241 15:06:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.241 15:06:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.241 15:06:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.241 
15:06:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:13.241 15:06:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.241 15:06:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.241 15:06:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.241 15:06:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:13.241 15:06:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:13.241 15:06:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:13.241 15:06:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:13.241 15:06:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:13.241 15:06:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:13.241 15:06:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:13.241 00:12:13.241 real 0m1.456s 00:12:13.241 user 0m1.226s 00:12:13.241 sys 0m0.150s 00:12:13.241 15:06:08 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:13.241 15:06:08 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:12:13.241 ************************************ 00:12:13.241 END TEST accel_fill 00:12:13.241 ************************************ 00:12:13.241 15:06:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:13.241 15:06:08 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:13.241 15:06:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:13.241 15:06:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.241 15:06:08 accel -- common/autotest_common.sh@10 -- # set +x 00:12:13.241 ************************************ 00:12:13.241 START TEST accel_copy_crc32c 00:12:13.241 ************************************ 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:13.241 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:13.241 [2024-07-23 15:06:08.528441] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:12:13.241 [2024-07-23 15:06:08.528603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83540 ] 00:12:13.241 [2024-07-23 15:06:08.668022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.499 [2024-07-23 15:06:08.714896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 
15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:13.500 15:06:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val= 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:14.875 00:12:14.875 real 0m1.440s 00:12:14.875 user 0m1.214s 00:12:14.875 sys 0m0.143s 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:14.875 15:06:09 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:14.875 ************************************ 00:12:14.875 END TEST accel_copy_crc32c 00:12:14.875 ************************************ 00:12:14.875 15:06:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:14.875 15:06:09 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:14.875 15:06:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:14.875 15:06:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.875 15:06:09 accel -- common/autotest_common.sh@10 -- # set +x 00:12:14.875 ************************************ 00:12:14.875 START TEST accel_copy_crc32c_C2 00:12:14.876 ************************************ 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:14.876 15:06:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:14.876 [2024-07-23 15:06:10.016256] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:12:14.876 [2024-07-23 15:06:10.016423] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83575 ] 00:12:14.876 [2024-07-23 15:06:10.155553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.876 [2024-07-23 15:06:10.205961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.876 15:06:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:16.290 00:12:16.290 real 0m1.441s 00:12:16.290 user 0m1.212s 00:12:16.290 sys 0m0.148s 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.290 15:06:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:16.290 ************************************ 00:12:16.290 END TEST accel_copy_crc32c_C2 00:12:16.290 
************************************ 00:12:16.290 15:06:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:16.290 15:06:11 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:12:16.290 15:06:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:16.290 15:06:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.290 15:06:11 accel -- common/autotest_common.sh@10 -- # set +x 00:12:16.290 ************************************ 00:12:16.290 START TEST accel_dualcast 00:12:16.290 ************************************ 00:12:16.290 15:06:11 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:12:16.290 15:06:11 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:12:16.290 15:06:11 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:12:16.290 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.290 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.290 15:06:11 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:16.290 15:06:11 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:16.290 15:06:11 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:12:16.290 15:06:11 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:16.290 15:06:11 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:16.290 15:06:11 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:16.290 15:06:11 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:16.290 15:06:11 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:16.290 15:06:11 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:12:16.291 15:06:11 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:12:16.291 [2024-07-23 15:06:11.519633] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:12:16.291 [2024-07-23 15:06:11.519870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83611 ] 00:12:16.291 [2024-07-23 15:06:11.673770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.549 [2024-07-23 15:06:11.721214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.549 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.550 15:06:11 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.550 15:06:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:17.926 15:06:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:12:17.927 15:06:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:17.927 15:06:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:17.927 15:06:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:17.927 15:06:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:17.927 15:06:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:17.927 00:12:17.927 real 0m1.467s 00:12:17.927 user 0m1.234s 00:12:17.927 sys 0m0.151s 00:12:17.927 15:06:12 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:17.927 15:06:12 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:12:17.927 ************************************ 00:12:17.927 END TEST accel_dualcast 00:12:17.927 ************************************ 00:12:17.927 15:06:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:17.927 15:06:12 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:17.927 15:06:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:17.927 15:06:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.927 15:06:12 accel -- common/autotest_common.sh@10 -- # set +x 00:12:17.927 ************************************ 00:12:17.927 START TEST accel_compare 00:12:17.927 ************************************ 00:12:17.927 15:06:13 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:12:17.927 [2024-07-23 15:06:13.032510] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:12:17.927 [2024-07-23 15:06:13.032643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83641 ] 00:12:17.927 [2024-07-23 15:06:13.173489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.927 [2024-07-23 15:06:13.224628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.927 15:06:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.300 15:06:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:12:19.301 15:06:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.301 15:06:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.301 15:06:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.301 15:06:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:19.301 15:06:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.301 15:06:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.301 15:06:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.301 15:06:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:19.301 15:06:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:19.301 15:06:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:19.301 00:12:19.301 real 0m1.448s 00:12:19.301 user 0m1.231s 00:12:19.301 sys 0m0.130s 00:12:19.301 ************************************ 00:12:19.301 END TEST accel_compare 00:12:19.301 ************************************ 00:12:19.301 15:06:14 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.301 15:06:14 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:12:19.301 15:06:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:19.301 15:06:14 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:19.301 15:06:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:19.301 15:06:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.301 15:06:14 accel -- common/autotest_common.sh@10 -- # set +x 00:12:19.301 ************************************ 00:12:19.301 START TEST accel_xor 00:12:19.301 ************************************ 00:12:19.301 15:06:14 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:12:19.301 15:06:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:19.301 15:06:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:19.301 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.301 15:06:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:19.301 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.301 15:06:14 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:19.301 15:06:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:19.301 15:06:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:19.301 15:06:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:19.301 15:06:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:19.301 15:06:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:19.301 15:06:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:19.301 15:06:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:19.301 15:06:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:19.301 [2024-07-23 15:06:14.543415] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:12:19.301 [2024-07-23 15:06:14.543638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83682 ] 00:12:19.301 [2024-07-23 15:06:14.689188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.560 [2024-07-23 15:06:14.737518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.560 15:06:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:20.936 15:06:15 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:20.936 15:06:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:20.936 00:12:20.936 real 0m1.462s 00:12:20.936 user 0m0.020s 00:12:20.936 sys 0m0.002s 00:12:20.936 15:06:15 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:20.936 15:06:15 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:20.936 ************************************ 00:12:20.936 END TEST accel_xor 00:12:20.936 ************************************ 00:12:20.936 15:06:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:20.936 15:06:16 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:20.936 15:06:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:20.936 15:06:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.936 15:06:16 accel -- common/autotest_common.sh@10 -- # set +x 00:12:20.936 ************************************ 00:12:20.936 START TEST accel_xor 00:12:20.936 ************************************ 00:12:20.936 15:06:16 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:20.936 [2024-07-23 15:06:16.068233] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:12:20.936 [2024-07-23 15:06:16.068444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83712 ] 00:12:20.936 [2024-07-23 15:06:16.219768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.936 [2024-07-23 15:06:16.264639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.936 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:20.937 15:06:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:22.311 15:06:17 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:22.311 15:06:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:22.311 00:12:22.311 real 0m1.462s 00:12:22.311 user 0m1.223s 00:12:22.311 sys 0m0.156s 00:12:22.311 15:06:17 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:22.311 15:06:17 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:22.311 ************************************ 00:12:22.311 END TEST accel_xor 00:12:22.311 ************************************ 00:12:22.311 15:06:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:22.311 15:06:17 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:22.311 15:06:17 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:22.311 15:06:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.311 15:06:17 accel -- common/autotest_common.sh@10 -- # set +x 00:12:22.311 ************************************ 00:12:22.311 START TEST accel_dif_verify 00:12:22.311 ************************************ 00:12:22.311 15:06:17 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:12:22.311 15:06:17 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:12:22.311 15:06:17 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:12:22.311 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.311 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.311 15:06:17 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:22.311 15:06:17 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:22.311 15:06:17 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:22.312 15:06:17 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:22.312 15:06:17 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:22.312 15:06:17 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:22.312 15:06:17 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:22.312 15:06:17 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:22.312 15:06:17 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:22.312 15:06:17 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:12:22.312 [2024-07-23 15:06:17.586603] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:12:22.312 [2024-07-23 15:06:17.586886] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83751 ] 00:12:22.312 [2024-07-23 15:06:17.738706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.570 [2024-07-23 15:06:17.784081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.570 15:06:17 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.570 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.571 15:06:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:23.968 15:06:18 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:23.968 15:06:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:23.968 15:06:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:23.968 15:06:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:12:23.968 15:06:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:23.968 00:12:23.968 real 0m1.452s 00:12:23.968 user 0m1.229s 00:12:23.968 sys 0m0.145s 00:12:23.968 15:06:19 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:23.968 15:06:19 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:12:23.968 ************************************ 00:12:23.968 END TEST accel_dif_verify 00:12:23.968 ************************************ 00:12:23.968 15:06:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:23.968 15:06:19 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:23.968 15:06:19 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:23.968 15:06:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.968 15:06:19 accel -- common/autotest_common.sh@10 -- # set +x 00:12:23.968 ************************************ 00:12:23.968 START TEST accel_dif_generate 00:12:23.968 ************************************ 00:12:23.968 15:06:19 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.968 15:06:19 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:12:23.968 [2024-07-23 15:06:19.099502] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:12:23.968 [2024-07-23 15:06:19.099703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83796 ] 00:12:23.968 [2024-07-23 15:06:19.248205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.968 [2024-07-23 15:06:19.294545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:12:23.968 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.969 15:06:19 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:12:23.969 15:06:19 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.969 15:06:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:25.351 15:06:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:12:25.351 ************************************ 00:12:25.351 END TEST accel_dif_generate 00:12:25.351 ************************************ 00:12:25.351 
15:06:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:25.351 00:12:25.351 real 0m1.455s 00:12:25.351 user 0m1.233s 00:12:25.351 sys 0m0.141s 00:12:25.351 15:06:20 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:25.351 15:06:20 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:12:25.351 15:06:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:25.351 15:06:20 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:25.351 15:06:20 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:25.351 15:06:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.351 15:06:20 accel -- common/autotest_common.sh@10 -- # set +x 00:12:25.351 ************************************ 00:12:25.351 START TEST accel_dif_generate_copy 00:12:25.351 ************************************ 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:25.351 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:12:25.351 [2024-07-23 15:06:20.614309] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:12:25.351 [2024-07-23 15:06:20.614479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83832 ] 00:12:25.351 [2024-07-23 15:06:20.766071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.610 [2024-07-23 15:06:20.812401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.610 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:25.611 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.611 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.611 15:06:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:26.988 00:12:26.988 real 0m1.459s 00:12:26.988 user 0m0.019s 00:12:26.988 sys 0m0.001s 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:26.988 ************************************ 00:12:26.988 END TEST accel_dif_generate_copy 00:12:26.988 ************************************ 00:12:26.988 15:06:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:12:26.988 15:06:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:26.988 15:06:22 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:12:26.988 15:06:22 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:26.988 15:06:22 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:12:26.988 15:06:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.988 15:06:22 accel -- common/autotest_common.sh@10 -- # set +x 00:12:26.988 ************************************ 00:12:26.988 START TEST accel_comp 00:12:26.988 ************************************ 00:12:26.988 15:06:22 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:12:26.988 15:06:22 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:12:26.988 [2024-07-23 15:06:22.132495] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:12:26.988 [2024-07-23 15:06:22.132687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83867 ] 00:12:26.988 [2024-07-23 15:06:22.284007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.988 [2024-07-23 15:06:22.329460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:12:26.988 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.989 15:06:22 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.989 15:06:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:12:28.366 15:06:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:28.366 00:12:28.366 real 0m1.461s 00:12:28.366 user 0m1.220s 00:12:28.366 sys 0m0.168s 00:12:28.366 15:06:23 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:28.366 15:06:23 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:12:28.366 ************************************ 00:12:28.366 END TEST accel_comp 00:12:28.366 ************************************ 00:12:28.366 15:06:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:28.366 15:06:23 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:28.366 15:06:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:28.366 15:06:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:28.366 15:06:23 accel -- common/autotest_common.sh@10 -- # set +x 00:12:28.366 ************************************ 00:12:28.366 START TEST accel_decomp 00:12:28.366 ************************************ 00:12:28.366 15:06:23 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:28.366 15:06:23 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:12:28.366 15:06:23 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:12:28.366 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.366 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.366 15:06:23 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:28.366 15:06:23 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:28.366 15:06:23 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:12:28.366 15:06:23 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:28.366 15:06:23 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:28.366 15:06:23 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:28.366 15:06:23 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:28.366 15:06:23 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:28.366 15:06:23 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:12:28.366 15:06:23 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:12:28.366 [2024-07-23 15:06:23.651058] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:12:28.366 [2024-07-23 15:06:23.651253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83903 ] 00:12:28.625 [2024-07-23 15:06:23.804920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.625 [2024-07-23 15:06:23.851109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
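The long runs of "IFS=:", "read -r var val", and "case \"$var\"" entries above are the harness echoing back the accel_perf settings for this case (decompress, '4096 bytes', the software module, the test/accel/bib input, and so on). They come from a colon-separated key/value reader; a minimal sketch of that pattern follows. It is only an illustration under assumed names (the settings file, accel_opc, accel_module), not the actual accel/accel.sh source.
    # Minimal sketch of a "key:value"-per-line reader like the one traced above.
    # The file name and variable names are illustrative assumptions, not the real script.
    while IFS=: read -r var val; do
      case "$var" in
        opc)    accel_opc=$val ;;     # e.g. decompress
        module) accel_module=$val ;;  # e.g. software
        *)      : ;;                  # ignore unknown keys
      esac
    done < accel_settings.conf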
00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.625 15:06:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:30.004 15:06:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:30.004 00:12:30.004 real 0m1.475s 00:12:30.004 user 0m1.224s 00:12:30.004 sys 0m0.185s 00:12:30.004 15:06:25 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:30.004 15:06:25 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:12:30.004 ************************************ 00:12:30.004 END TEST accel_decomp 00:12:30.004 ************************************ 00:12:30.004 15:06:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:30.004 15:06:25 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:30.004 15:06:25 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:30.004 15:06:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.004 15:06:25 accel -- common/autotest_common.sh@10 -- # set +x 00:12:30.004 ************************************ 00:12:30.004 START TEST accel_decomp_full 00:12:30.004 ************************************ 00:12:30.004 15:06:25 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:30.004 15:06:25 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:12:30.004 15:06:25 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:12:30.004 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.004 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.004 15:06:25 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:30.004 15:06:25 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:30.004 15:06:25 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:12:30.004 15:06:25 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:30.004 15:06:25 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:30.004 15:06:25 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:30.004 15:06:25 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:30.004 15:06:25 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:30.004 15:06:25 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:12:30.004 15:06:25 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:12:30.004 [2024-07-23 15:06:25.182609] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
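For reference, the accel_decomp_full case kicked off above reduces to the accel_perf command recorded in the trace. A standalone reproduction would look roughly like the sketch below; it assumes the same repo layout as the log and drops the harness-supplied "-c /dev/fd/62" JSON config on the assumption that the software-module defaults are acceptable.
    # Sketch: re-running the logged accel_decomp_full workload by hand (assumptions noted above).
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0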
00:12:30.004 [2024-07-23 15:06:25.182854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83933 ] 00:12:30.004 [2024-07-23 15:06:25.335417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.004 [2024-07-23 15:06:25.402904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.263 15:06:25 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.263 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.264 15:06:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:31.640 15:06:26 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:31.640 15:06:26 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:31.640 00:12:31.640 real 0m1.503s 00:12:31.640 user 0m1.268s 00:12:31.640 sys 0m0.163s 00:12:31.640 15:06:26 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:31.640 15:06:26 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:12:31.640 ************************************ 00:12:31.640 END TEST accel_decomp_full 00:12:31.640 ************************************ 00:12:31.640 15:06:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:31.640 15:06:26 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:31.640 15:06:26 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:31.640 15:06:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.640 15:06:26 accel -- common/autotest_common.sh@10 -- # set +x 00:12:31.640 ************************************ 00:12:31.640 START TEST accel_decomp_mcore 00:12:31.640 ************************************ 00:12:31.640 15:06:26 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:31.640 15:06:26 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:12:31.640 15:06:26 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:12:31.640 15:06:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.640 15:06:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.640 15:06:26 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:31.640 15:06:26 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:31.640 15:06:26 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:12:31.640 15:06:26 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:31.641 15:06:26 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:31.641 15:06:26 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:31.641 15:06:26 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:31.641 15:06:26 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:31.641 15:06:26 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:12:31.641 15:06:26 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:12:31.641 [2024-07-23 15:06:26.747600] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:12:31.641 [2024-07-23 15:06:26.747780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83974 ] 00:12:31.641 [2024-07-23 15:06:26.901155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.641 [2024-07-23 15:06:26.961835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.641 [2024-07-23 15:06:26.961958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.641 [2024-07-23 15:06:26.962010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.641 [2024-07-23 15:06:26.962006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.641 15:06:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.015 15:06:28 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:33.015 00:12:33.015 real 0m1.505s 00:12:33.015 user 0m0.009s 00:12:33.015 sys 0m0.007s 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:33.015 15:06:28 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:12:33.015 ************************************ 00:12:33.015 END TEST accel_decomp_mcore 00:12:33.015 ************************************ 00:12:33.015 15:06:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:33.015 15:06:28 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:33.015 15:06:28 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:33.015 15:06:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.015 15:06:28 accel -- common/autotest_common.sh@10 -- # set +x 00:12:33.015 ************************************ 00:12:33.015 START TEST accel_decomp_full_mcore 00:12:33.015 ************************************ 00:12:33.015 15:06:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:33.015 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:12:33.015 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:12:33.015 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.015 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.015 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:33.015 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:33.015 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:12:33.015 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:33.015 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:33.015 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:33.015 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:33.015 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:33.015 15:06:28 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:12:33.015 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:12:33.015 [2024-07-23 15:06:28.302089] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:12:33.015 [2024-07-23 15:06:28.302221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84007 ] 00:12:33.015 [2024-07-23 15:06:28.439642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.274 [2024-07-23 15:06:28.487226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.274 [2024-07-23 15:06:28.487403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.274 [2024-07-23 15:06:28.487440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.274 [2024-07-23 15:06:28.487565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:33.274 15:06:28 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.274 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.275 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.275 15:06:28 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:33.275 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.275 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.275 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.275 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:33.275 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.275 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.275 15:06:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.649 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.649 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.650 15:06:29 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.650 ************************************ 00:12:34.650 END TEST accel_decomp_full_mcore 00:12:34.650 ************************************ 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:34.650 00:12:34.650 real 0m1.468s 00:12:34.650 user 0m0.015s 00:12:34.650 sys 0m0.004s 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:34.650 15:06:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:12:34.650 15:06:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:34.650 15:06:29 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:34.650 15:06:29 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:34.650 15:06:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:34.650 15:06:29 accel -- common/autotest_common.sh@10 -- # set +x 00:12:34.650 ************************************ 00:12:34.650 START TEST accel_decomp_mthread 00:12:34.650 ************************************ 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:12:34.650 15:06:29 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:12:34.650 [2024-07-23 15:06:29.831463] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
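The accel_decomp_mthread case that begins here is the same decompress workload with an extra "-T 2" argument (by the test's name and the "val=2" setting further along, presumably two worker threads; that reading is an inference, not something the log states). A direct invocation would differ from the previous sketch only in that flag:
    # Sketch: the logged accel_decomp_mthread variant, same assumptions as the previous sketch.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2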
00:12:34.650 [2024-07-23 15:06:29.831642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84050 ] 00:12:34.650 [2024-07-23 15:06:29.973850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.650 [2024-07-23 15:06:30.021531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.650 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:12:34.908 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:34.909 15:06:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:35.844 00:12:35.844 real 0m1.458s 00:12:35.844 user 0m0.018s 00:12:35.844 sys 0m0.004s 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.844 ************************************ 00:12:35.844 15:06:31 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:12:35.844 END TEST accel_decomp_mthread 00:12:35.844 ************************************ 00:12:36.102 15:06:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:36.102 15:06:31 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:36.102 15:06:31 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:36.102 15:06:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.102 15:06:31 accel -- common/autotest_common.sh@10 -- # set +x 00:12:36.102 ************************************ 00:12:36.102 START 
TEST accel_decomp_full_mthread 00:12:36.102 ************************************ 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:12:36.102 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:12:36.102 [2024-07-23 15:06:31.354912] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:12:36.102 [2024-07-23 15:06:31.355147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84081 ] 00:12:36.102 [2024-07-23 15:06:31.509291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.361 [2024-07-23 15:06:31.554231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:12:36.361 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.362 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.362 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.362 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:12:36.362 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.362 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.362 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.362 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:36.362 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.362 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.362 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:36.362 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:36.362 15:06:31 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:36.362 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:36.362 15:06:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:37.740 00:12:37.740 real 0m1.493s 00:12:37.740 user 0m0.014s 00:12:37.740 sys 0m0.004s 00:12:37.740 ************************************ 00:12:37.740 END TEST accel_decomp_full_mthread 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.740 15:06:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:12:37.740 ************************************ 
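The full_mthread case that just finished drives accel_perf with the flags visible in the accel.sh@12 trace above: -w decompress and -t 1 select the workload and a one-second run ('1 seconds' in the var/val dump), -l points at the pre-compressed test/accel/bib input, -T 2 is what makes this the multithreaded ("mthread") variant, -o 0 is the setting the harness pairs with the '111250 bytes' value logged above, and -y requests verification. A minimal sketch of reproducing that run by hand, assuming the repo layout from this log and that the default software accel module is acceptable (the -c /dev/fd/62 JSON config produced by build_accel_config is omitted here; flag interpretations are inferred from the trace, not from accel_perf's help text):

# hedged sketch -- paths and flags taken from the trace above
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/accel_perf" \
  -t 1 -w decompress \
  -l "$SPDK/test/accel/bib" \
  -y -o 0 -T 2

Dropping -T 2 (or the -o 0 / block-size pairing) reproduces the single-threaded and fixed-block variants exercised earlier in this run.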
00:12:37.740 15:06:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:37.740 15:06:32 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:12:37.740 15:06:32 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:37.740 15:06:32 accel -- accel/accel.sh@137 -- # build_accel_config 00:12:37.740 15:06:32 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:37.740 15:06:32 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:37.740 15:06:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.740 15:06:32 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:37.740 15:06:32 accel -- common/autotest_common.sh@10 -- # set +x 00:12:37.740 15:06:32 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:37.740 15:06:32 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:37.740 15:06:32 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:37.740 15:06:32 accel -- accel/accel.sh@40 -- # local IFS=, 00:12:37.740 15:06:32 accel -- accel/accel.sh@41 -- # jq -r . 00:12:37.740 ************************************ 00:12:37.740 START TEST accel_dif_functional_tests 00:12:37.740 ************************************ 00:12:37.740 15:06:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:37.740 [2024-07-23 15:06:32.948601] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:12:37.740 [2024-07-23 15:06:32.948779] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84118 ] 00:12:37.740 [2024-07-23 15:06:33.100061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:37.740 [2024-07-23 15:06:33.150984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.740 [2024-07-23 15:06:33.151008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.740 [2024-07-23 15:06:33.151107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.000 00:12:38.000 00:12:38.000 CUnit - A unit testing framework for C - Version 2.1-3 00:12:38.000 http://cunit.sourceforge.net/ 00:12:38.000 00:12:38.000 00:12:38.000 Suite: accel_dif 00:12:38.000 Test: verify: DIF generated, GUARD check ...passed 00:12:38.000 Test: verify: DIF generated, APPTAG check ...passed 00:12:38.000 Test: verify: DIF generated, REFTAG check ...passed 00:12:38.000 Test: verify: DIF not generated, GUARD check ...[2024-07-23 15:06:33.224141] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:38.000 passed 00:12:38.000 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 15:06:33.224381] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:38.000 passed 00:12:38.000 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 15:06:33.224667] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:38.000 passed 00:12:38.000 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:38.000 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 15:06:33.224880] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:38.000 passed 00:12:38.000 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:12:38.000 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:38.000 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:38.000 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 15:06:33.225225] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:38.000 passed 00:12:38.000 Test: verify copy: DIF generated, GUARD check ...passed 00:12:38.000 Test: verify copy: DIF generated, APPTAG check ...passed 00:12:38.000 Test: verify copy: DIF generated, REFTAG check ...passed 00:12:38.000 Test: verify copy: DIF not generated, GUARD check ...passed 00:12:38.000 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-23 15:06:33.225586] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:38.000 [2024-07-23 15:06:33.225773] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:38.000 passed 00:12:38.000 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-23 15:06:33.225971] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:38.000 passed 00:12:38.000 Test: generate copy: DIF generated, GUARD check ...passed 00:12:38.000 Test: generate copy: DIF generated, APTTAG check ...passed 00:12:38.000 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:38.000 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:12:38.000 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:12:38.000 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:12:38.000 Test: generate copy: iovecs-len validate ...[2024-07-23 15:06:33.226501] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:12:38.000 passed 00:12:38.000 Test: generate copy: buffer alignment validate ...passed 00:12:38.000 00:12:38.000 Run Summary: Type Total Ran Passed Failed Inactive 00:12:38.000 suites 1 1 n/a 0 0 00:12:38.000 tests 26 26 26 0 0 00:12:38.000 asserts 115 115 115 0 n/a 00:12:38.000 00:12:38.000 Elapsed time = 0.007 seconds 00:12:38.258 00:12:38.258 real 0m0.592s 00:12:38.258 user 0m0.616s 00:12:38.258 sys 0m0.234s 00:12:38.258 15:06:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:38.258 15:06:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:12:38.258 ************************************ 00:12:38.258 END TEST accel_dif_functional_tests 00:12:38.258 ************************************ 00:12:38.258 15:06:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:38.258 00:12:38.258 real 0m34.026s 00:12:38.258 user 0m34.474s 00:12:38.258 sys 0m5.337s 00:12:38.258 15:06:33 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:38.258 15:06:33 accel -- common/autotest_common.sh@10 -- # set +x 00:12:38.258 ************************************ 00:12:38.258 END TEST accel 00:12:38.258 ************************************ 00:12:38.258 15:06:33 -- common/autotest_common.sh@1142 -- # return 0 00:12:38.258 15:06:33 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:38.258 15:06:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:38.258 15:06:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.258 15:06:33 -- common/autotest_common.sh@10 -- # set +x 00:12:38.258 ************************************ 00:12:38.258 START TEST accel_rpc 00:12:38.258 ************************************ 00:12:38.258 15:06:33 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:38.258 * Looking for test storage... 00:12:38.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:38.258 15:06:33 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:38.258 15:06:33 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=84183 00:12:38.258 15:06:33 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:38.258 15:06:33 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 84183 00:12:38.258 15:06:33 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 84183 ']' 00:12:38.258 15:06:33 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.258 15:06:33 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.258 15:06:33 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.258 15:06:33 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.258 15:06:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.516 [2024-07-23 15:06:33.723534] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:12:38.516 [2024-07-23 15:06:33.723703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84183 ] 00:12:38.516 [2024-07-23 15:06:33.864142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.516 [2024-07-23 15:06:33.908779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.474 15:06:34 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.474 15:06:34 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:39.474 15:06:34 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:39.474 15:06:34 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:39.474 15:06:34 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:39.474 15:06:34 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:39.474 15:06:34 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:39.474 15:06:34 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:39.474 15:06:34 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.474 15:06:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.474 ************************************ 00:12:39.474 START TEST accel_assign_opcode 00:12:39.474 ************************************ 00:12:39.474 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:12:39.474 15:06:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:39.474 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.474 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:39.474 [2024-07-23 15:06:34.677536] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:39.474 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.474 15:06:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:39.474 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.474 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:39.474 [2024-07-23 15:06:34.685481] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:39.474 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.474 15:06:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:39.475 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.475 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:39.475 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.475 15:06:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:39.475 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.475 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:39.475 15:06:34 accel_rpc.accel_assign_opcode 
-- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:39.475 15:06:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:12:39.475 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.475 software 00:12:39.475 00:12:39.475 real 0m0.215s 00:12:39.475 user 0m0.015s 00:12:39.475 sys 0m0.009s 00:12:39.475 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.475 15:06:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:39.475 ************************************ 00:12:39.475 END TEST accel_assign_opcode 00:12:39.475 ************************************ 00:12:39.732 15:06:34 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:12:39.732 15:06:34 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 84183 00:12:39.732 15:06:34 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 84183 ']' 00:12:39.732 15:06:34 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 84183 00:12:39.732 15:06:34 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:12:39.732 15:06:34 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:39.732 15:06:34 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84183 00:12:39.732 15:06:34 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:39.732 killing process with pid 84183 00:12:39.732 15:06:34 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:39.732 15:06:34 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84183' 00:12:39.732 15:06:34 accel_rpc -- common/autotest_common.sh@967 -- # kill 84183 00:12:39.732 15:06:34 accel_rpc -- common/autotest_common.sh@972 -- # wait 84183 00:12:39.991 00:12:39.991 real 0m1.778s 00:12:39.991 user 0m1.759s 00:12:39.991 sys 0m0.517s 00:12:39.991 15:06:35 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.991 15:06:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.991 ************************************ 00:12:39.991 END TEST accel_rpc 00:12:39.991 ************************************ 00:12:39.991 15:06:35 -- common/autotest_common.sh@1142 -- # return 0 00:12:39.991 15:06:35 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:39.991 15:06:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:39.991 15:06:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.991 15:06:35 -- common/autotest_common.sh@10 -- # set +x 00:12:39.991 ************************************ 00:12:39.991 START TEST app_cmdline 00:12:39.991 ************************************ 00:12:39.991 15:06:35 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:40.250 * Looking for test storage... 
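For reference, the accel_assign_opcode suite that just finished boils down to a short RPC sequence against an spdk_tgt started with --wait-for-rpc: assign the copy opcode to a bogus module ("incorrect"), reassign it to "software", issue framework_start_init, and confirm the assignment through accel_get_opc_assignments. A minimal sketch of that sequence using scripts/rpc.py directly, assuming the default /var/tmp/spdk.sock socket; the method names and flags below are exactly the ones traced above:

# hedged sketch of the RPC sequence from accel_rpc.sh
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC accel_assign_opc -o copy -m software    # must be done before framework init
$RPC framework_start_init
$RPC accel_get_opc_assignments | jq -r .copy   # expected to print "software"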
00:12:40.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:40.250 15:06:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:40.250 15:06:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=84277 00:12:40.250 15:06:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 84277 00:12:40.250 15:06:35 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 84277 ']' 00:12:40.250 15:06:35 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.250 15:06:35 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:40.250 15:06:35 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:40.250 15:06:35 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.250 15:06:35 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:40.250 15:06:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:40.250 [2024-07-23 15:06:35.559473] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:12:40.250 [2024-07-23 15:06:35.559616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84277 ] 00:12:40.508 [2024-07-23 15:06:35.706206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.508 [2024-07-23 15:06:35.762773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:12:41.441 15:06:36 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:41.441 { 00:12:41.441 "version": "SPDK v24.09-pre git sha1 b8378f94e", 00:12:41.441 "fields": { 00:12:41.441 "major": 24, 00:12:41.441 "minor": 9, 00:12:41.441 "patch": 0, 00:12:41.441 "suffix": "-pre", 00:12:41.441 "commit": "b8378f94e" 00:12:41.441 } 00:12:41.441 } 00:12:41.441 15:06:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:41.441 15:06:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:41.441 15:06:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:41.441 15:06:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:41.441 15:06:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:41.441 15:06:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:41.441 15:06:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.441 15:06:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:41.441 15:06:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:41.441 15:06:36 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:41.441 15:06:36 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:41.699 request: 00:12:41.699 { 00:12:41.699 "method": "env_dpdk_get_mem_stats", 00:12:41.699 "req_id": 1 00:12:41.699 } 00:12:41.699 Got JSON-RPC error response 00:12:41.699 response: 00:12:41.699 { 00:12:41.699 "code": -32601, 00:12:41.699 "message": "Method not found" 00:12:41.699 } 00:12:41.699 15:06:36 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:12:41.699 15:06:36 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:41.699 15:06:36 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:41.699 15:06:37 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:41.699 15:06:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 84277 00:12:41.699 15:06:37 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 84277 ']' 00:12:41.699 15:06:37 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 84277 00:12:41.699 15:06:37 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:12:41.699 15:06:37 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:41.699 15:06:37 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84277 00:12:41.699 15:06:37 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:41.699 15:06:37 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:41.699 killing process with pid 84277 00:12:41.699 15:06:37 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84277' 00:12:41.699 15:06:37 app_cmdline -- common/autotest_common.sh@967 -- # kill 84277 00:12:41.699 15:06:37 app_cmdline -- common/autotest_common.sh@972 -- # wait 84277 00:12:42.264 00:12:42.264 real 0m2.031s 00:12:42.265 user 0m2.356s 00:12:42.265 sys 0m0.607s 00:12:42.265 15:06:37 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:42.265 15:06:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:42.265 ************************************ 00:12:42.265 END TEST app_cmdline 00:12:42.265 ************************************ 00:12:42.265 15:06:37 -- common/autotest_common.sh@1142 -- # return 0 00:12:42.265 15:06:37 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:42.265 15:06:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:42.265 15:06:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:42.265 15:06:37 -- common/autotest_common.sh@10 -- # set +x 00:12:42.265 ************************************ 00:12:42.265 START TEST version 00:12:42.265 ************************************ 00:12:42.265 15:06:37 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:42.265 * Looking for test storage... 00:12:42.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:42.265 15:06:37 version -- app/version.sh@17 -- # get_header_version major 00:12:42.265 15:06:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:42.265 15:06:37 version -- app/version.sh@14 -- # tr -d '"' 00:12:42.265 15:06:37 version -- app/version.sh@14 -- # cut -f2 00:12:42.265 15:06:37 version -- app/version.sh@17 -- # major=24 00:12:42.265 15:06:37 version -- app/version.sh@18 -- # get_header_version minor 00:12:42.265 15:06:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:42.265 15:06:37 version -- app/version.sh@14 -- # cut -f2 00:12:42.265 15:06:37 version -- app/version.sh@14 -- # tr -d '"' 00:12:42.265 15:06:37 version -- app/version.sh@18 -- # minor=9 00:12:42.265 15:06:37 version -- app/version.sh@19 -- # get_header_version patch 00:12:42.265 15:06:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:42.265 15:06:37 version -- app/version.sh@14 -- # tr -d '"' 00:12:42.265 15:06:37 version -- app/version.sh@14 -- # cut -f2 00:12:42.265 15:06:37 version -- app/version.sh@19 -- # patch=0 00:12:42.265 15:06:37 version -- app/version.sh@20 -- # get_header_version suffix 00:12:42.265 15:06:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:42.265 15:06:37 version -- app/version.sh@14 -- # cut -f2 00:12:42.265 15:06:37 version -- app/version.sh@14 -- # tr -d '"' 00:12:42.265 15:06:37 version -- app/version.sh@20 -- # suffix=-pre 00:12:42.265 15:06:37 version -- app/version.sh@22 -- # version=24.9 00:12:42.265 15:06:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:42.265 15:06:37 version -- app/version.sh@28 -- # version=24.9rc0 00:12:42.265 15:06:37 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:42.265 15:06:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:42.265 15:06:37 version -- app/version.sh@30 -- # py_version=24.9rc0 00:12:42.265 15:06:37 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:12:42.265 00:12:42.265 real 0m0.171s 00:12:42.265 user 0m0.088s 00:12:42.265 sys 0m0.125s 00:12:42.265 15:06:37 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:42.265 15:06:37 version -- common/autotest_common.sh@10 -- # set +x 00:12:42.265 ************************************ 00:12:42.265 END TEST version 00:12:42.265 ************************************ 
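The version test above builds 24.9rc0 purely from include/spdk/version.h and then checks it against what the installed python package reports. A sketch of the same header parsing, assuming the repo path from this log and that version.h separates each macro name from its value with a tab (which is what the cut -f2 in the trace relies on):

# hedged sketch of version.sh's get_header_version steps
HDR=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')
patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')
echo "header: ${major}.${minor} patch=${patch} suffix=${suffix}"
python3 -c 'import spdk; print("package:", spdk.__version__)'   # needs PYTHONPATH set as in the trace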
00:12:42.523 15:06:37 -- common/autotest_common.sh@1142 -- # return 0 00:12:42.523 15:06:37 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:12:42.523 15:06:37 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:42.523 15:06:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:42.523 15:06:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:42.523 15:06:37 -- common/autotest_common.sh@10 -- # set +x 00:12:42.523 ************************************ 00:12:42.523 START TEST blockdev_general 00:12:42.523 ************************************ 00:12:42.523 15:06:37 blockdev_general -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:42.523 * Looking for test storage... 00:12:42.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:42.523 15:06:37 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=84420 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 84420 00:12:42.523 15:06:37 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 84420 ']' 00:12:42.523 15:06:37 blockdev_general -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:42.523 15:06:37 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.523 15:06:37 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.523 15:06:37 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.523 15:06:37 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.523 15:06:37 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:42.523 [2024-07-23 15:06:37.895157] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:12:42.523 [2024-07-23 15:06:37.895345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84420 ] 00:12:42.781 [2024-07-23 15:06:38.049228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.781 [2024-07-23 15:06:38.095841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.716 15:06:38 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.716 15:06:38 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:12:43.716 15:06:38 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:43.716 15:06:38 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:12:43.716 15:06:38 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:12:43.716 15:06:38 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.716 15:06:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:43.716 [2024-07-23 15:06:39.057519] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:43.716 [2024-07-23 15:06:39.057600] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:43.716 00:12:43.716 [2024-07-23 15:06:39.065457] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:43.716 [2024-07-23 15:06:39.065510] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:43.716 00:12:43.716 Malloc0 00:12:43.716 Malloc1 00:12:43.716 Malloc2 00:12:43.716 Malloc3 00:12:43.974 Malloc4 00:12:43.974 Malloc5 00:12:43.974 Malloc6 00:12:43.974 Malloc7 00:12:43.974 Malloc8 00:12:43.974 Malloc9 00:12:43.974 [2024-07-23 15:06:39.220100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:43.974 [2024-07-23 15:06:39.220175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.974 [2024-07-23 15:06:39.220204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:12:43.974 [2024-07-23 15:06:39.220225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.974 [2024-07-23 15:06:39.222702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.974 [2024-07-23 15:06:39.222753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:43.974 TestPT 00:12:43.974 15:06:39 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
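At this point setup_bdev_conf has created Malloc0 through Malloc9, the Malloc1/Malloc2 split disks, and a passthru vbdev named TestPT claiming Malloc3 (the vbdev_passthru NOTICE lines above); the dd/bdev_aio_create step that follows adds an AIO0 bdev on a 10 MB file. A minimal sketch of building just the TestPT part of that stack by hand; the rpc.py method names and arguments here are assumptions based on stock SPDK RPCs rather than commands shown verbatim in this log:

# hedged sketch -- assumes a running spdk_tgt whose framework init has completed
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_malloc_create -b Malloc3 32 512        # 32 MB malloc bdev, 512-byte blocks (sizes illustrative)
$RPC bdev_passthru_create -b Malloc3 -p TestPT   # passthru vbdev claiming Malloc3, as in the NOTICE lines above
$RPC bdev_get_bdevs -b TestPT                    # dump the resulting bdev JSON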
00:12:43.974 15:06:39 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:43.974 5000+0 records in 00:12:43.974 5000+0 records out 00:12:43.974 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0176041 s, 582 MB/s 00:12:43.974 15:06:39 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:43.975 15:06:39 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.975 15:06:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:43.975 AIO0 00:12:43.975 15:06:39 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.975 15:06:39 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:43.975 15:06:39 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.975 15:06:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:43.975 15:06:39 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.975 15:06:39 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:12:43.975 15:06:39 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:43.975 15:06:39 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.975 15:06:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:43.975 15:06:39 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.975 15:06:39 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:43.975 15:06:39 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.975 15:06:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:44.232 15:06:39 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.232 15:06:39 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:44.232 15:06:39 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.232 15:06:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:44.232 15:06:39 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.232 15:06:39 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:44.232 15:06:39 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:44.232 15:06:39 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:44.232 15:06:39 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.233 15:06:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:44.492 15:06:39 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.492 15:06:39 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:44.492 15:06:39 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:44.494 15:06:39 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "6ed08f83-2e58-423b-a676-158b40f961f8"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6ed08f83-2e58-423b-a676-158b40f961f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' 
' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "215a85f1-74fc-5ad9-a777-d05e0fdd2f8b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "215a85f1-74fc-5ad9-a777-d05e0fdd2f8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "97d9bb37-1f40-5356-a537-b4a01220a3ed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "97d9bb37-1f40-5356-a537-b4a01220a3ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "316e264f-b67a-5baa-9965-265906faddf6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "316e264f-b67a-5baa-9965-265906faddf6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": 
[' ' "d3133654-23c3-5255-bf7b-0995d3031b0f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d3133654-23c3-5255-bf7b-0995d3031b0f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "4ad9760a-6d3f-57e4-80b8-2edcd71783f1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4ad9760a-6d3f-57e4-80b8-2edcd71783f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a7faed77-fb3a-541e-841f-a35ca1ac388f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a7faed77-fb3a-541e-841f-a35ca1ac388f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "0fcd7496-ac16-57d8-96c1-6e9697dc2882"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0fcd7496-ac16-57d8-96c1-6e9697dc2882",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8950dae5-cf9f-5268-8947-c59b0cfc1602"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8950dae5-cf9f-5268-8947-c59b0cfc1602",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "3eeec138-67da-5ee0-8c4e-35fe91dd29fd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3eeec138-67da-5ee0-8c4e-35fe91dd29fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "9ae30072-323c-51d1-8ba5-0ae4dedb7b26"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9ae30072-323c-51d1-8ba5-0ae4dedb7b26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "5c754d55-26fa-5e37-abcb-304690d3090d"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5c754d55-26fa-5e37-abcb-304690d3090d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "43c1d862-664d-478d-87bc-76b4efdda2d1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "43c1d862-664d-478d-87bc-76b4efdda2d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "43c1d862-664d-478d-87bc-76b4efdda2d1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a92df594-793b-4cfa-9273-d36441be2288",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "666e2b5c-aac5-42ce-9325-04ccfe8b4f29",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "87cd38ce-d4fa-4920-8fc2-4d38e647fcb0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "87cd38ce-d4fa-4920-8fc2-4d38e647fcb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "87cd38ce-d4fa-4920-8fc2-4d38e647fcb0",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "f0c167e5-fa78-474a-8a36-343d929d9bdc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ac2d10b2-5f05-492c-9b54-705b025ed513",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b450309b-2582-474d-8132-0ef02c8fa549"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b450309b-2582-474d-8132-0ef02c8fa549",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b450309b-2582-474d-8132-0ef02c8fa549",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "d7de5adc-3867-4fd1-9665-34d465fcc610",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b2a403d6-81b2-486c-a6dd-d478093bba12",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "a46a4e7c-37b0-4dca-bfc8-85c0eb534458"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "a46a4e7c-37b0-4dca-bfc8-85c0eb534458",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": 
false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:44.494 15:06:39 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:44.494 15:06:39 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:12:44.494 15:06:39 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:44.494 15:06:39 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 84420 00:12:44.494 15:06:39 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 84420 ']' 00:12:44.494 15:06:39 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 84420 00:12:44.494 15:06:39 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:12:44.494 15:06:39 blockdev_general -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:44.494 15:06:39 blockdev_general -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84420 00:12:44.494 15:06:39 blockdev_general -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:44.494 15:06:39 blockdev_general -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:44.494 killing process with pid 84420 00:12:44.494 15:06:39 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84420' 00:12:44.494 15:06:39 blockdev_general -- common/autotest_common.sh@967 -- # kill 84420 00:12:44.494 15:06:39 blockdev_general -- common/autotest_common.sh@972 -- # wait 84420 00:12:45.061 15:06:40 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:45.061 15:06:40 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:45.061 15:06:40 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:45.061 15:06:40 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.061 15:06:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 ************************************ 00:12:45.061 START TEST bdev_hello_world 00:12:45.061 ************************************ 00:12:45.061 15:06:40 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:45.061 [2024-07-23 15:06:40.384113] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:12:45.061 [2024-07-23 15:06:40.384326] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84472 ] 00:12:45.319 [2024-07-23 15:06:40.537091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.319 [2024-07-23 15:06:40.584382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.319 [2024-07-23 15:06:40.711172] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:45.319 [2024-07-23 15:06:40.711262] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:45.319 [2024-07-23 15:06:40.719090] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:45.319 [2024-07-23 15:06:40.719141] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:45.319 [2024-07-23 15:06:40.727105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:45.319 [2024-07-23 15:06:40.727153] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:45.319 [2024-07-23 15:06:40.727170] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:45.578 [2024-07-23 15:06:40.811018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:45.578 [2024-07-23 15:06:40.811099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.578 [2024-07-23 15:06:40.811122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:12:45.578 [2024-07-23 15:06:40.811135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.578 [2024-07-23 15:06:40.813540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.578 [2024-07-23 15:06:40.813588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:45.578 [2024-07-23 15:06:40.953489] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:45.578 [2024-07-23 15:06:40.953554] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:45.578 [2024-07-23 15:06:40.953633] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:45.578 [2024-07-23 15:06:40.953688] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:45.578 [2024-07-23 15:06:40.953747] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:45.578 [2024-07-23 15:06:40.953765] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:45.578 [2024-07-23 15:06:40.953825] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
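The JSON objects dumped above are the per-bdev records the test gathers into bdevs_name before selecting Malloc0 as the hello_world target. A minimal sketch of reproducing that listing and the hello_bdev run by hand, assuming the SPDK tree at /home/vagrant/spdk_repo/spdk from the log and a target already serving RPC on the default socket (jq is an assumption here, not something the test itself uses):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Dump every registered bdev as JSON (the same shape as the listing above) and keep only the names.
    "$SPDK/scripts/rpc.py" bdev_get_bdevs | jq -r '.[].name'
    # Point the hello_bdev example at the generated bdev config and the Malloc0 bdev, as blockdev.sh
    # does above; on success it logs "Read string from bdev : Hello World!".
    "$SPDK/build/examples/hello_bdev" --json "$SPDK/test/bdev/bdev.json" -b Malloc0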
00:12:45.578 00:12:45.578 [2024-07-23 15:06:40.953853] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:46.185 00:12:46.185 real 0m1.020s 00:12:46.185 user 0m0.576s 00:12:46.185 sys 0m0.320s 00:12:46.185 15:06:41 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:46.185 15:06:41 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:46.185 ************************************ 00:12:46.185 END TEST bdev_hello_world 00:12:46.185 ************************************ 00:12:46.185 15:06:41 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:46.185 15:06:41 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:46.185 15:06:41 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:46.185 15:06:41 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.185 15:06:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:46.185 ************************************ 00:12:46.185 START TEST bdev_bounds 00:12:46.185 ************************************ 00:12:46.185 15:06:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:12:46.185 15:06:41 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=84503 00:12:46.185 15:06:41 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:46.185 Process bdevio pid: 84503 00:12:46.185 15:06:41 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 84503' 00:12:46.185 15:06:41 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 84503 00:12:46.185 15:06:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 84503 ']' 00:12:46.185 15:06:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.185 15:06:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.185 15:06:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.185 15:06:41 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:46.185 15:06:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.185 15:06:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:46.185 [2024-07-23 15:06:41.460065] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:12:46.185 [2024-07-23 15:06:41.460267] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84503 ] 00:12:46.185 [2024-07-23 15:06:41.610927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:46.444 [2024-07-23 15:06:41.658051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.444 [2024-07-23 15:06:41.657984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.444 [2024-07-23 15:06:41.658175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.444 [2024-07-23 15:06:41.785765] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:46.444 [2024-07-23 15:06:41.785861] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:46.444 [2024-07-23 15:06:41.793700] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:46.444 [2024-07-23 15:06:41.793747] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:46.444 [2024-07-23 15:06:41.801714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:46.444 [2024-07-23 15:06:41.801759] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:46.444 [2024-07-23 15:06:41.801806] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:46.702 [2024-07-23 15:06:41.888732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:46.702 [2024-07-23 15:06:41.888825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.702 [2024-07-23 15:06:41.888855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:12:46.702 [2024-07-23 15:06:41.888868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.702 [2024-07-23 15:06:41.891650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.702 [2024-07-23 15:06:41.891692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:46.960 15:06:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.960 15:06:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:12:46.960 15:06:42 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:47.218 I/O targets: 00:12:47.218 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:47.218 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:47.218 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:47.218 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:47.218 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:47.218 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:47.218 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:47.218 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:47.218 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:47.218 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:47.218 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:47.218 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:47.218 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:47.218 concat0: 131072 blocks of 512 bytes (64 MiB) 
00:12:47.218 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:47.218 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:12:47.218 00:12:47.218 00:12:47.218 CUnit - A unit testing framework for C - Version 2.1-3 00:12:47.218 http://cunit.sourceforge.net/ 00:12:47.218 00:12:47.218 00:12:47.218 Suite: bdevio tests on: AIO0 00:12:47.218 Test: blockdev write read block ...passed 00:12:47.218 Test: blockdev write zeroes read block ...passed 00:12:47.218 Test: blockdev write zeroes read no split ...passed 00:12:47.218 Test: blockdev write zeroes read split ...passed 00:12:47.218 Test: blockdev write zeroes read split partial ...passed 00:12:47.218 Test: blockdev reset ...passed 00:12:47.218 Test: blockdev write read 8 blocks ...passed 00:12:47.218 Test: blockdev write read size > 128k ...passed 00:12:47.218 Test: blockdev write read invalid size ...passed 00:12:47.218 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.218 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.218 Test: blockdev write read max offset ...passed 00:12:47.218 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.218 Test: blockdev writev readv 8 blocks ...passed 00:12:47.218 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.218 Test: blockdev writev readv block ...passed 00:12:47.218 Test: blockdev writev readv size > 128k ...passed 00:12:47.218 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.218 Test: blockdev comparev and writev ...passed 00:12:47.218 Test: blockdev nvme passthru rw ...passed 00:12:47.218 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.218 Test: blockdev nvme admin passthru ...passed 00:12:47.218 Test: blockdev copy ...passed 00:12:47.218 Suite: bdevio tests on: raid1 00:12:47.219 Test: blockdev write read block ...passed 00:12:47.219 Test: blockdev write zeroes read block ...passed 00:12:47.219 Test: blockdev write zeroes read no split ...passed 00:12:47.219 Test: blockdev write zeroes read split ...passed 00:12:47.219 Test: blockdev write zeroes read split partial ...passed 00:12:47.219 Test: blockdev reset ...passed 00:12:47.219 Test: blockdev write read 8 blocks ...passed 00:12:47.219 Test: blockdev write read size > 128k ...passed 00:12:47.219 Test: blockdev write read invalid size ...passed 00:12:47.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.219 Test: blockdev write read max offset ...passed 00:12:47.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.219 Test: blockdev writev readv 8 blocks ...passed 00:12:47.219 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.219 Test: blockdev writev readv block ...passed 00:12:47.219 Test: blockdev writev readv size > 128k ...passed 00:12:47.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.219 Test: blockdev comparev and writev ...passed 00:12:47.219 Test: blockdev nvme passthru rw ...passed 00:12:47.219 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.219 Test: blockdev nvme admin passthru ...passed 00:12:47.219 Test: blockdev copy ...passed 00:12:47.219 Suite: bdevio tests on: concat0 00:12:47.219 Test: blockdev write read block ...passed 00:12:47.219 Test: blockdev write zeroes read block ...passed 00:12:47.219 Test: blockdev write zeroes read no split ...passed 00:12:47.219 Test: blockdev write zeroes read split 
...passed 00:12:47.219 Test: blockdev write zeroes read split partial ...passed 00:12:47.219 Test: blockdev reset ...passed 00:12:47.219 Test: blockdev write read 8 blocks ...passed 00:12:47.219 Test: blockdev write read size > 128k ...passed 00:12:47.219 Test: blockdev write read invalid size ...passed 00:12:47.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.219 Test: blockdev write read max offset ...passed 00:12:47.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.219 Test: blockdev writev readv 8 blocks ...passed 00:12:47.219 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.219 Test: blockdev writev readv block ...passed 00:12:47.219 Test: blockdev writev readv size > 128k ...passed 00:12:47.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.219 Test: blockdev comparev and writev ...passed 00:12:47.219 Test: blockdev nvme passthru rw ...passed 00:12:47.219 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.219 Test: blockdev nvme admin passthru ...passed 00:12:47.219 Test: blockdev copy ...passed 00:12:47.219 Suite: bdevio tests on: raid0 00:12:47.219 Test: blockdev write read block ...passed 00:12:47.219 Test: blockdev write zeroes read block ...passed 00:12:47.219 Test: blockdev write zeroes read no split ...passed 00:12:47.219 Test: blockdev write zeroes read split ...passed 00:12:47.219 Test: blockdev write zeroes read split partial ...passed 00:12:47.219 Test: blockdev reset ...passed 00:12:47.219 Test: blockdev write read 8 blocks ...passed 00:12:47.219 Test: blockdev write read size > 128k ...passed 00:12:47.219 Test: blockdev write read invalid size ...passed 00:12:47.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.219 Test: blockdev write read max offset ...passed 00:12:47.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.219 Test: blockdev writev readv 8 blocks ...passed 00:12:47.219 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.219 Test: blockdev writev readv block ...passed 00:12:47.219 Test: blockdev writev readv size > 128k ...passed 00:12:47.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.219 Test: blockdev comparev and writev ...passed 00:12:47.219 Test: blockdev nvme passthru rw ...passed 00:12:47.219 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.219 Test: blockdev nvme admin passthru ...passed 00:12:47.219 Test: blockdev copy ...passed 00:12:47.219 Suite: bdevio tests on: TestPT 00:12:47.219 Test: blockdev write read block ...passed 00:12:47.219 Test: blockdev write zeroes read block ...passed 00:12:47.219 Test: blockdev write zeroes read no split ...passed 00:12:47.219 Test: blockdev write zeroes read split ...passed 00:12:47.219 Test: blockdev write zeroes read split partial ...passed 00:12:47.219 Test: blockdev reset ...passed 00:12:47.219 Test: blockdev write read 8 blocks ...passed 00:12:47.219 Test: blockdev write read size > 128k ...passed 00:12:47.219 Test: blockdev write read invalid size ...passed 00:12:47.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.219 Test: blockdev write read max offset ...passed 00:12:47.219 Test: 
blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.219 Test: blockdev writev readv 8 blocks ...passed 00:12:47.219 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.219 Test: blockdev writev readv block ...passed 00:12:47.219 Test: blockdev writev readv size > 128k ...passed 00:12:47.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.219 Test: blockdev comparev and writev ...passed 00:12:47.219 Test: blockdev nvme passthru rw ...passed 00:12:47.219 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.219 Test: blockdev nvme admin passthru ...passed 00:12:47.219 Test: blockdev copy ...passed 00:12:47.219 Suite: bdevio tests on: Malloc2p7 00:12:47.219 Test: blockdev write read block ...passed 00:12:47.219 Test: blockdev write zeroes read block ...passed 00:12:47.219 Test: blockdev write zeroes read no split ...passed 00:12:47.219 Test: blockdev write zeroes read split ...passed 00:12:47.219 Test: blockdev write zeroes read split partial ...passed 00:12:47.219 Test: blockdev reset ...passed 00:12:47.219 Test: blockdev write read 8 blocks ...passed 00:12:47.219 Test: blockdev write read size > 128k ...passed 00:12:47.219 Test: blockdev write read invalid size ...passed 00:12:47.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.219 Test: blockdev write read max offset ...passed 00:12:47.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.219 Test: blockdev writev readv 8 blocks ...passed 00:12:47.219 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.219 Test: blockdev writev readv block ...passed 00:12:47.219 Test: blockdev writev readv size > 128k ...passed 00:12:47.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.219 Test: blockdev comparev and writev ...passed 00:12:47.219 Test: blockdev nvme passthru rw ...passed 00:12:47.219 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.219 Test: blockdev nvme admin passthru ...passed 00:12:47.219 Test: blockdev copy ...passed 00:12:47.219 Suite: bdevio tests on: Malloc2p6 00:12:47.219 Test: blockdev write read block ...passed 00:12:47.219 Test: blockdev write zeroes read block ...passed 00:12:47.219 Test: blockdev write zeroes read no split ...passed 00:12:47.219 Test: blockdev write zeroes read split ...passed 00:12:47.219 Test: blockdev write zeroes read split partial ...passed 00:12:47.219 Test: blockdev reset ...passed 00:12:47.219 Test: blockdev write read 8 blocks ...passed 00:12:47.219 Test: blockdev write read size > 128k ...passed 00:12:47.219 Test: blockdev write read invalid size ...passed 00:12:47.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.219 Test: blockdev write read max offset ...passed 00:12:47.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.219 Test: blockdev writev readv 8 blocks ...passed 00:12:47.219 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.219 Test: blockdev writev readv block ...passed 00:12:47.219 Test: blockdev writev readv size > 128k ...passed 00:12:47.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.219 Test: blockdev comparev and writev ...passed 00:12:47.219 Test: blockdev nvme passthru rw ...passed 00:12:47.219 Test: blockdev nvme passthru vendor 
specific ...passed 00:12:47.219 Test: blockdev nvme admin passthru ...passed 00:12:47.219 Test: blockdev copy ...passed 00:12:47.219 Suite: bdevio tests on: Malloc2p5 00:12:47.219 Test: blockdev write read block ...passed 00:12:47.219 Test: blockdev write zeroes read block ...passed 00:12:47.219 Test: blockdev write zeroes read no split ...passed 00:12:47.479 Test: blockdev write zeroes read split ...passed 00:12:47.479 Test: blockdev write zeroes read split partial ...passed 00:12:47.479 Test: blockdev reset ...passed 00:12:47.479 Test: blockdev write read 8 blocks ...passed 00:12:47.479 Test: blockdev write read size > 128k ...passed 00:12:47.479 Test: blockdev write read invalid size ...passed 00:12:47.479 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.479 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.479 Test: blockdev write read max offset ...passed 00:12:47.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.479 Test: blockdev writev readv 8 blocks ...passed 00:12:47.479 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.479 Test: blockdev writev readv block ...passed 00:12:47.479 Test: blockdev writev readv size > 128k ...passed 00:12:47.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.479 Test: blockdev comparev and writev ...passed 00:12:47.479 Test: blockdev nvme passthru rw ...passed 00:12:47.479 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.479 Test: blockdev nvme admin passthru ...passed 00:12:47.479 Test: blockdev copy ...passed 00:12:47.479 Suite: bdevio tests on: Malloc2p4 00:12:47.479 Test: blockdev write read block ...passed 00:12:47.479 Test: blockdev write zeroes read block ...passed 00:12:47.479 Test: blockdev write zeroes read no split ...passed 00:12:47.479 Test: blockdev write zeroes read split ...passed 00:12:47.479 Test: blockdev write zeroes read split partial ...passed 00:12:47.479 Test: blockdev reset ...passed 00:12:47.479 Test: blockdev write read 8 blocks ...passed 00:12:47.479 Test: blockdev write read size > 128k ...passed 00:12:47.479 Test: blockdev write read invalid size ...passed 00:12:47.479 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.479 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.479 Test: blockdev write read max offset ...passed 00:12:47.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.479 Test: blockdev writev readv 8 blocks ...passed 00:12:47.479 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.479 Test: blockdev writev readv block ...passed 00:12:47.479 Test: blockdev writev readv size > 128k ...passed 00:12:47.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.479 Test: blockdev comparev and writev ...passed 00:12:47.479 Test: blockdev nvme passthru rw ...passed 00:12:47.479 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.479 Test: blockdev nvme admin passthru ...passed 00:12:47.479 Test: blockdev copy ...passed 00:12:47.479 Suite: bdevio tests on: Malloc2p3 00:12:47.479 Test: blockdev write read block ...passed 00:12:47.479 Test: blockdev write zeroes read block ...passed 00:12:47.479 Test: blockdev write zeroes read no split ...passed 00:12:47.479 Test: blockdev write zeroes read split ...passed 00:12:47.479 Test: blockdev write zeroes read split partial ...passed 00:12:47.479 Test: blockdev reset ...passed 00:12:47.479 Test: 
blockdev write read 8 blocks ...passed 00:12:47.479 Test: blockdev write read size > 128k ...passed 00:12:47.479 Test: blockdev write read invalid size ...passed 00:12:47.479 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.479 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.479 Test: blockdev write read max offset ...passed 00:12:47.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.479 Test: blockdev writev readv 8 blocks ...passed 00:12:47.479 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.479 Test: blockdev writev readv block ...passed 00:12:47.479 Test: blockdev writev readv size > 128k ...passed 00:12:47.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.479 Test: blockdev comparev and writev ...passed 00:12:47.479 Test: blockdev nvme passthru rw ...passed 00:12:47.479 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.479 Test: blockdev nvme admin passthru ...passed 00:12:47.479 Test: blockdev copy ...passed 00:12:47.479 Suite: bdevio tests on: Malloc2p2 00:12:47.479 Test: blockdev write read block ...passed 00:12:47.479 Test: blockdev write zeroes read block ...passed 00:12:47.479 Test: blockdev write zeroes read no split ...passed 00:12:47.479 Test: blockdev write zeroes read split ...passed 00:12:47.479 Test: blockdev write zeroes read split partial ...passed 00:12:47.479 Test: blockdev reset ...passed 00:12:47.479 Test: blockdev write read 8 blocks ...passed 00:12:47.479 Test: blockdev write read size > 128k ...passed 00:12:47.479 Test: blockdev write read invalid size ...passed 00:12:47.479 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.479 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.479 Test: blockdev write read max offset ...passed 00:12:47.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.479 Test: blockdev writev readv 8 blocks ...passed 00:12:47.479 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.479 Test: blockdev writev readv block ...passed 00:12:47.479 Test: blockdev writev readv size > 128k ...passed 00:12:47.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.479 Test: blockdev comparev and writev ...passed 00:12:47.479 Test: blockdev nvme passthru rw ...passed 00:12:47.479 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.479 Test: blockdev nvme admin passthru ...passed 00:12:47.479 Test: blockdev copy ...passed 00:12:47.479 Suite: bdevio tests on: Malloc2p1 00:12:47.479 Test: blockdev write read block ...passed 00:12:47.479 Test: blockdev write zeroes read block ...passed 00:12:47.479 Test: blockdev write zeroes read no split ...passed 00:12:47.479 Test: blockdev write zeroes read split ...passed 00:12:47.479 Test: blockdev write zeroes read split partial ...passed 00:12:47.479 Test: blockdev reset ...passed 00:12:47.479 Test: blockdev write read 8 blocks ...passed 00:12:47.479 Test: blockdev write read size > 128k ...passed 00:12:47.479 Test: blockdev write read invalid size ...passed 00:12:47.479 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.479 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.479 Test: blockdev write read max offset ...passed 00:12:47.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.479 Test: blockdev writev readv 8 blocks ...passed 00:12:47.479 
Test: blockdev writev readv 30 x 1block ...passed 00:12:47.479 Test: blockdev writev readv block ...passed 00:12:47.479 Test: blockdev writev readv size > 128k ...passed 00:12:47.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.479 Test: blockdev comparev and writev ...passed 00:12:47.479 Test: blockdev nvme passthru rw ...passed 00:12:47.479 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.479 Test: blockdev nvme admin passthru ...passed 00:12:47.479 Test: blockdev copy ...passed 00:12:47.479 Suite: bdevio tests on: Malloc2p0 00:12:47.479 Test: blockdev write read block ...passed 00:12:47.479 Test: blockdev write zeroes read block ...passed 00:12:47.479 Test: blockdev write zeroes read no split ...passed 00:12:47.479 Test: blockdev write zeroes read split ...passed 00:12:47.479 Test: blockdev write zeroes read split partial ...passed 00:12:47.479 Test: blockdev reset ...passed 00:12:47.479 Test: blockdev write read 8 blocks ...passed 00:12:47.479 Test: blockdev write read size > 128k ...passed 00:12:47.479 Test: blockdev write read invalid size ...passed 00:12:47.479 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.479 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.479 Test: blockdev write read max offset ...passed 00:12:47.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.479 Test: blockdev writev readv 8 blocks ...passed 00:12:47.479 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.479 Test: blockdev writev readv block ...passed 00:12:47.479 Test: blockdev writev readv size > 128k ...passed 00:12:47.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.479 Test: blockdev comparev and writev ...passed 00:12:47.479 Test: blockdev nvme passthru rw ...passed 00:12:47.479 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.479 Test: blockdev nvme admin passthru ...passed 00:12:47.479 Test: blockdev copy ...passed 00:12:47.479 Suite: bdevio tests on: Malloc1p1 00:12:47.479 Test: blockdev write read block ...passed 00:12:47.479 Test: blockdev write zeroes read block ...passed 00:12:47.479 Test: blockdev write zeroes read no split ...passed 00:12:47.479 Test: blockdev write zeroes read split ...passed 00:12:47.479 Test: blockdev write zeroes read split partial ...passed 00:12:47.479 Test: blockdev reset ...passed 00:12:47.479 Test: blockdev write read 8 blocks ...passed 00:12:47.479 Test: blockdev write read size > 128k ...passed 00:12:47.479 Test: blockdev write read invalid size ...passed 00:12:47.479 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.479 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.479 Test: blockdev write read max offset ...passed 00:12:47.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.479 Test: blockdev writev readv 8 blocks ...passed 00:12:47.479 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.479 Test: blockdev writev readv block ...passed 00:12:47.479 Test: blockdev writev readv size > 128k ...passed 00:12:47.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.479 Test: blockdev comparev and writev ...passed 00:12:47.479 Test: blockdev nvme passthru rw ...passed 00:12:47.479 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.479 Test: blockdev nvme admin passthru ...passed 00:12:47.479 Test: blockdev copy ...passed 00:12:47.479 Suite: 
bdevio tests on: Malloc1p0 00:12:47.479 Test: blockdev write read block ...passed 00:12:47.479 Test: blockdev write zeroes read block ...passed 00:12:47.479 Test: blockdev write zeroes read no split ...passed 00:12:47.479 Test: blockdev write zeroes read split ...passed 00:12:47.479 Test: blockdev write zeroes read split partial ...passed 00:12:47.479 Test: blockdev reset ...passed 00:12:47.479 Test: blockdev write read 8 blocks ...passed 00:12:47.480 Test: blockdev write read size > 128k ...passed 00:12:47.480 Test: blockdev write read invalid size ...passed 00:12:47.480 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.480 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.480 Test: blockdev write read max offset ...passed 00:12:47.480 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.480 Test: blockdev writev readv 8 blocks ...passed 00:12:47.480 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.480 Test: blockdev writev readv block ...passed 00:12:47.480 Test: blockdev writev readv size > 128k ...passed 00:12:47.480 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.480 Test: blockdev comparev and writev ...passed 00:12:47.480 Test: blockdev nvme passthru rw ...passed 00:12:47.480 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.480 Test: blockdev nvme admin passthru ...passed 00:12:47.480 Test: blockdev copy ...passed 00:12:47.480 Suite: bdevio tests on: Malloc0 00:12:47.480 Test: blockdev write read block ...passed 00:12:47.480 Test: blockdev write zeroes read block ...passed 00:12:47.480 Test: blockdev write zeroes read no split ...passed 00:12:47.480 Test: blockdev write zeroes read split ...passed 00:12:47.480 Test: blockdev write zeroes read split partial ...passed 00:12:47.480 Test: blockdev reset ...passed 00:12:47.480 Test: blockdev write read 8 blocks ...passed 00:12:47.480 Test: blockdev write read size > 128k ...passed 00:12:47.480 Test: blockdev write read invalid size ...passed 00:12:47.480 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.480 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.480 Test: blockdev write read max offset ...passed 00:12:47.480 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.480 Test: blockdev writev readv 8 blocks ...passed 00:12:47.480 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.480 Test: blockdev writev readv block ...passed 00:12:47.480 Test: blockdev writev readv size > 128k ...passed 00:12:47.480 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:47.480 Test: blockdev comparev and writev ...passed 00:12:47.480 Test: blockdev nvme passthru rw ...passed 00:12:47.480 Test: blockdev nvme passthru vendor specific ...passed 00:12:47.480 Test: blockdev nvme admin passthru ...passed 00:12:47.480 Test: blockdev copy ...passed 00:12:47.480 00:12:47.480 Run Summary: Type Total Ran Passed Failed Inactive 00:12:47.480 suites 16 16 n/a 0 0 00:12:47.480 tests 368 368 368 0 0 00:12:47.480 asserts 2224 2224 2224 0 n/a 00:12:47.480 00:12:47.480 Elapsed time = 0.704 seconds 00:12:47.480 0 00:12:47.480 15:06:42 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 84503 00:12:47.480 15:06:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 84503 ']' 00:12:47.480 15:06:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 84503 
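The Run Summary above (16 suites, 368 tests, 0 failures in roughly 0.7 seconds) comes from the CUnit harness inside bdevio; each suite repeats the same write/read, reset, writev/readv and passthru checks against one of the I/O targets listed earlier. A rough sketch of replaying just this bounds check outside the wrapper script, reusing the bdevio server and tests.py commands visible in the log (paths and flags are copied from it; relying on the default RPC socket is an assumption):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start bdevio as an RPC-driven server with the same flags and bdev config used above.
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
    bdevio_pid=$!
    # blockdev.sh waits for the RPC socket (waitforlisten) before this step; once the server is
    # listening, ask it to run the per-bdev suites, then shut it down.
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"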
00:12:47.480 15:06:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:12:47.480 15:06:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:47.480 15:06:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84503 00:12:47.480 15:06:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:47.480 15:06:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:47.480 killing process with pid 84503 00:12:47.480 15:06:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84503' 00:12:47.480 15:06:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 84503 00:12:47.480 15:06:42 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 84503 00:12:47.738 15:06:43 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:47.738 00:12:47.738 real 0m1.761s 00:12:47.738 user 0m4.180s 00:12:47.738 sys 0m0.546s 00:12:47.738 15:06:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:47.738 15:06:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:47.738 ************************************ 00:12:47.738 END TEST bdev_bounds 00:12:47.738 ************************************ 00:12:47.996 15:06:43 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:47.996 15:06:43 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:47.996 15:06:43 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:47.996 15:06:43 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:47.996 15:06:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:47.996 ************************************ 00:12:47.996 START TEST bdev_nbd 00:12:47.996 ************************************ 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:47.996 15:06:43 
blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=84559 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 84559 /var/tmp/spdk-nbd.sock 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 84559 ']' 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:47.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:47.996 15:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:47.996 [2024-07-23 15:06:43.291039] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:12:47.996 [2024-07-23 15:06:43.291223] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.254 [2024-07-23 15:06:43.445401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.254 [2024-07-23 15:06:43.495090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.254 [2024-07-23 15:06:43.622066] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:48.254 [2024-07-23 15:06:43.622143] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:48.254 [2024-07-23 15:06:43.630011] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:48.254 [2024-07-23 15:06:43.630056] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:48.254 [2024-07-23 15:06:43.638048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:48.254 [2024-07-23 15:06:43.638094] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:48.254 [2024-07-23 15:06:43.638120] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:48.511 [2024-07-23 15:06:43.721655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:48.511 [2024-07-23 15:06:43.721727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.511 [2024-07-23 15:06:43.721753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:12:48.511 [2024-07-23 15:06:43.721765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.511 [2024-07-23 15:06:43.724238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.511 [2024-07-23 15:06:43.724276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # 
bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.077 1+0 records in 00:12:49.077 1+0 records out 00:12:49.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295125 s, 13.9 MB/s 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:49.077 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 
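(Reader's note, not part of the trace: the same per-device check repeats below for every nbd device. This is an illustrative reconstruction of that check, assuming a scratch file under /tmp rather than the repo path used in the log; the loop bounds and commands mirror the trace, but this is a sketch, not the exact autotest_common.sh code.)
  waitfornbd() {
      local nbd_name=$1 i
      # wait for the kernel to register the device in /proc/partitions
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # read one 4096-byte block with O_DIRECT and confirm data actually came back
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      local size
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }
A single O_DIRECT read is a cheap way to confirm the nbd node is really backed by the SPDK bdev (bypassing the page cache) before the heavier I/O verification runs.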
00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.336 1+0 records in 00:12:49.336 1+0 records out 00:12:49.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240779 s, 17.0 MB/s 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:49.336 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:49.594 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:49.594 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:49.594 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:49.594 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:49.594 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.595 1+0 records in 00:12:49.595 1+0 records out 00:12:49.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423868 s, 9.7 MB/s 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # size=4096 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:49.595 15:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.853 1+0 records in 00:12:49.853 1+0 records out 00:12:49.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028239 s, 14.5 MB/s 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:49.853 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 
-- # local i 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.111 1+0 records in 00:12:50.111 1+0 records out 00:12:50.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461649 s, 8.9 MB/s 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:50.111 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.370 1+0 records in 00:12:50.370 1+0 records out 00:12:50.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405513 s, 10.1 MB/s 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:50.370 15:06:45 
blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:50.370 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.628 1+0 records in 00:12:50.628 1+0 records out 00:12:50.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361781 s, 11.3 MB/s 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:50.628 15:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:50.904 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:50.904 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:50.904 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:50.904 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:12:50.904 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:50.904 15:06:46 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:50.904 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:50.904 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:12:50.904 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:50.904 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:50.905 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:50.905 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.905 1+0 records in 00:12:50.905 1+0 records out 00:12:50.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500283 s, 8.2 MB/s 00:12:50.905 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.905 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:50.905 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.905 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:50.905 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:50.905 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.905 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:50.905 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.163 1+0 records in 00:12:51.163 1+0 records out 00:12:51.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352722 s, 11.6 MB/s 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:12:51.163 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:51.164 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:51.164 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:51.164 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:12:51.423 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:51.423 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:51.423 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:51.423 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.423 1+0 records in 00:12:51.423 1+0 records out 00:12:51.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534918 s, 7.7 MB/s 00:12:51.423 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.423 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:51.423 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.423 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:51.423 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:51.423 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.423 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:51.423 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:51.682 15:06:46 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.682 1+0 records in 00:12:51.682 1+0 records out 00:12:51.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100907 s, 4.1 MB/s 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:51.682 15:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.682 1+0 records in 00:12:51.682 1+0 records out 00:12:51.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054839 s, 7.5 MB/s 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:51.682 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.941 15:06:47 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:51.941 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.941 1+0 records in 00:12:51.942 1+0 records out 00:12:51.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606862 s, 6.7 MB/s 00:12:51.942 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.942 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:51.942 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.942 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:51.942 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:51.942 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.942 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:51.942 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i 
<= 20 )) 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.201 1+0 records in 00:12:52.201 1+0 records out 00:12:52.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713753 s, 5.7 MB/s 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:52.201 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.460 1+0 records in 00:12:52.460 1+0 records out 00:12:52.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577536 s, 7.1 MB/s 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 
0 ']' 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:52.460 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.719 1+0 records in 00:12:52.719 1+0 records out 00:12:52.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00143947 s, 2.8 MB/s 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:52.719 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:52.720 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:52.720 15:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:52.720 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd0", 00:12:52.720 "bdev_name": "Malloc0" 00:12:52.720 }, 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd1", 00:12:52.720 "bdev_name": "Malloc1p0" 00:12:52.720 }, 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd2", 00:12:52.720 "bdev_name": "Malloc1p1" 00:12:52.720 }, 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd3", 00:12:52.720 "bdev_name": "Malloc2p0" 00:12:52.720 }, 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd4", 00:12:52.720 "bdev_name": "Malloc2p1" 00:12:52.720 }, 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd5", 00:12:52.720 "bdev_name": "Malloc2p2" 00:12:52.720 }, 00:12:52.720 { 
00:12:52.720 "nbd_device": "/dev/nbd6", 00:12:52.720 "bdev_name": "Malloc2p3" 00:12:52.720 }, 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd7", 00:12:52.720 "bdev_name": "Malloc2p4" 00:12:52.720 }, 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd8", 00:12:52.720 "bdev_name": "Malloc2p5" 00:12:52.720 }, 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd9", 00:12:52.720 "bdev_name": "Malloc2p6" 00:12:52.720 }, 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd10", 00:12:52.720 "bdev_name": "Malloc2p7" 00:12:52.720 }, 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd11", 00:12:52.720 "bdev_name": "TestPT" 00:12:52.720 }, 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd12", 00:12:52.720 "bdev_name": "raid0" 00:12:52.720 }, 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd13", 00:12:52.720 "bdev_name": "concat0" 00:12:52.720 }, 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd14", 00:12:52.720 "bdev_name": "raid1" 00:12:52.720 }, 00:12:52.720 { 00:12:52.720 "nbd_device": "/dev/nbd15", 00:12:52.720 "bdev_name": "AIO0" 00:12:52.720 } 00:12:52.720 ]' 00:12:52.720 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:52.979 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:52.979 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd0", 00:12:52.979 "bdev_name": "Malloc0" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd1", 00:12:52.979 "bdev_name": "Malloc1p0" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd2", 00:12:52.979 "bdev_name": "Malloc1p1" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd3", 00:12:52.979 "bdev_name": "Malloc2p0" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd4", 00:12:52.979 "bdev_name": "Malloc2p1" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd5", 00:12:52.979 "bdev_name": "Malloc2p2" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd6", 00:12:52.979 "bdev_name": "Malloc2p3" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd7", 00:12:52.979 "bdev_name": "Malloc2p4" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd8", 00:12:52.979 "bdev_name": "Malloc2p5" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd9", 00:12:52.979 "bdev_name": "Malloc2p6" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd10", 00:12:52.979 "bdev_name": "Malloc2p7" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd11", 00:12:52.979 "bdev_name": "TestPT" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd12", 00:12:52.979 "bdev_name": "raid0" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd13", 00:12:52.979 "bdev_name": "concat0" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd14", 00:12:52.979 "bdev_name": "raid1" 00:12:52.979 }, 00:12:52.979 { 00:12:52.979 "nbd_device": "/dev/nbd15", 00:12:52.979 "bdev_name": "AIO0" 00:12:52.979 } 00:12:52.979 ]' 00:12:52.979 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:52.979 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:12:52.979 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:12:52.979 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.979 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:52.979 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.979 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:53.238 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:53.238 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:53.238 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:53.238 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.238 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.238 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:53.238 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.238 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.238 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.239 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:53.498 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:53.498 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:53.498 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:53.498 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.498 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.498 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:53.498 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.498 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.498 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.498 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:53.757 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:53.757 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:53.757 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:53.757 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.757 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.757 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:53.757 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.757 15:06:48 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.757 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.757 15:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:53.757 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:53.757 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:53.757 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:53.757 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.757 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.757 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:53.757 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.757 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.757 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.757 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:54.016 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:54.016 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:54.016 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:54.016 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.016 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.016 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:54.016 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:54.016 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.016 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.016 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:54.276 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:54.276 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:54.276 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:54.276 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.276 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.276 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:54.276 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:54.276 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.276 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.276 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:54.535 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:54.535 
15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:54.535 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:54.535 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.535 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.535 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:54.535 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:54.535 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.535 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.535 15:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:54.793 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:54.793 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:54.793 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:54.793 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.793 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.793 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:54.793 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:54.793 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.793 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.793 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:55.052 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:55.052 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:55.052 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:55.052 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.052 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.052 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:55.052 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.052 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.052 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.052 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:55.311 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:55.311 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:55.311 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:55.311 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.311 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.311 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q 
-w nbd9 /proc/partitions 00:12:55.311 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.311 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.311 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.311 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:55.311 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:55.570 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:55.570 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:55.570 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.570 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.570 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:55.570 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.570 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.570 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.570 15:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.830 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:56.090 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:56.090 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:56.090 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:56.090 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.090 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.090 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:56.090 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.090 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.090 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.090 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:56.354 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:56.354 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:56.354 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:56.354 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.354 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.354 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:56.354 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.354 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.354 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.354 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:56.612 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:56.612 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:56.612 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:56.612 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.612 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.612 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:56.612 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.612 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.612 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:56.612 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.612 15:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:56.870 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:57.129 /dev/nbd0 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.129 1+0 records in 00:12:57.129 1+0 records out 00:12:57.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491695 s, 8.3 MB/s 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:57.129 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:57.388 /dev/nbd1 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:57.388 15:06:52 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.388 1+0 records in 00:12:57.388 1+0 records out 00:12:57.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027703 s, 14.8 MB/s 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:57.388 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:57.646 /dev/nbd10 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.646 1+0 records in 00:12:57.646 1+0 records out 00:12:57.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335524 s, 12.2 MB/s 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:57.646 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.647 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:57.647 15:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Malloc2p0 /dev/nbd11 00:12:57.905 /dev/nbd11 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.905 1+0 records in 00:12:57.905 1+0 records out 00:12:57.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325041 s, 12.6 MB/s 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:57.905 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:58.165 /dev/nbd12 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.165 1+0 records in 
00:12:58.165 1+0 records out 00:12:58.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319052 s, 12.8 MB/s 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:58.165 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:58.425 /dev/nbd13 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.425 1+0 records in 00:12:58.425 1+0 records out 00:12:58.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362986 s, 11.3 MB/s 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:58.425 15:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:58.683 /dev/nbd14 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 
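The waitfornbd helper traced here gates every nbd_start_disk call before the test proceeds. A minimal sketch of that readiness check, reconstructed from the trace, follows: the retry bound of 20, the scratch file path, and the dd/stat parameters come straight from the log, while the sleep interval is an assumption, since the captured trace does not show it.

    waitfornbd_sketch() {
        local nbd_name=$1 i size
        # Poll until the kernel lists the device in /proc/partitions.
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; not visible in the trace
        done
        # One direct 4 KiB read proves the export actually answers I/O.
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]   # a zero-byte read means the export is not usable yet
    }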
00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.683 1+0 records in 00:12:58.683 1+0 records out 00:12:58.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499962 s, 8.2 MB/s 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:58.683 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:58.942 /dev/nbd15 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.942 1+0 records in 00:12:58.942 1+0 records out 00:12:58.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483448 s, 8.5 MB/s 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
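For reference alongside the trace, this is a minimal sketch of the export cycle the section as a whole exercises: start a bdev on an NBD node over the RPC socket, count the active exports, write a 1 MiB random pattern, compare it back, and stop the export. The RPC method names (nbd_start_disk, nbd_get_disks, nbd_stop_disk), the socket path, the 1 MiB pattern, and the cmp invocation are taken from the log; only the shell variable names are illustrative.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    pattern=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest

    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0                  # export one bdev
    $rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd

    dd if=/dev/urandom of=$pattern bs=4096 count=256                # 1 MiB test pattern
    dd if=$pattern of=/dev/nbd0 bs=4096 count=256 oflag=direct      # write it to the export
    cmp -b -n 1M $pattern /dev/nbd0                                 # read back and compare
    rm $pattern

    $rpc -s $sock nbd_stop_disk /dev/nbd0                           # tear the export down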
00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:58.942 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:59.200 /dev/nbd2 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.200 1+0 records in 00:12:59.200 1+0 records out 00:12:59.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620233 s, 6.6 MB/s 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:59.200 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:59.459 /dev/nbd3 00:12:59.459 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:59.459 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:59.459 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:59.459 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:59.459 15:06:54 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:59.459 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:59.459 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:59.459 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:59.459 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:59.459 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:59.460 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.460 1+0 records in 00:12:59.460 1+0 records out 00:12:59.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562225 s, 7.3 MB/s 00:12:59.460 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.460 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:59.460 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.719 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:59.719 15:06:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:59.719 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.719 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:59.719 15:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:59.719 /dev/nbd4 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.719 1+0 records in 00:12:59.719 1+0 records out 00:12:59.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396417 s, 10.3 MB/s 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.719 15:06:55 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:59.719 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:59.979 /dev/nbd5 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.979 1+0 records in 00:12:59.979 1+0 records out 00:12:59.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441256 s, 9.3 MB/s 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:59.979 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:13:00.238 /dev/nbd6 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:13:00.238 
15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.238 1+0 records in 00:13:00.238 1+0 records out 00:13:00.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00073789 s, 5.6 MB/s 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:00.238 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:13:00.497 /dev/nbd7 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.497 1+0 records in 00:13:00.497 1+0 records out 00:13:00.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592032 s, 6.9 MB/s 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.497 15:06:55 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:00.497 15:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:13:00.757 /dev/nbd8 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.757 1+0 records in 00:13:00.757 1+0 records out 00:13:00.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554774 s, 7.4 MB/s 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:00.757 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:13:01.016 /dev/nbd9 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:01.016 
15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.016 1+0 records in 00:13:01.016 1+0 records out 00:13:01.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000983778 s, 4.2 MB/s 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.016 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd0", 00:13:01.276 "bdev_name": "Malloc0" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd1", 00:13:01.276 "bdev_name": "Malloc1p0" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd10", 00:13:01.276 "bdev_name": "Malloc1p1" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd11", 00:13:01.276 "bdev_name": "Malloc2p0" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd12", 00:13:01.276 "bdev_name": "Malloc2p1" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd13", 00:13:01.276 "bdev_name": "Malloc2p2" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd14", 00:13:01.276 "bdev_name": "Malloc2p3" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd15", 00:13:01.276 "bdev_name": "Malloc2p4" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd2", 00:13:01.276 "bdev_name": "Malloc2p5" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd3", 00:13:01.276 "bdev_name": "Malloc2p6" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd4", 00:13:01.276 "bdev_name": "Malloc2p7" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd5", 00:13:01.276 "bdev_name": "TestPT" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd6", 00:13:01.276 "bdev_name": "raid0" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd7", 00:13:01.276 "bdev_name": "concat0" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd8", 00:13:01.276 "bdev_name": "raid1" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd9", 00:13:01.276 "bdev_name": "AIO0" 00:13:01.276 } 00:13:01.276 ]' 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd0", 00:13:01.276 "bdev_name": "Malloc0" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd1", 00:13:01.276 
"bdev_name": "Malloc1p0" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd10", 00:13:01.276 "bdev_name": "Malloc1p1" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd11", 00:13:01.276 "bdev_name": "Malloc2p0" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd12", 00:13:01.276 "bdev_name": "Malloc2p1" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd13", 00:13:01.276 "bdev_name": "Malloc2p2" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd14", 00:13:01.276 "bdev_name": "Malloc2p3" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd15", 00:13:01.276 "bdev_name": "Malloc2p4" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd2", 00:13:01.276 "bdev_name": "Malloc2p5" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd3", 00:13:01.276 "bdev_name": "Malloc2p6" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd4", 00:13:01.276 "bdev_name": "Malloc2p7" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd5", 00:13:01.276 "bdev_name": "TestPT" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd6", 00:13:01.276 "bdev_name": "raid0" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd7", 00:13:01.276 "bdev_name": "concat0" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd8", 00:13:01.276 "bdev_name": "raid1" 00:13:01.276 }, 00:13:01.276 { 00:13:01.276 "nbd_device": "/dev/nbd9", 00:13:01.276 "bdev_name": "AIO0" 00:13:01.276 } 00:13:01.276 ]' 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:01.276 /dev/nbd1 00:13:01.276 /dev/nbd10 00:13:01.276 /dev/nbd11 00:13:01.276 /dev/nbd12 00:13:01.276 /dev/nbd13 00:13:01.276 /dev/nbd14 00:13:01.276 /dev/nbd15 00:13:01.276 /dev/nbd2 00:13:01.276 /dev/nbd3 00:13:01.276 /dev/nbd4 00:13:01.276 /dev/nbd5 00:13:01.276 /dev/nbd6 00:13:01.276 /dev/nbd7 00:13:01.276 /dev/nbd8 00:13:01.276 /dev/nbd9' 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:01.276 /dev/nbd1 00:13:01.276 /dev/nbd10 00:13:01.276 /dev/nbd11 00:13:01.276 /dev/nbd12 00:13:01.276 /dev/nbd13 00:13:01.276 /dev/nbd14 00:13:01.276 /dev/nbd15 00:13:01.276 /dev/nbd2 00:13:01.276 /dev/nbd3 00:13:01.276 /dev/nbd4 00:13:01.276 /dev/nbd5 00:13:01.276 /dev/nbd6 00:13:01.276 /dev/nbd7 00:13:01.276 /dev/nbd8 00:13:01.276 /dev/nbd9' 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' 
'/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:01.276 256+0 records in 00:13:01.276 256+0 records out 00:13:01.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102532 s, 102 MB/s 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.276 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:01.535 256+0 records in 00:13:01.535 256+0 records out 00:13:01.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157491 s, 6.7 MB/s 00:13:01.535 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.535 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:01.535 256+0 records in 00:13:01.535 256+0 records out 00:13:01.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157942 s, 6.6 MB/s 00:13:01.535 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.535 15:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:01.794 256+0 records in 00:13:01.794 256+0 records out 00:13:01.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167408 s, 6.3 MB/s 00:13:01.794 15:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.794 15:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:02.052 256+0 records in 00:13:02.052 256+0 records out 00:13:02.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164905 s, 6.4 MB/s 00:13:02.052 15:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.052 15:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:02.052 256+0 records in 00:13:02.052 256+0 records out 00:13:02.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163944 s, 6.4 MB/s 00:13:02.052 15:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.052 15:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:02.365 256+0 records in 00:13:02.365 256+0 records out 00:13:02.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171687 s, 6.1 MB/s 00:13:02.366 15:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.366 15:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 
count=256 oflag=direct 00:13:02.366 256+0 records in 00:13:02.366 256+0 records out 00:13:02.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167034 s, 6.3 MB/s 00:13:02.366 15:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.366 15:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:13:02.623 256+0 records in 00:13:02.623 256+0 records out 00:13:02.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161721 s, 6.5 MB/s 00:13:02.623 15:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.623 15:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:13:02.881 256+0 records in 00:13:02.881 256+0 records out 00:13:02.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162105 s, 6.5 MB/s 00:13:02.881 15:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.881 15:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:13:02.881 256+0 records in 00:13:02.881 256+0 records out 00:13:02.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159971 s, 6.6 MB/s 00:13:02.881 15:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:02.881 15:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:13:03.140 256+0 records in 00:13:03.140 256+0 records out 00:13:03.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15907 s, 6.6 MB/s 00:13:03.140 15:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:03.140 15:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:13:03.398 256+0 records in 00:13:03.398 256+0 records out 00:13:03.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167674 s, 6.3 MB/s 00:13:03.398 15:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:03.398 15:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:13:03.398 256+0 records in 00:13:03.398 256+0 records out 00:13:03.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168594 s, 6.2 MB/s 00:13:03.398 15:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:03.398 15:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:13:03.656 256+0 records in 00:13:03.656 256+0 records out 00:13:03.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167067 s, 6.3 MB/s 00:13:03.656 15:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:03.656 15:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:13:03.914 256+0 records in 00:13:03.914 256+0 records out 00:13:03.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164925 s, 6.4 MB/s 00:13:03.914 15:06:59 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:03.914 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:13:04.173 256+0 records in 00:13:04.173 256+0 records out 00:13:04.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.248589 s, 4.2 MB/s 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:13:04.173 15:06:59 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.173 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:04.431 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:04.431 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:04.431 15:06:59 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:04.431 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.431 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.431 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:04.431 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.431 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.431 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.431 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:04.688 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:04.688 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:04.688 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:04.688 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.688 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.688 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:04.688 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.688 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.688 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.688 15:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:04.946 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:04.946 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:04.946 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:04.946 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.946 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.946 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:04.946 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.946 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.946 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.946 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:05.203 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:05.203 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:05.203 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:05.203 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.203 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.203 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:05.203 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.203 
15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.203 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.203 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:05.461 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:05.461 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:05.461 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:05.461 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.461 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.461 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:05.461 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.461 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.461 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.461 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:05.719 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:05.719 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:05.719 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:05.719 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.719 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.719 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:05.719 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.719 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.719 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.719 15:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:05.719 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:05.719 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:05.719 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:05.719 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.719 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.719 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:05.719 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.719 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.719 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.719 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd15 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:06.284 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.285 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.285 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.285 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:06.542 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:06.542 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:06.542 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:06.542 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.542 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.542 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:06.542 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.542 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.542 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.542 15:07:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:06.800 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:06.800 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:06.800 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:06.800 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.800 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.800 15:07:02 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:06.800 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.800 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.800 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.800 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:07.058 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:07.058 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:07.058 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:07.058 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.058 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.058 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:07.058 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.058 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.058 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.058 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:07.314 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:07.314 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:07.314 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:07.314 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.314 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.314 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:07.314 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.314 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.314 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.314 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:07.571 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:07.571 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:07.571 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:07.571 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.571 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.571 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:07.571 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.571 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.571 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.571 15:07:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:07.828 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:07.828 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:07.828 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:07.828 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.828 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.828 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:07.828 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.828 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.828 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.828 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 
/dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:08.087 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:08.345 malloc_lvol_verify 00:13:08.345 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:08.602 e1ec0f2c-21d1-4f69-97ed-c7be51d7857f 00:13:08.602 15:07:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:08.861 04ca7293-3c12-4a8d-a4ac-91a89155d77b 00:13:08.861 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:08.861 /dev/nbd0 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:09.119 Discarding device blocks: 0/1024 done 00:13:09.119 Creating filesystem with 1024 4k blocks and 1024 inodes 00:13:09.119 00:13:09.119 Allocating group tables: 0/1 done 00:13:09.119 Writing inode tables: 0/1 done 00:13:09.119 mke2fs 1.47.0 (5-Feb-2023) 00:13:09.119 00:13:09.119 Filesystem too small for a journal 00:13:09.119 Writing superblocks and filesystem accounting information: 0/1 done 00:13:09.119 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.119 15:07:04 blockdev_general.bdev_nbd 
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 84559 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 84559 ']' 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 84559 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84559 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:09.119 killing process with pid 84559 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84559' 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # kill 84559 00:13:09.119 15:07:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # wait 84559 00:13:09.689 15:07:04 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:09.689 00:13:09.689 real 0m21.729s 00:13:09.689 user 0m28.204s 00:13:09.689 sys 0m10.708s 00:13:09.689 15:07:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:09.689 15:07:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:09.689 ************************************ 00:13:09.689 END TEST bdev_nbd 00:13:09.689 ************************************ 00:13:09.689 15:07:04 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:09.689 15:07:04 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:09.689 15:07:04 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:13:09.689 15:07:04 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:13:09.689 15:07:04 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:09.689 15:07:04 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:09.689 15:07:04 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.689 15:07:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:09.689 ************************************ 00:13:09.689 START TEST bdev_fio 00:13:09.689 ************************************ 00:13:09.689 15:07:04 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:13:09.689 15:07:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:09.689 15:07:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:09.689 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:09.689 15:07:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:09.689 15:07:04 
blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:09.689 15:07:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in 
"${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 
00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.689 15:07:05 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:09.689 ************************************ 00:13:09.689 START TEST bdev_fio_rw_verify 00:13:09.689 ************************************ 00:13:09.689 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:09.689 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:09.689 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:09.689 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:09.689 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:09.689 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.689 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:09.689 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:09.689 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:09.689 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.689 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:09.948 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:09.948 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:13:09.948 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:13:09.948 15:07:05 
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:09.948 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:09.948 15:07:05 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:09.948 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:09.948 fio-3.35 00:13:09.948 Starting 16 threads 00:13:22.154 00:13:22.154 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=85645: Tue Jul 23 15:07:16 2024 00:13:22.154 read: IOPS=82.7k, BW=323MiB/s (339MB/s)(3230MiB/10003msec) 00:13:22.154 slat (usec): min=2, max=14050, avg=35.91, stdev=238.93 00:13:22.154 clat (usec): min=10, max=14309, avg=282.69, stdev=680.50 00:13:22.154 lat (usec): min=24, max=14334, avg=318.60, stdev=719.29 00:13:22.154 clat percentiles (usec): 00:13:22.154 | 50.000th=[ 172], 99.000th=[ 4228], 99.900th=[ 7242], 99.990th=[10159], 00:13:22.154 | 99.999th=[12256] 00:13:22.154 write: IOPS=130k, BW=508MiB/s (532MB/s)(5018MiB/9882msec); 0 zone resets 00:13:22.154 slat (usec): min=6, max=22791, avg=58.18, stdev=306.78 00:13:22.154 clat (usec): 
min=10, max=23100, avg=359.02, stdev=770.69 00:13:22.154 lat (usec): min=39, max=23130, avg=417.20, stdev=826.31 00:13:22.154 clat percentiles (usec): 00:13:22.154 | 50.000th=[ 217], 99.000th=[ 4293], 99.900th=[ 7308], 99.990th=[11207], 00:13:22.154 | 99.999th=[16188] 00:13:22.154 bw ( KiB/s): min=376191, max=788827, per=98.88%, avg=514156.68, stdev=7671.21, samples=304 00:13:22.154 iops : min=94047, max=197206, avg=128538.63, stdev=1917.79, samples=304 00:13:22.154 lat (usec) : 20=0.01%, 50=0.31%, 100=12.13%, 250=57.37%, 500=25.87% 00:13:22.154 lat (usec) : 750=1.03%, 1000=0.09% 00:13:22.154 lat (msec) : 2=0.10%, 4=1.04%, 10=2.03%, 20=0.02%, 50=0.01% 00:13:22.154 cpu : usr=57.45%, sys=2.91%, ctx=275177, majf=0, minf=116602 00:13:22.154 IO depths : 1=11.1%, 2=24.0%, 4=51.9%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:22.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.154 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.154 issued rwts: total=826790,1284671,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.154 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:22.154 00:13:22.154 Run status group 0 (all jobs): 00:13:22.154 READ: bw=323MiB/s (339MB/s), 323MiB/s-323MiB/s (339MB/s-339MB/s), io=3230MiB (3387MB), run=10003-10003msec 00:13:22.154 WRITE: bw=508MiB/s (532MB/s), 508MiB/s-508MiB/s (532MB/s-532MB/s), io=5018MiB (5262MB), run=9882-9882msec 00:13:22.154 ----------------------------------------------------- 00:13:22.154 Suppressions used: 00:13:22.154 count bytes template 00:13:22.154 16 140 /usr/src/fio/parse.c 00:13:22.154 10862 1042752 /usr/src/fio/iolog.c 00:13:22.154 1 443 fio_memalign 00:13:22.154 1 16 spdk_fio_io_u_init 00:13:22.154 1 904 libcrypto.so 00:13:22.154 ----------------------------------------------------- 00:13:22.154 00:13:22.154 00:13:22.154 real 0m11.807s 00:13:22.154 user 1m33.999s 00:13:22.154 sys 0m6.055s 00:13:22.154 ************************************ 00:13:22.154 END TEST bdev_fio_rw_verify 00:13:22.154 ************************************ 00:13:22.154 15:07:16 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:22.154 15:07:16 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:22.154 15:07:16 blockdev_general.bdev_fio 
-- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:22.154 15:07:16 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:22.156 15:07:16 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "6ed08f83-2e58-423b-a676-158b40f961f8"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6ed08f83-2e58-423b-a676-158b40f961f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "215a85f1-74fc-5ad9-a777-d05e0fdd2f8b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "215a85f1-74fc-5ad9-a777-d05e0fdd2f8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "97d9bb37-1f40-5356-a537-b4a01220a3ed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "97d9bb37-1f40-5356-a537-b4a01220a3ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "316e264f-b67a-5baa-9965-265906faddf6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "316e264f-b67a-5baa-9965-265906faddf6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "d3133654-23c3-5255-bf7b-0995d3031b0f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d3133654-23c3-5255-bf7b-0995d3031b0f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "4ad9760a-6d3f-57e4-80b8-2edcd71783f1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4ad9760a-6d3f-57e4-80b8-2edcd71783f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a7faed77-fb3a-541e-841f-a35ca1ac388f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a7faed77-fb3a-541e-841f-a35ca1ac388f",' 
' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "0fcd7496-ac16-57d8-96c1-6e9697dc2882"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0fcd7496-ac16-57d8-96c1-6e9697dc2882",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8950dae5-cf9f-5268-8947-c59b0cfc1602"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8950dae5-cf9f-5268-8947-c59b0cfc1602",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "3eeec138-67da-5ee0-8c4e-35fe91dd29fd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3eeec138-67da-5ee0-8c4e-35fe91dd29fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' 
"nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "9ae30072-323c-51d1-8ba5-0ae4dedb7b26"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9ae30072-323c-51d1-8ba5-0ae4dedb7b26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "5c754d55-26fa-5e37-abcb-304690d3090d"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5c754d55-26fa-5e37-abcb-304690d3090d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "43c1d862-664d-478d-87bc-76b4efdda2d1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "43c1d862-664d-478d-87bc-76b4efdda2d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "43c1d862-664d-478d-87bc-76b4efdda2d1",' ' "strip_size_kb": 
64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a92df594-793b-4cfa-9273-d36441be2288",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "666e2b5c-aac5-42ce-9325-04ccfe8b4f29",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "87cd38ce-d4fa-4920-8fc2-4d38e647fcb0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "87cd38ce-d4fa-4920-8fc2-4d38e647fcb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "87cd38ce-d4fa-4920-8fc2-4d38e647fcb0",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "f0c167e5-fa78-474a-8a36-343d929d9bdc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ac2d10b2-5f05-492c-9b54-705b025ed513",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b450309b-2582-474d-8132-0ef02c8fa549"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b450309b-2582-474d-8132-0ef02c8fa549",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' 
' "raid": {' ' "uuid": "b450309b-2582-474d-8132-0ef02c8fa549",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "d7de5adc-3867-4fd1-9665-34d465fcc610",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b2a403d6-81b2-486c-a6dd-d478093bba12",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "a46a4e7c-37b0-4dca-bfc8-85c0eb534458"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "a46a4e7c-37b0-4dca-bfc8-85c0eb534458",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:22.156 15:07:16 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:13:22.156 Malloc1p0 00:13:22.156 Malloc1p1 00:13:22.156 Malloc2p0 00:13:22.156 Malloc2p1 00:13:22.156 Malloc2p2 00:13:22.156 Malloc2p3 00:13:22.156 Malloc2p4 00:13:22.156 Malloc2p5 00:13:22.156 Malloc2p6 00:13:22.156 Malloc2p7 00:13:22.156 TestPT 00:13:22.156 raid0 00:13:22.156 concat0 ]] 00:13:22.157 15:07:16 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "6ed08f83-2e58-423b-a676-158b40f961f8"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6ed08f83-2e58-423b-a676-158b40f961f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "215a85f1-74fc-5ad9-a777-d05e0fdd2f8b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "215a85f1-74fc-5ad9-a777-d05e0fdd2f8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "97d9bb37-1f40-5356-a537-b4a01220a3ed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "97d9bb37-1f40-5356-a537-b4a01220a3ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "316e264f-b67a-5baa-9965-265906faddf6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "316e264f-b67a-5baa-9965-265906faddf6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "d3133654-23c3-5255-bf7b-0995d3031b0f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d3133654-23c3-5255-bf7b-0995d3031b0f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' 
' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "4ad9760a-6d3f-57e4-80b8-2edcd71783f1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4ad9760a-6d3f-57e4-80b8-2edcd71783f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a7faed77-fb3a-541e-841f-a35ca1ac388f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a7faed77-fb3a-541e-841f-a35ca1ac388f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "0fcd7496-ac16-57d8-96c1-6e9697dc2882"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0fcd7496-ac16-57d8-96c1-6e9697dc2882",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8950dae5-cf9f-5268-8947-c59b0cfc1602"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8950dae5-cf9f-5268-8947-c59b0cfc1602",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": 
true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "3eeec138-67da-5ee0-8c4e-35fe91dd29fd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3eeec138-67da-5ee0-8c4e-35fe91dd29fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "9ae30072-323c-51d1-8ba5-0ae4dedb7b26"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9ae30072-323c-51d1-8ba5-0ae4dedb7b26",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "5c754d55-26fa-5e37-abcb-304690d3090d"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5c754d55-26fa-5e37-abcb-304690d3090d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "43c1d862-664d-478d-87bc-76b4efdda2d1"' ' ],' ' 
"product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "43c1d862-664d-478d-87bc-76b4efdda2d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "43c1d862-664d-478d-87bc-76b4efdda2d1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a92df594-793b-4cfa-9273-d36441be2288",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "666e2b5c-aac5-42ce-9325-04ccfe8b4f29",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "87cd38ce-d4fa-4920-8fc2-4d38e647fcb0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "87cd38ce-d4fa-4920-8fc2-4d38e647fcb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "87cd38ce-d4fa-4920-8fc2-4d38e647fcb0",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "f0c167e5-fa78-474a-8a36-343d929d9bdc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ac2d10b2-5f05-492c-9b54-705b025ed513",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' 
"name": "raid1",' ' "aliases": [' ' "b450309b-2582-474d-8132-0ef02c8fa549"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b450309b-2582-474d-8132-0ef02c8fa549",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b450309b-2582-474d-8132-0ef02c8fa549",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "d7de5adc-3867-4fd1-9665-34d465fcc610",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b2a403d6-81b2-486c-a6dd-d478093bba12",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "a46a4e7c-37b0-4dca-bfc8-85c0eb534458"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "a46a4e7c-37b0-4dca-bfc8-85c0eb534458",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:22.157 15:07:16 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | 
.name') 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.157 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:13:22.157 15:07:17 
blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:13:22.158 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.158 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:13:22.158 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:13:22.158 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.158 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:13:22.158 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:13:22.158 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.158 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:13:22.158 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:13:22.158 15:07:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:22.158 15:07:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:13:22.158 15:07:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.158 15:07:17 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:22.158 ************************************ 00:13:22.158 START TEST bdev_fio_trim 00:13:22.158 ************************************ 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- 
# local asan_lib= 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:22.158 15:07:17 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:22.158 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.158 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.158 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.158 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.158 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.158 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.158 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.158 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.158 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.158 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.158 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.158 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.158 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.158 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.158 fio-3.35 00:13:22.158 Starting 14 threads 00:13:34.361 00:13:34.361 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=85822: Tue Jul 23 15:07:27 2024 00:13:34.361 write: 
IOPS=165k, BW=645MiB/s (677MB/s)(6455MiB/10003msec); 0 zone resets 00:13:34.361 slat (usec): min=2, max=10050, avg=29.90, stdev=187.19 00:13:34.361 clat (usec): min=26, max=12402, avg=215.89, stdev=520.35 00:13:34.361 lat (usec): min=37, max=12406, avg=245.79, stdev=551.97 00:13:34.361 clat percentiles (usec): 00:13:34.361 | 50.000th=[ 145], 99.000th=[ 4146], 99.900th=[ 6128], 99.990th=[ 7373], 00:13:34.361 | 99.999th=[10290] 00:13:34.361 bw ( KiB/s): min=433838, max=862496, per=100.00%, avg=660933.63, stdev=9933.19, samples=266 00:13:34.361 iops : min=108458, max=215624, avg=165233.16, stdev=2483.32, samples=266 00:13:34.361 trim: IOPS=165k, BW=645MiB/s (677MB/s)(6455MiB/10003msec); 0 zone resets 00:13:34.361 slat (usec): min=4, max=13045, avg=21.38, stdev=162.35 00:13:34.361 clat (usec): min=4, max=12296, avg=231.21, stdev=523.98 00:13:34.361 lat (usec): min=14, max=13316, avg=252.60, stdev=548.09 00:13:34.361 clat percentiles (usec): 00:13:34.361 | 50.000th=[ 163], 99.000th=[ 4146], 99.900th=[ 6128], 99.990th=[ 7242], 00:13:34.361 | 99.999th=[10028] 00:13:34.361 bw ( KiB/s): min=433902, max=862560, per=100.00%, avg=660934.05, stdev=9933.17, samples=266 00:13:34.361 iops : min=108474, max=215640, avg=165233.26, stdev=2483.31, samples=266 00:13:34.361 lat (usec) : 10=0.06%, 20=0.19%, 50=0.84%, 100=14.27%, 250=78.64% 00:13:34.361 lat (usec) : 500=4.10%, 750=0.19%, 1000=0.02% 00:13:34.361 lat (msec) : 2=0.03%, 4=0.41%, 10=1.24%, 20=0.01% 00:13:34.361 cpu : usr=69.36%, sys=0.23%, ctx=150985, majf=0, minf=24000 00:13:34.361 IO depths : 1=12.4%, 2=24.7%, 4=50.0%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.361 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.361 issued rwts: total=0,1652503,1652508,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.361 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:34.361 00:13:34.361 Run status group 0 (all jobs): 00:13:34.361 WRITE: bw=645MiB/s (677MB/s), 645MiB/s-645MiB/s (677MB/s-677MB/s), io=6455MiB (6769MB), run=10003-10003msec 00:13:34.361 TRIM: bw=645MiB/s (677MB/s), 645MiB/s-645MiB/s (677MB/s-677MB/s), io=6455MiB (6769MB), run=10003-10003msec 00:13:34.361 ----------------------------------------------------- 00:13:34.361 Suppressions used: 00:13:34.361 count bytes template 00:13:34.361 14 129 /usr/src/fio/parse.c 00:13:34.361 1 904 libcrypto.so 00:13:34.361 ----------------------------------------------------- 00:13:34.361 00:13:34.361 00:13:34.361 real 0m11.496s 00:13:34.361 user 1m39.593s 00:13:34.361 sys 0m0.970s 00:13:34.361 15:07:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:34.361 15:07:28 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:13:34.361 ************************************ 00:13:34.361 END TEST bdev_fio_trim 00:13:34.361 ************************************ 00:13:34.361 15:07:28 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:13:34.361 15:07:28 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:13:34.361 15:07:28 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:34.361 15:07:28 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:13:34.361 /home/vagrant/spdk_repo/spdk 00:13:34.361 15:07:28 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:13:34.361 00:13:34.361 real 
0m23.580s 00:13:34.361 user 3m13.693s 00:13:34.361 sys 0m7.186s 00:13:34.361 15:07:28 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:34.361 ************************************ 00:13:34.361 END TEST bdev_fio 00:13:34.361 15:07:28 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:34.361 ************************************ 00:13:34.361 15:07:28 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:34.361 15:07:28 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:34.361 15:07:28 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:34.361 15:07:28 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:13:34.361 15:07:28 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:34.361 15:07:28 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:34.361 ************************************ 00:13:34.361 START TEST bdev_verify 00:13:34.361 ************************************ 00:13:34.361 15:07:28 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:34.361 [2024-07-23 15:07:28.698103] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:13:34.361 [2024-07-23 15:07:28.698263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85989 ] 00:13:34.361 [2024-07-23 15:07:28.835328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:34.361 [2024-07-23 15:07:28.881683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.361 [2024-07-23 15:07:28.881849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.361 [2024-07-23 15:07:29.008856] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:34.361 [2024-07-23 15:07:29.008950] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:34.361 [2024-07-23 15:07:29.016808] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:34.361 [2024-07-23 15:07:29.016853] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:34.361 [2024-07-23 15:07:29.024847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:34.361 [2024-07-23 15:07:29.024891] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:34.361 [2024-07-23 15:07:29.024905] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:34.361 [2024-07-23 15:07:29.108865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:34.361 [2024-07-23 15:07:29.108941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.361 [2024-07-23 15:07:29.108970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:13:34.361 [2024-07-23 15:07:29.108984] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.361 [2024-07-23 15:07:29.111562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.361 [2024-07-23 15:07:29.111602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:34.361 Running I/O for 5 seconds... 00:13:39.633 00:13:39.633 Latency(us) 00:13:39.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.633 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.633 Verification LBA range: start 0x0 length 0x1000 00:13:39.634 Malloc0 : 5.17 1412.00 5.52 0.00 0.00 90501.39 573.44 305585.01 00:13:39.634 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x1000 length 0x1000 00:13:39.634 Malloc0 : 5.06 1417.03 5.54 0.00 0.00 90181.07 534.43 305585.01 00:13:39.634 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x800 00:13:39.634 Malloc1p0 : 5.21 736.35 2.88 0.00 0.00 173126.16 3011.54 165774.87 00:13:39.634 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x800 length 0x800 00:13:39.634 Malloc1p0 : 5.22 735.83 2.87 0.00 0.00 173229.10 3011.54 167772.16 00:13:39.634 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x800 00:13:39.634 Malloc1p1 : 5.22 736.08 2.88 0.00 0.00 172814.99 2980.33 162778.94 00:13:39.634 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x800 length 0x800 00:13:39.634 Malloc1p1 : 5.22 735.47 2.87 0.00 0.00 172931.96 3027.14 163777.58 00:13:39.634 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x200 00:13:39.634 Malloc2p0 : 5.22 735.75 2.87 0.00 0.00 172534.11 2871.10 161780.30 00:13:39.634 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x200 length 0x200 00:13:39.634 Malloc2p0 : 5.22 735.14 2.87 0.00 0.00 172632.04 2886.70 162778.94 00:13:39.634 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x200 00:13:39.634 Malloc2p1 : 5.22 735.41 2.87 0.00 0.00 172281.68 2605.84 160781.65 00:13:39.634 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x200 length 0x200 00:13:39.634 Malloc2p1 : 5.23 734.82 2.87 0.00 0.00 172374.37 2621.44 161780.30 00:13:39.634 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x200 00:13:39.634 Malloc2p2 : 5.22 735.07 2.87 0.00 0.00 172044.33 2574.63 157785.72 00:13:39.634 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x200 length 0x200 00:13:39.634 Malloc2p2 : 5.23 734.51 2.87 0.00 0.00 172123.89 2590.23 158784.37 00:13:39.634 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x200 00:13:39.634 Malloc2p3 : 5.23 734.74 2.87 0.00 0.00 171782.50 2590.23 155788.43 00:13:39.634 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 
Verification LBA range: start 0x200 length 0x200 00:13:39.634 Malloc2p3 : 5.23 734.21 2.87 0.00 0.00 171844.49 2574.63 155788.43 00:13:39.634 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x200 00:13:39.634 Malloc2p4 : 5.23 734.43 2.87 0.00 0.00 171508.65 2668.25 152792.50 00:13:39.634 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x200 length 0x200 00:13:39.634 Malloc2p4 : 5.23 733.92 2.87 0.00 0.00 171574.58 2637.04 153791.15 00:13:39.634 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x200 00:13:39.634 Malloc2p5 : 5.23 734.13 2.87 0.00 0.00 171203.47 2746.27 149796.57 00:13:39.634 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x200 length 0x200 00:13:39.634 Malloc2p5 : 5.23 733.63 2.87 0.00 0.00 171268.82 2715.06 151793.86 00:13:39.634 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x200 00:13:39.634 Malloc2p6 : 5.23 733.84 2.87 0.00 0.00 170901.33 2699.46 147799.28 00:13:39.634 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x200 length 0x200 00:13:39.634 Malloc2p6 : 5.24 733.36 2.86 0.00 0.00 170982.12 2715.06 148797.93 00:13:39.634 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x200 00:13:39.634 Malloc2p7 : 5.23 733.55 2.87 0.00 0.00 170609.01 2637.04 143804.71 00:13:39.634 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x200 length 0x200 00:13:39.634 Malloc2p7 : 5.24 732.91 2.86 0.00 0.00 170701.00 2637.04 144803.35 00:13:39.634 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x1000 00:13:39.634 TestPT : 5.24 713.33 2.79 0.00 0.00 174011.97 13481.69 143804.71 00:13:39.634 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x1000 length 0x1000 00:13:39.634 TestPT : 5.24 708.82 2.77 0.00 0.00 174860.54 15666.22 144803.35 00:13:39.634 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x2000 00:13:39.634 raid0 : 5.24 732.93 2.86 0.00 0.00 169806.32 3042.74 127826.41 00:13:39.634 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x2000 length 0x2000 00:13:39.634 raid0 : 5.24 732.43 2.86 0.00 0.00 169919.48 3058.35 128825.05 00:13:39.634 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x2000 00:13:39.634 concat0 : 5.24 732.58 2.86 0.00 0.00 169527.31 3105.16 123332.51 00:13:39.634 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x2000 length 0x2000 00:13:39.634 concat0 : 5.24 732.19 2.86 0.00 0.00 169593.49 3058.35 123831.83 00:13:39.634 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x1000 00:13:39.634 raid1 : 5.24 732.32 2.86 0.00 0.00 169179.91 3713.71 126328.44 
00:13:39.634 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x1000 length 0x1000 00:13:39.634 raid1 : 5.25 731.96 2.86 0.00 0.00 169247.18 3682.50 126827.76 00:13:39.634 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x0 length 0x4e2 00:13:39.634 AIO0 : 5.25 732.08 2.86 0.00 0.00 168823.62 706.07 132819.63 00:13:39.634 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.634 Verification LBA range: start 0x4e2 length 0x4e2 00:13:39.634 AIO0 : 5.25 731.73 2.86 0.00 0.00 168875.77 780.19 133818.27 00:13:39.634 =================================================================================================================== 00:13:39.634 Total : 24802.52 96.88 0.00 0.00 162345.86 534.43 305585.01 00:13:39.634 00:13:39.634 real 0m6.363s 00:13:39.634 user 0m11.709s 00:13:39.634 sys 0m0.452s 00:13:39.634 15:07:34 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:39.634 15:07:34 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:39.634 ************************************ 00:13:39.634 END TEST bdev_verify 00:13:39.634 ************************************ 00:13:39.634 15:07:35 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:39.634 15:07:35 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:39.634 15:07:35 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:13:39.634 15:07:35 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.634 15:07:35 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:39.893 ************************************ 00:13:39.893 START TEST bdev_verify_big_io 00:13:39.893 ************************************ 00:13:39.893 15:07:35 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:39.893 [2024-07-23 15:07:35.119038] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
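Editor's note: the bdev_verify step that just finished above boils down to a single bdevperf invocation against the shared bdev.json layout. As a minimal sketch, assuming the SPDK repository is checked out at the path used throughout this log and that the example binary has been built, an equivalent manual invocation looks like the following; the paths and flags are copied from the traced command, and only the SPDK variable and comments are added for readability.

    # Hedged sketch: rerunning the traced bdev_verify step by hand. Paths and
    # flags are copied from the command traced above; SPDK= is an added
    # convenience variable, not something the harness defines.
    SPDK=/home/vagrant/spdk_repo/spdk
    args=(
        --json "$SPDK/test/bdev/bdev.json"   # bdev layout shared by the fio/bdevperf tests
        -q 128                               # queue depth requested per job
        -o 4096                              # IO size in bytes (the big-IO variant below uses 65536)
        -w verify                            # write/read-back verification workload
        -t 5                                 # run for 5 seconds
        -C -m 0x3                            # passed through unchanged; 0x3 selects cores 0 and 1
    )
    "$SPDK/build/examples/bdevperf" "${args[@]}"

The fio steps earlier in the log point --spdk_json_conf at the same bdev.json, so the bdev layout is identical across all of these runs.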
00:13:39.893 [2024-07-23 15:07:35.119236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86071 ] 00:13:39.893 [2024-07-23 15:07:35.261205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:39.893 [2024-07-23 15:07:35.310923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.893 [2024-07-23 15:07:35.311035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.152 [2024-07-23 15:07:35.437908] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:40.152 [2024-07-23 15:07:35.437991] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:40.152 [2024-07-23 15:07:35.445864] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:40.152 [2024-07-23 15:07:35.445910] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:40.152 [2024-07-23 15:07:35.453911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:40.152 [2024-07-23 15:07:35.453962] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:40.152 [2024-07-23 15:07:35.453978] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:40.152 [2024-07-23 15:07:35.537072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:40.152 [2024-07-23 15:07:35.537156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.152 [2024-07-23 15:07:35.537176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:13:40.152 [2024-07-23 15:07:35.537188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.152 [2024-07-23 15:07:35.540061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.152 [2024-07-23 15:07:35.540110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:40.412 [2024-07-23 15:07:35.692105] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.693042] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.694299] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.695657] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.696511] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.697830] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.698649] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.700034] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.700903] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.702201] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.703064] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.704413] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.705270] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.706583] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.707949] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.708830] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:40.412 [2024-07-23 15:07:35.732056] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:40.412 [2024-07-23 15:07:35.734023] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:40.412 Running I/O for 5 seconds... 00:13:46.979 00:13:46.979 Latency(us) 00:13:46.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.979 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x0 length 0x100 00:13:46.979 Malloc0 : 5.53 277.92 17.37 0.00 0.00 453373.23 659.26 1326198.98 00:13:46.979 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x100 length 0x100 00:13:46.979 Malloc0 : 5.61 251.06 15.69 0.00 0.00 502500.32 667.06 1541906.04 00:13:46.979 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x0 length 0x80 00:13:46.979 Malloc1p0 : 6.17 49.25 3.08 0.00 0.00 2397616.53 1256.11 3882727.13 00:13:46.979 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x80 length 0x80 00:13:46.979 Malloc1p0 : 5.85 134.77 8.42 0.00 0.00 893685.79 2231.34 1805548.01 00:13:46.979 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x0 length 0x80 00:13:46.979 Malloc1p1 : 6.17 49.24 3.08 0.00 0.00 2340139.15 1248.30 3754900.72 00:13:46.979 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x80 length 0x80 00:13:46.979 Malloc1p1 : 6.11 49.78 3.11 0.00 0.00 2353145.93 1271.71 3802835.63 00:13:46.979 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x0 length 0x20 00:13:46.979 Malloc2p0 : 5.82 38.47 2.40 0.00 0.00 760774.30 581.24 1438047.09 00:13:46.979 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x20 length 0x20 00:13:46.979 Malloc2p0 : 5.79 35.91 2.24 0.00 0.00 812276.66 577.34 1334188.13 00:13:46.979 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x0 length 0x20 00:13:46.979 Malloc2p1 : 5.82 38.46 2.40 0.00 0.00 756047.63 596.85 1414079.63 00:13:46.979 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x20 length 0x20 00:13:46.979 Malloc2p1 : 5.85 38.30 2.39 0.00 0.00 766653.22 569.54 1318209.83 00:13:46.979 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x0 length 0x20 00:13:46.979 Malloc2p2 : 5.83 38.45 2.40 0.00 0.00 750895.33 616.35 1398101.33 00:13:46.979 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x20 length 0x20 00:13:46.979 Malloc2p2 : 5.85 38.29 2.39 0.00 0.00 762011.58 585.14 1302231.53 00:13:46.979 Job: Malloc2p3 (Core Mask 0x1, workload: verify, 
depth: 32, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x0 length 0x20 00:13:46.979 Malloc2p3 : 5.83 38.45 2.40 0.00 0.00 745963.62 577.34 1374133.88 00:13:46.979 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x20 length 0x20 00:13:46.979 Malloc2p3 : 5.85 38.28 2.39 0.00 0.00 757156.06 573.44 1286253.23 00:13:46.979 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x0 length 0x20 00:13:46.979 Malloc2p4 : 5.83 38.44 2.40 0.00 0.00 741329.16 581.24 1358155.58 00:13:46.979 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.979 Verification LBA range: start 0x20 length 0x20 00:13:46.980 Malloc2p4 : 5.85 38.27 2.39 0.00 0.00 752570.09 585.14 1270274.93 00:13:46.980 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x0 length 0x20 00:13:46.980 Malloc2p5 : 5.83 38.43 2.40 0.00 0.00 736515.33 717.78 1342177.28 00:13:46.980 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x20 length 0x20 00:13:46.980 Malloc2p5 : 5.85 38.27 2.39 0.00 0.00 748235.83 585.14 1254296.62 00:13:46.980 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x0 length 0x20 00:13:46.980 Malloc2p6 : 5.89 40.75 2.55 0.00 0.00 693654.30 620.25 1326198.98 00:13:46.980 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x20 length 0x20 00:13:46.980 Malloc2p6 : 5.86 38.26 2.39 0.00 0.00 743677.50 698.27 1238318.32 00:13:46.980 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x0 length 0x20 00:13:46.980 Malloc2p7 : 5.89 40.74 2.55 0.00 0.00 689072.09 569.54 1302231.53 00:13:46.980 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x20 length 0x20 00:13:46.980 Malloc2p7 : 5.86 38.25 2.39 0.00 0.00 739111.28 631.95 1214350.87 00:13:46.980 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x0 length 0x100 00:13:46.980 TestPT : 6.30 53.31 3.33 0.00 0.00 2019717.39 1271.71 3483269.61 00:13:46.980 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x100 length 0x100 00:13:46.980 TestPT : 6.14 47.19 2.95 0.00 0.00 2322738.75 73899.64 3275551.70 00:13:46.980 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x0 length 0x200 00:13:46.980 raid0 : 6.27 58.73 3.67 0.00 0.00 1811244.64 1357.53 3371421.50 00:13:46.980 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x200 length 0x200 00:13:46.980 raid0 : 6.18 54.35 3.40 0.00 0.00 1973395.25 1396.54 3435334.70 00:13:46.980 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x0 length 0x200 00:13:46.980 concat0 : 6.11 77.40 4.84 0.00 0.00 1364846.49 1373.14 3243595.09 00:13:46.980 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x200 length 0x200 00:13:46.980 concat0 : 6.11 73.30 4.58 0.00 0.00 1453123.06 
1357.53 3307508.30 00:13:46.980 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x0 length 0x100 00:13:46.980 raid1 : 6.28 78.31 4.89 0.00 0.00 1323694.56 1708.62 3115768.69 00:13:46.980 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x100 length 0x100 00:13:46.980 raid1 : 6.21 64.38 4.02 0.00 0.00 1605140.84 1732.02 3195660.19 00:13:46.980 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x0 length 0x4e 00:13:46.980 AIO0 : 6.29 73.18 4.57 0.00 0.00 845511.60 979.14 1909406.96 00:13:46.980 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:46.980 Verification LBA range: start 0x4e length 0x4e 00:13:46.980 AIO0 : 6.26 83.35 5.21 0.00 0.00 750441.88 1373.14 1909406.96 00:13:46.980 =================================================================================================================== 00:13:46.980 Total : 2091.56 130.72 0.00 0.00 1051288.59 569.54 3882727.13 00:13:47.239 00:13:47.239 real 0m7.462s 00:13:47.239 user 0m13.971s 00:13:47.239 sys 0m0.406s 00:13:47.239 15:07:42 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:47.239 ************************************ 00:13:47.239 END TEST bdev_verify_big_io 00:13:47.239 ************************************ 00:13:47.239 15:07:42 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.239 15:07:42 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:47.239 15:07:42 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:47.239 15:07:42 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:47.239 15:07:42 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.239 15:07:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:47.239 ************************************ 00:13:47.239 START TEST bdev_write_zeroes 00:13:47.239 ************************************ 00:13:47.239 15:07:42 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:47.239 [2024-07-23 15:07:42.657492] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:13:47.239 [2024-07-23 15:07:42.657697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86169 ] 00:13:47.498 [2024-07-23 15:07:42.814119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.498 [2024-07-23 15:07:42.870477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.758 [2024-07-23 15:07:43.010061] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:47.758 [2024-07-23 15:07:43.010168] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:47.758 [2024-07-23 15:07:43.018010] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:47.758 [2024-07-23 15:07:43.018078] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:47.758 [2024-07-23 15:07:43.026032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:47.758 [2024-07-23 15:07:43.026093] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:47.758 [2024-07-23 15:07:43.026125] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:47.758 [2024-07-23 15:07:43.113338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:47.758 [2024-07-23 15:07:43.113415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.758 [2024-07-23 15:07:43.113435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:13:47.758 [2024-07-23 15:07:43.113447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.758 [2024-07-23 15:07:43.115965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.758 [2024-07-23 15:07:43.116005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:48.017 Running I/O for 1 seconds... 
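(For reference, the bdevperf invocation traced above is the same pattern used throughout this run: -q sets the queue depth, -o the I/O size in bytes, -w the workload (write_zeroes here; verify and randread in the other suites), and -t the run time in seconds. A minimal stand-alone sketch under the same assumptions as this job, i.e. an SPDK build tree at spdk/ and the bdev.json config shipped with the test:

    ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1

The log that follows is the 1-second write_zeroes result table produced by that command.)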
00:13:48.954 00:13:48.954 Latency(us) 00:13:48.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.954 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.954 Malloc0 : 1.03 6064.22 23.69 0.00 0.00 21092.25 585.14 36450.50 00:13:48.954 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.954 Malloc1p0 : 1.04 6057.54 23.66 0.00 0.00 21083.01 799.70 35701.52 00:13:48.954 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.954 Malloc1p1 : 1.04 6051.10 23.64 0.00 0.00 21075.74 776.29 34952.53 00:13:48.954 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.954 Malloc2p0 : 1.04 6044.74 23.61 0.00 0.00 21055.90 772.39 34203.55 00:13:48.954 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.954 Malloc2p1 : 1.04 6038.44 23.59 0.00 0.00 21040.39 752.88 33454.57 00:13:48.955 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.955 Malloc2p2 : 1.04 6032.06 23.56 0.00 0.00 21017.07 768.49 32705.58 00:13:48.955 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.955 Malloc2p3 : 1.04 6025.64 23.54 0.00 0.00 21001.43 764.59 31831.77 00:13:48.955 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.955 Malloc2p4 : 1.04 6019.33 23.51 0.00 0.00 20981.64 776.29 31082.79 00:13:48.955 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.955 Malloc2p5 : 1.04 6013.05 23.49 0.00 0.00 20970.71 760.69 30333.81 00:13:48.955 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.955 Malloc2p6 : 1.04 6006.75 23.46 0.00 0.00 20956.42 768.49 29584.82 00:13:48.955 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.955 Malloc2p7 : 1.05 6000.61 23.44 0.00 0.00 20937.38 768.49 28835.84 00:13:48.955 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.955 TestPT : 1.05 5994.28 23.42 0.00 0.00 20926.51 795.79 27962.03 00:13:48.955 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.955 raid0 : 1.05 5986.95 23.39 0.00 0.00 20901.34 1334.13 26713.72 00:13:48.955 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.955 concat0 : 1.05 5979.91 23.36 0.00 0.00 20861.10 1326.32 25340.59 00:13:48.955 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.955 raid1 : 1.05 5970.92 23.32 0.00 0.00 20810.44 2153.33 23093.64 00:13:48.955 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:48.955 AIO0 : 1.05 6066.57 23.70 0.00 0.00 20399.99 483.72 23218.47 00:13:48.955 =================================================================================================================== 00:13:48.955 Total : 96352.11 376.38 0.00 0.00 20943.82 483.72 36450.50 00:13:49.561 00:13:49.561 real 0m2.165s 00:13:49.561 user 0m1.644s 00:13:49.561 sys 0m0.370s 00:13:49.561 15:07:44 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.561 ************************************ 00:13:49.561 END TEST bdev_write_zeroes 00:13:49.561 15:07:44 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:49.561 ************************************ 00:13:49.561 15:07:44 blockdev_general 
-- common/autotest_common.sh@1142 -- # return 0 00:13:49.561 15:07:44 blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:49.561 15:07:44 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:49.561 15:07:44 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.561 15:07:44 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:49.561 ************************************ 00:13:49.561 START TEST bdev_json_nonenclosed 00:13:49.561 ************************************ 00:13:49.561 15:07:44 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:49.561 [2024-07-23 15:07:44.878286] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:13:49.561 [2024-07-23 15:07:44.878488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86217 ] 00:13:49.820 [2024-07-23 15:07:45.034741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.820 [2024-07-23 15:07:45.087257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.820 [2024-07-23 15:07:45.087380] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:49.820 [2024-07-23 15:07:45.087425] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:49.820 [2024-07-23 15:07:45.087456] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:49.820 00:13:49.820 real 0m0.423s 00:13:49.820 user 0m0.191s 00:13:49.820 sys 0m0.132s 00:13:49.820 15:07:45 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:13:49.820 ************************************ 00:13:49.820 END TEST bdev_json_nonenclosed 00:13:49.820 ************************************ 00:13:49.820 15:07:45 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.820 15:07:45 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:50.079 15:07:45 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:13:50.079 15:07:45 blockdev_general -- bdev/blockdev.sh@781 -- # true 00:13:50.079 15:07:45 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:50.079 15:07:45 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:50.079 15:07:45 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.079 15:07:45 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:50.079 ************************************ 00:13:50.079 START TEST bdev_json_nonarray 00:13:50.079 ************************************ 00:13:50.079 15:07:45 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:50.079 [2024-07-23 15:07:45.358395] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:13:50.079 [2024-07-23 15:07:45.358587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86242 ] 00:13:50.337 [2024-07-23 15:07:45.514609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.337 [2024-07-23 15:07:45.567374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.337 [2024-07-23 15:07:45.567501] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:50.337 [2024-07-23 15:07:45.567551] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:50.337 [2024-07-23 15:07:45.567571] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:50.337 00:13:50.337 real 0m0.413s 00:13:50.337 user 0m0.182s 00:13:50.337 sys 0m0.130s 00:13:50.337 15:07:45 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:13:50.337 15:07:45 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.337 15:07:45 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:50.337 ************************************ 00:13:50.337 END TEST bdev_json_nonarray 00:13:50.337 ************************************ 00:13:50.337 15:07:45 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:13:50.337 15:07:45 blockdev_general -- bdev/blockdev.sh@784 -- # true 00:13:50.337 15:07:45 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]] 00:13:50.337 15:07:45 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite '' 00:13:50.337 15:07:45 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:50.337 15:07:45 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.337 15:07:45 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:50.597 ************************************ 00:13:50.597 START TEST bdev_qos 00:13:50.597 ************************************ 00:13:50.597 15:07:45 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:13:50.597 15:07:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=86266 00:13:50.597 15:07:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 86266' 00:13:50.597 Process qos testing pid: 86266 00:13:50.597 15:07:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:50.597 15:07:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:50.597 15:07:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 86266 00:13:50.597 15:07:45 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 86266 ']' 00:13:50.597 15:07:45 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.597 15:07:45 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.597 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.597 15:07:45 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.597 15:07:45 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.597 15:07:45 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:50.597 [2024-07-23 15:07:45.837026] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:13:50.597 [2024-07-23 15:07:45.837218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86266 ] 00:13:50.597 [2024-07-23 15:07:45.989971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.856 [2024-07-23 15:07:46.046626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:51.424 Malloc_0 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.424 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:51.424 [ 00:13:51.424 { 00:13:51.424 "name": "Malloc_0", 00:13:51.424 "aliases": [ 00:13:51.424 "1fea0011-0aee-42e2-90ac-14a34aef1b43" 00:13:51.424 ], 00:13:51.424 "product_name": "Malloc disk", 00:13:51.424 "block_size": 512, 00:13:51.424 "num_blocks": 262144, 00:13:51.424 "uuid": "1fea0011-0aee-42e2-90ac-14a34aef1b43", 00:13:51.424 "assigned_rate_limits": { 00:13:51.424 "rw_ios_per_sec": 0, 00:13:51.424 "rw_mbytes_per_sec": 0, 00:13:51.424 "r_mbytes_per_sec": 0, 00:13:51.424 
"w_mbytes_per_sec": 0 00:13:51.424 }, 00:13:51.424 "claimed": false, 00:13:51.424 "zoned": false, 00:13:51.424 "supported_io_types": { 00:13:51.424 "read": true, 00:13:51.424 "write": true, 00:13:51.424 "unmap": true, 00:13:51.424 "flush": true, 00:13:51.424 "reset": true, 00:13:51.424 "nvme_admin": false, 00:13:51.424 "nvme_io": false, 00:13:51.424 "nvme_io_md": false, 00:13:51.424 "write_zeroes": true, 00:13:51.424 "zcopy": true, 00:13:51.424 "get_zone_info": false, 00:13:51.424 "zone_management": false, 00:13:51.424 "zone_append": false, 00:13:51.424 "compare": false, 00:13:51.424 "compare_and_write": false, 00:13:51.424 "abort": true, 00:13:51.424 "seek_hole": false, 00:13:51.424 "seek_data": false, 00:13:51.424 "copy": true, 00:13:51.424 "nvme_iov_md": false 00:13:51.424 }, 00:13:51.424 "memory_domains": [ 00:13:51.424 { 00:13:51.424 "dma_device_id": "system", 00:13:51.424 "dma_device_type": 1 00:13:51.424 }, 00:13:51.684 { 00:13:51.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.684 "dma_device_type": 2 00:13:51.684 } 00:13:51.684 ], 00:13:51.684 "driver_specific": {} 00:13:51.684 } 00:13:51.684 ] 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:51.684 Null_1 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:51.684 [ 00:13:51.684 { 00:13:51.684 "name": "Null_1", 00:13:51.684 "aliases": [ 00:13:51.684 "67e85e35-b8e8-48d0-839d-5743b828b003" 00:13:51.684 ], 00:13:51.684 "product_name": "Null disk", 00:13:51.684 "block_size": 512, 00:13:51.684 "num_blocks": 262144, 00:13:51.684 "uuid": "67e85e35-b8e8-48d0-839d-5743b828b003", 00:13:51.684 "assigned_rate_limits": { 00:13:51.684 "rw_ios_per_sec": 0, 00:13:51.684 "rw_mbytes_per_sec": 0, 00:13:51.684 "r_mbytes_per_sec": 
0, 00:13:51.684 "w_mbytes_per_sec": 0 00:13:51.684 }, 00:13:51.684 "claimed": false, 00:13:51.684 "zoned": false, 00:13:51.684 "supported_io_types": { 00:13:51.684 "read": true, 00:13:51.684 "write": true, 00:13:51.684 "unmap": false, 00:13:51.684 "flush": false, 00:13:51.684 "reset": true, 00:13:51.684 "nvme_admin": false, 00:13:51.684 "nvme_io": false, 00:13:51.684 "nvme_io_md": false, 00:13:51.684 "write_zeroes": true, 00:13:51.684 "zcopy": false, 00:13:51.684 "get_zone_info": false, 00:13:51.684 "zone_management": false, 00:13:51.684 "zone_append": false, 00:13:51.684 "compare": false, 00:13:51.684 "compare_and_write": false, 00:13:51.684 "abort": true, 00:13:51.684 "seek_hole": false, 00:13:51.684 "seek_data": false, 00:13:51.684 "copy": false, 00:13:51.684 "nvme_iov_md": false 00:13:51.684 }, 00:13:51.684 "driver_specific": {} 00:13:51.684 } 00:13:51.684 ] 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:13:51.684 15:07:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:13:51.684 Running I/O for 60 seconds... 
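(The QoS suite above builds its targets over JSON-RPC and then throttles them: it creates a 128 MiB / 512-byte-block malloc bdev (Malloc_0) and a null bdev (Null_1), samples unthrottled per-bdev throughput with scripts/iostat.py, and later applies limits with bdev_set_qos_limit before re-measuring. A minimal sketch of the same flow, assuming a target such as spdk_tgt or bdevperf -z is already listening on the default /var/tmp/spdk.sock and using only the RPC names and arguments traced in this log:

    scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512
    scripts/rpc.py bdev_null_create Null_1 128 512
    scripts/iostat.py -d -i 1 -t 5        # sample per-bdev IOPS/bandwidth while I/O runs
    scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 19000 Malloc_0
    scripts/rpc.py bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1

The 19000 IOPS and 12 MB/s values are the ones this particular run derives from its own 60-second baseline measurement below; a different host would compute different limits.)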
00:13:56.954 15:07:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 79463.99 317855.95 0.00 0.00 321536.00 0.00 0.00 ' 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=79463.99 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 79463 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=79463 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=19000 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 19000 -gt 1000 ']' 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 19000 Malloc_0 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 19000 IOPS Malloc_0 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:56.954 15:07:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:56.954 ************************************ 00:13:56.954 START TEST bdev_qos_iops 00:13:56.954 ************************************ 00:13:56.954 15:07:52 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 19000 IOPS Malloc_0 00:13:56.954 15:07:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=19000 00:13:56.954 15:07:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:13:56.954 15:07:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:13:56.954 15:07:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:13:56.954 15:07:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:13:56.954 15:07:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:13:56.954 15:07:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:56.954 15:07:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:13:56.954 15:07:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 00:14:02.224 15:07:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 19000.38 76001.50 0.00 0.00 77216.00 0.00 0.00 ' 00:14:02.224 15:07:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:14:02.224 15:07:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:14:02.224 15:07:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=19000.38 00:14:02.224 15:07:57 blockdev_general.bdev_qos.bdev_qos_iops -- 
bdev/blockdev.sh@384 -- # echo 19000 00:14:02.224 15:07:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=19000 00:14:02.224 15:07:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:14:02.224 ************************************ 00:14:02.224 END TEST bdev_qos_iops 00:14:02.224 ************************************ 00:14:02.224 15:07:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=17100 00:14:02.224 15:07:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=20900 00:14:02.224 15:07:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 19000 -lt 17100 ']' 00:14:02.224 15:07:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 19000 -gt 20900 ']' 00:14:02.224 00:14:02.224 real 0m5.228s 00:14:02.225 user 0m0.125s 00:14:02.225 sys 0m0.050s 00:14:02.225 15:07:57 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:02.225 15:07:57 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:14:02.225 15:07:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:14:02.225 15:07:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:14:02.225 15:07:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:14:02.225 15:07:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:14:02.225 15:07:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:14:02.225 15:07:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:14:02.225 15:07:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:02.225 15:07:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 31052.98 124211.91 0.00 0.00 125952.00 0.00 0.00 ' 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=125952.00 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 125952 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=125952 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=12 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 12 -lt 2 ']' 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 
00:14:07.495 15:08:02 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.495 15:08:02 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:07.495 ************************************ 00:14:07.495 START TEST bdev_qos_bw 00:14:07.495 ************************************ 00:14:07.495 15:08:02 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 12 BANDWIDTH Null_1 00:14:07.495 15:08:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=12 00:14:07.495 15:08:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:14:07.495 15:08:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:14:07.495 15:08:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:14:07.495 15:08:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:14:07.495 15:08:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:14:07.495 15:08:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:07.495 15:08:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:14:07.495 15:08:02 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 3072.06 12288.23 0.00 0.00 12592.00 0.00 0.00 ' 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=12592.00 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 12592 00:14:12.798 ************************************ 00:14:12.798 END TEST bdev_qos_bw 00:14:12.798 ************************************ 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=12592 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=12288 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=11059 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=13516 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 12592 -lt 11059 ']' 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 12592 -gt 13516 ']' 00:14:12.798 00:14:12.798 real 0m5.263s 00:14:12.798 user 0m0.126s 00:14:12.798 sys 0m0.040s 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:12.798 15:08:07 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:14:12.798 15:08:07 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:14:12.798 15:08:07 blockdev_general.bdev_qos -- 
bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:14:12.798 15:08:07 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.798 15:08:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:12.798 15:08:08 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.798 15:08:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:14:12.798 15:08:08 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:12.798 15:08:08 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.798 15:08:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:12.798 ************************************ 00:14:12.798 START TEST bdev_qos_ro_bw 00:14:12.798 ************************************ 00:14:12.798 15:08:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:14:12.798 15:08:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:14:12.798 15:08:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:14:12.798 15:08:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0 00:14:12.798 15:08:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:14:12.798 15:08:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:14:12.798 15:08:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:14:12.798 15:08:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:12.798 15:08:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:14:12.798 15:08:08 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:14:18.063 15:08:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 512.28 2049.13 0.00 0.00 2064.00 0.00 0.00 ' 00:14:18.063 15:08:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:14:18.063 15:08:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:18.063 15:08:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:14:18.063 15:08:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2064.00 00:14:18.063 15:08:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2064 00:14:18.063 15:08:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2064 00:14:18.063 15:08:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:18.063 15:08:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048 00:14:18.063 15:08:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:14:18.063 15:08:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252 00:14:18.063 15:08:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2064 -lt 1843 ']' 00:14:18.063 15:08:13 
blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2064 -gt 2252 ']' 00:14:18.063 00:14:18.063 real 0m5.196s 00:14:18.063 user 0m0.136s 00:14:18.063 sys 0m0.043s 00:14:18.063 15:08:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:18.063 15:08:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:14:18.063 ************************************ 00:14:18.063 END TEST bdev_qos_ro_bw 00:14:18.063 ************************************ 00:14:18.063 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:14:18.063 15:08:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:14:18.063 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.063 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:18.668 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.668 15:08:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:14:18.668 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.668 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:18.668 00:14:18.668 Latency(us) 00:14:18.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.668 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:18.668 Malloc_0 : 26.79 26519.49 103.59 0.00 0.00 9560.80 2137.72 503316.48 00:14:18.668 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:18.668 Null_1 : 26.90 28580.15 111.64 0.00 0.00 8937.95 612.45 103858.96 00:14:18.668 =================================================================================================================== 00:14:18.668 Total : 55099.64 215.23 0.00 0.00 9237.12 612.45 503316.48 00:14:18.668 0 00:14:18.668 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.669 15:08:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 86266 00:14:18.669 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 86266 ']' 00:14:18.669 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 86266 00:14:18.669 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:14:18.669 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:18.669 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86266 00:14:18.669 killing process with pid 86266 00:14:18.669 Received shutdown signal, test time was about 26.948733 seconds 00:14:18.669 00:14:18.669 Latency(us) 00:14:18.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.669 =================================================================================================================== 00:14:18.669 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:18.669 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:18.669 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:18.669 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86266' 00:14:18.669 15:08:13 
blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 86266 00:14:18.669 15:08:13 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 86266 00:14:18.928 ************************************ 00:14:18.928 END TEST bdev_qos 00:14:18.928 ************************************ 00:14:18.928 15:08:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT 00:14:18.928 00:14:18.928 real 0m28.415s 00:14:18.928 user 0m29.224s 00:14:18.928 sys 0m0.777s 00:14:18.928 15:08:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:18.928 15:08:14 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:18.928 15:08:14 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:18.928 15:08:14 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:14:18.928 15:08:14 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:18.928 15:08:14 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.928 15:08:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:18.928 ************************************ 00:14:18.928 START TEST bdev_qd_sampling 00:14:18.928 ************************************ 00:14:18.928 15:08:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:14:18.928 15:08:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:14:18.928 15:08:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=86669 00:14:18.928 15:08:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:14:18.928 15:08:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 86669' 00:14:18.928 Process bdev QD sampling period testing pid: 86669 00:14:18.928 15:08:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:14:18.928 15:08:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 86669 00:14:18.928 15:08:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 86669 ']' 00:14:18.928 15:08:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.928 15:08:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.928 15:08:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.928 15:08:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.928 15:08:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:18.928 [2024-07-23 15:08:14.317841] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:14:18.928 [2024-07-23 15:08:14.318050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86669 ] 00:14:19.187 [2024-07-23 15:08:14.472379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:19.187 [2024-07-23 15:08:14.531350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.187 [2024-07-23 15:08:14.531437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.121 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.121 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:14:20.121 15:08:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:14:20.121 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.121 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:20.121 Malloc_QD 00:14:20.121 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.121 15:08:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:14:20.121 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:20.122 [ 00:14:20.122 { 00:14:20.122 "name": "Malloc_QD", 00:14:20.122 "aliases": [ 00:14:20.122 "61206c77-46db-4bd3-b8c8-9709b437e717" 00:14:20.122 ], 00:14:20.122 "product_name": "Malloc disk", 00:14:20.122 "block_size": 512, 00:14:20.122 "num_blocks": 262144, 00:14:20.122 "uuid": "61206c77-46db-4bd3-b8c8-9709b437e717", 00:14:20.122 "assigned_rate_limits": { 00:14:20.122 "rw_ios_per_sec": 0, 00:14:20.122 "rw_mbytes_per_sec": 0, 00:14:20.122 "r_mbytes_per_sec": 0, 00:14:20.122 "w_mbytes_per_sec": 0 00:14:20.122 }, 00:14:20.122 "claimed": false, 00:14:20.122 "zoned": false, 00:14:20.122 "supported_io_types": { 00:14:20.122 "read": true, 00:14:20.122 "write": true, 00:14:20.122 "unmap": true, 00:14:20.122 "flush": true, 00:14:20.122 "reset": true, 00:14:20.122 "nvme_admin": false, 
00:14:20.122 "nvme_io": false, 00:14:20.122 "nvme_io_md": false, 00:14:20.122 "write_zeroes": true, 00:14:20.122 "zcopy": true, 00:14:20.122 "get_zone_info": false, 00:14:20.122 "zone_management": false, 00:14:20.122 "zone_append": false, 00:14:20.122 "compare": false, 00:14:20.122 "compare_and_write": false, 00:14:20.122 "abort": true, 00:14:20.122 "seek_hole": false, 00:14:20.122 "seek_data": false, 00:14:20.122 "copy": true, 00:14:20.122 "nvme_iov_md": false 00:14:20.122 }, 00:14:20.122 "memory_domains": [ 00:14:20.122 { 00:14:20.122 "dma_device_id": "system", 00:14:20.122 "dma_device_type": 1 00:14:20.122 }, 00:14:20.122 { 00:14:20.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.122 "dma_device_type": 2 00:14:20.122 } 00:14:20.122 ], 00:14:20.122 "driver_specific": {} 00:14:20.122 } 00:14:20.122 ] 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:14:20.122 15:08:15 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:20.122 Running I/O for 5 seconds... 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:14:22.027 "tick_rate": 2100000000, 00:14:22.027 "ticks": 1625411535594, 00:14:22.027 "bdevs": [ 00:14:22.027 { 00:14:22.027 "name": "Malloc_QD", 00:14:22.027 "bytes_read": 907055616, 00:14:22.027 "num_read_ops": 221443, 00:14:22.027 "bytes_written": 0, 00:14:22.027 "num_write_ops": 0, 00:14:22.027 "bytes_unmapped": 0, 00:14:22.027 "num_unmap_ops": 0, 00:14:22.027 "bytes_copied": 0, 00:14:22.027 "num_copy_ops": 0, 00:14:22.027 "read_latency_ticks": 2071699043064, 00:14:22.027 "max_read_latency_ticks": 11538864, 00:14:22.027 "min_read_latency_ticks": 287460, 00:14:22.027 "write_latency_ticks": 0, 00:14:22.027 "max_write_latency_ticks": 0, 00:14:22.027 "min_write_latency_ticks": 0, 00:14:22.027 "unmap_latency_ticks": 0, 00:14:22.027 "max_unmap_latency_ticks": 0, 00:14:22.027 
"min_unmap_latency_ticks": 0, 00:14:22.027 "copy_latency_ticks": 0, 00:14:22.027 "max_copy_latency_ticks": 0, 00:14:22.027 "min_copy_latency_ticks": 0, 00:14:22.027 "io_error": {}, 00:14:22.027 "queue_depth_polling_period": 10, 00:14:22.027 "queue_depth": 512, 00:14:22.027 "io_time": 30, 00:14:22.027 "weighted_io_time": 15360 00:14:22.027 } 00:14:22.027 ] 00:14:22.027 }' 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:22.027 00:14:22.027 Latency(us) 00:14:22.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.027 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:22.027 Malloc_QD : 1.97 57016.46 222.72 0.00 0.00 4478.63 1170.29 5024.43 00:14:22.027 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:22.027 Malloc_QD : 1.97 57723.63 225.48 0.00 0.00 4424.25 776.29 5523.75 00:14:22.027 =================================================================================================================== 00:14:22.027 Total : 114740.08 448.20 0.00 0.00 4451.25 776.29 5523.75 00:14:22.027 0 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 86669 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 86669 ']' 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 86669 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:22.027 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86669 00:14:22.286 killing process with pid 86669 00:14:22.286 Received shutdown signal, test time was about 2.040906 seconds 00:14:22.286 00:14:22.286 Latency(us) 00:14:22.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.286 =================================================================================================================== 00:14:22.286 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.286 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:22.286 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:22.286 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86669' 00:14:22.286 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 86669 00:14:22.286 15:08:17 blockdev_general.bdev_qd_sampling 
-- common/autotest_common.sh@972 -- # wait 86669 00:14:22.546 15:08:17 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:14:22.546 00:14:22.546 real 0m3.495s 00:14:22.546 user 0m6.744s 00:14:22.546 sys 0m0.439s 00:14:22.546 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:22.546 15:08:17 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:22.546 ************************************ 00:14:22.546 END TEST bdev_qd_sampling 00:14:22.546 ************************************ 00:14:22.546 15:08:17 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:22.546 15:08:17 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:14:22.546 15:08:17 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:22.546 15:08:17 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.546 15:08:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:22.546 ************************************ 00:14:22.546 START TEST bdev_error 00:14:22.546 ************************************ 00:14:22.546 15:08:17 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:14:22.546 15:08:17 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:14:22.546 15:08:17 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:14:22.546 15:08:17 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:14:22.546 15:08:17 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=86741 00:14:22.546 Process error testing pid: 86741 00:14:22.546 15:08:17 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 86741' 00:14:22.546 15:08:17 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 86741 00:14:22.546 15:08:17 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:14:22.546 15:08:17 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 86741 ']' 00:14:22.546 15:08:17 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.546 15:08:17 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.546 15:08:17 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.546 15:08:17 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.546 15:08:17 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:22.546 [2024-07-23 15:08:17.872265] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
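Editorial note: the bdev_qd_sampling run that finished above boils down to a small RPC sequence: enable queue-depth sampling on the Malloc bdev, let bdevperf drive random reads, then read back the iostat JSON and check the reported polling period. The following is a minimal sketch of that flow, not a verbatim excerpt of blockdev.sh; the rpc.py path, bdev name, sampling value, and jq field names all mirror the log above.

```bash
# Minimal sketch of the queue-depth sampling flow exercised above (illustrative only).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Enable QD sampling on the malloc bdev with the period value used in the log.
$rpc bdev_set_qd_sampling_period Malloc_QD 10

# Let bdevperf run I/O for a while, then read the accumulated statistics.
iostats=$($rpc bdev_get_iostat -b Malloc_QD)

# The test asserts that the configured period is reported back in the iostat JSON.
period=$(jq -r '.bdevs[0].queue_depth_polling_period' <<< "$iostats")
[ "$period" -eq 10 ] || echo "unexpected sampling period: $period"
```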
00:14:22.546 [2024-07-23 15:08:17.872452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86741 ] 00:14:22.805 [2024-07-23 15:08:18.020901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.805 [2024-07-23 15:08:18.067252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:14:23.373 15:08:18 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:23.373 Dev_1 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.373 15:08:18 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.373 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:23.373 [ 00:14:23.373 { 00:14:23.373 "name": "Dev_1", 00:14:23.373 "aliases": [ 00:14:23.373 "93e5f647-7540-4a85-8f0f-2126941de17a" 00:14:23.373 ], 00:14:23.373 "product_name": "Malloc disk", 00:14:23.373 "block_size": 512, 00:14:23.373 "num_blocks": 262144, 00:14:23.373 "uuid": "93e5f647-7540-4a85-8f0f-2126941de17a", 00:14:23.373 "assigned_rate_limits": { 00:14:23.373 "rw_ios_per_sec": 0, 00:14:23.373 "rw_mbytes_per_sec": 0, 00:14:23.373 "r_mbytes_per_sec": 0, 00:14:23.373 "w_mbytes_per_sec": 0 00:14:23.373 }, 00:14:23.373 "claimed": false, 00:14:23.373 "zoned": false, 00:14:23.373 "supported_io_types": { 00:14:23.373 "read": true, 00:14:23.373 "write": true, 00:14:23.373 "unmap": true, 00:14:23.373 "flush": true, 00:14:23.373 "reset": true, 00:14:23.373 "nvme_admin": false, 00:14:23.373 "nvme_io": false, 00:14:23.374 "nvme_io_md": false, 00:14:23.374 "write_zeroes": true, 00:14:23.374 "zcopy": true, 00:14:23.374 "get_zone_info": false, 00:14:23.374 "zone_management": false, 00:14:23.374 "zone_append": false, 
00:14:23.374 "compare": false, 00:14:23.374 "compare_and_write": false, 00:14:23.374 "abort": true, 00:14:23.374 "seek_hole": false, 00:14:23.374 "seek_data": false, 00:14:23.374 "copy": true, 00:14:23.374 "nvme_iov_md": false 00:14:23.374 }, 00:14:23.374 "memory_domains": [ 00:14:23.374 { 00:14:23.374 "dma_device_id": "system", 00:14:23.374 "dma_device_type": 1 00:14:23.374 }, 00:14:23.374 { 00:14:23.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.374 "dma_device_type": 2 00:14:23.374 } 00:14:23.374 ], 00:14:23.374 "driver_specific": {} 00:14:23.374 } 00:14:23.374 ] 00:14:23.374 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.374 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:14:23.374 15:08:18 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:14:23.374 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.374 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:23.633 true 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.633 15:08:18 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:23.633 Dev_2 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.633 15:08:18 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.633 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:23.633 [ 00:14:23.633 { 00:14:23.633 "name": "Dev_2", 00:14:23.633 "aliases": [ 00:14:23.633 "7b1288ea-e5e9-4d2d-97a0-bcf9074498b2" 00:14:23.633 ], 00:14:23.633 "product_name": "Malloc disk", 00:14:23.633 "block_size": 512, 00:14:23.633 "num_blocks": 262144, 00:14:23.633 "uuid": "7b1288ea-e5e9-4d2d-97a0-bcf9074498b2", 00:14:23.634 "assigned_rate_limits": { 00:14:23.634 "rw_ios_per_sec": 0, 00:14:23.634 "rw_mbytes_per_sec": 0, 00:14:23.634 "r_mbytes_per_sec": 0, 00:14:23.634 "w_mbytes_per_sec": 0 00:14:23.634 }, 00:14:23.634 "claimed": 
false, 00:14:23.634 "zoned": false, 00:14:23.634 "supported_io_types": { 00:14:23.634 "read": true, 00:14:23.634 "write": true, 00:14:23.634 "unmap": true, 00:14:23.634 "flush": true, 00:14:23.634 "reset": true, 00:14:23.634 "nvme_admin": false, 00:14:23.634 "nvme_io": false, 00:14:23.634 "nvme_io_md": false, 00:14:23.634 "write_zeroes": true, 00:14:23.634 "zcopy": true, 00:14:23.634 "get_zone_info": false, 00:14:23.634 "zone_management": false, 00:14:23.634 "zone_append": false, 00:14:23.634 "compare": false, 00:14:23.634 "compare_and_write": false, 00:14:23.634 "abort": true, 00:14:23.634 "seek_hole": false, 00:14:23.634 "seek_data": false, 00:14:23.634 "copy": true, 00:14:23.634 "nvme_iov_md": false 00:14:23.634 }, 00:14:23.634 "memory_domains": [ 00:14:23.634 { 00:14:23.634 "dma_device_id": "system", 00:14:23.634 "dma_device_type": 1 00:14:23.634 }, 00:14:23.634 { 00:14:23.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.634 "dma_device_type": 2 00:14:23.634 } 00:14:23.634 ], 00:14:23.634 "driver_specific": {} 00:14:23.634 } 00:14:23.634 ] 00:14:23.634 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.634 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:14:23.634 15:08:18 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:23.634 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.634 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:23.634 15:08:18 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.634 15:08:18 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:14:23.634 15:08:18 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:23.634 Running I/O for 5 seconds... 00:14:24.570 15:08:19 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 86741 00:14:24.570 Process is existed as continue on error is set. Pid: 86741 00:14:24.570 15:08:19 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. 
Pid: 86741' 00:14:24.570 15:08:19 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:14:24.570 15:08:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.570 15:08:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:24.570 15:08:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.570 15:08:19 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:14:24.570 15:08:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.570 15:08:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:24.570 15:08:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.570 15:08:19 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:14:24.570 Timeout while waiting for response: 00:14:24.570 00:14:24.570 00:14:28.760 00:14:28.760 Latency(us) 00:14:28.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.760 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:28.760 EE_Dev_1 : 0.93 46435.94 181.39 5.40 0.00 341.89 153.11 795.79 00:14:28.760 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:28.760 Dev_2 : 5.00 101046.02 394.71 0.00 0.00 155.71 54.37 18974.23 00:14:28.760 =================================================================================================================== 00:14:28.760 Total : 147481.96 576.10 5.40 0.00 170.31 54.37 18974.23 00:14:29.696 15:08:24 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 86741 00:14:29.696 15:08:24 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 86741 ']' 00:14:29.696 15:08:24 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 86741 00:14:29.696 15:08:24 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:14:29.697 15:08:24 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:29.697 15:08:24 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86741 00:14:29.697 15:08:24 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:29.697 killing process with pid 86741 00:14:29.697 Received shutdown signal, test time was about 5.000000 seconds 00:14:29.697 00:14:29.697 Latency(us) 00:14:29.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.697 =================================================================================================================== 00:14:29.697 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.697 15:08:24 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:29.697 15:08:24 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86741' 00:14:29.697 15:08:24 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 86741 00:14:29.697 15:08:24 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 86741 00:14:29.955 15:08:25 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=86831 00:14:29.955 Process error testing pid: 86831 00:14:29.955 15:08:25 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 
5 '' 00:14:29.955 15:08:25 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 86831' 00:14:29.955 15:08:25 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 86831 00:14:29.955 15:08:25 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 86831 ']' 00:14:29.955 15:08:25 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.955 15:08:25 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.955 15:08:25 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.955 15:08:25 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.955 15:08:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:29.955 [2024-07-23 15:08:25.296101] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:14:29.955 [2024-07-23 15:08:25.296334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86831 ] 00:14:30.213 [2024-07-23 15:08:25.448667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.213 [2024-07-23 15:08:25.495620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:14:30.785 15:08:26 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:30.785 Dev_1 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.785 15:08:26 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:30.785 15:08:26 blockdev_general.bdev_error -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.785 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:30.785 [ 00:14:30.785 { 00:14:31.045 "name": "Dev_1", 00:14:31.045 "aliases": [ 00:14:31.045 "e942acb8-9168-4f37-a344-977a44872bc5" 00:14:31.045 ], 00:14:31.045 "product_name": "Malloc disk", 00:14:31.045 "block_size": 512, 00:14:31.045 "num_blocks": 262144, 00:14:31.045 "uuid": "e942acb8-9168-4f37-a344-977a44872bc5", 00:14:31.045 "assigned_rate_limits": { 00:14:31.045 "rw_ios_per_sec": 0, 00:14:31.045 "rw_mbytes_per_sec": 0, 00:14:31.045 "r_mbytes_per_sec": 0, 00:14:31.045 "w_mbytes_per_sec": 0 00:14:31.045 }, 00:14:31.045 "claimed": false, 00:14:31.045 "zoned": false, 00:14:31.045 "supported_io_types": { 00:14:31.045 "read": true, 00:14:31.045 "write": true, 00:14:31.045 "unmap": true, 00:14:31.045 "flush": true, 00:14:31.045 "reset": true, 00:14:31.045 "nvme_admin": false, 00:14:31.045 "nvme_io": false, 00:14:31.045 "nvme_io_md": false, 00:14:31.045 "write_zeroes": true, 00:14:31.045 "zcopy": true, 00:14:31.045 "get_zone_info": false, 00:14:31.045 "zone_management": false, 00:14:31.045 "zone_append": false, 00:14:31.045 "compare": false, 00:14:31.045 "compare_and_write": false, 00:14:31.045 "abort": true, 00:14:31.045 "seek_hole": false, 00:14:31.045 "seek_data": false, 00:14:31.045 "copy": true, 00:14:31.045 "nvme_iov_md": false 00:14:31.045 }, 00:14:31.045 "memory_domains": [ 00:14:31.045 { 00:14:31.045 "dma_device_id": "system", 00:14:31.045 "dma_device_type": 1 00:14:31.045 }, 00:14:31.045 { 00:14:31.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.045 "dma_device_type": 2 00:14:31.045 } 00:14:31.045 ], 00:14:31.045 "driver_specific": {} 00:14:31.045 } 00:14:31.045 ] 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:14:31.045 15:08:26 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:31.045 true 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.045 15:08:26 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:31.045 Dev_2 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.045 15:08:26 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd 
bdev_wait_for_examine 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:31.045 [ 00:14:31.045 { 00:14:31.045 "name": "Dev_2", 00:14:31.045 "aliases": [ 00:14:31.045 "005971c3-220b-4075-bb4b-1616daf01a45" 00:14:31.045 ], 00:14:31.045 "product_name": "Malloc disk", 00:14:31.045 "block_size": 512, 00:14:31.045 "num_blocks": 262144, 00:14:31.045 "uuid": "005971c3-220b-4075-bb4b-1616daf01a45", 00:14:31.045 "assigned_rate_limits": { 00:14:31.045 "rw_ios_per_sec": 0, 00:14:31.045 "rw_mbytes_per_sec": 0, 00:14:31.045 "r_mbytes_per_sec": 0, 00:14:31.045 "w_mbytes_per_sec": 0 00:14:31.045 }, 00:14:31.045 "claimed": false, 00:14:31.045 "zoned": false, 00:14:31.045 "supported_io_types": { 00:14:31.045 "read": true, 00:14:31.045 "write": true, 00:14:31.045 "unmap": true, 00:14:31.045 "flush": true, 00:14:31.045 "reset": true, 00:14:31.045 "nvme_admin": false, 00:14:31.045 "nvme_io": false, 00:14:31.045 "nvme_io_md": false, 00:14:31.045 "write_zeroes": true, 00:14:31.045 "zcopy": true, 00:14:31.045 "get_zone_info": false, 00:14:31.045 "zone_management": false, 00:14:31.045 "zone_append": false, 00:14:31.045 "compare": false, 00:14:31.045 "compare_and_write": false, 00:14:31.045 "abort": true, 00:14:31.045 "seek_hole": false, 00:14:31.045 "seek_data": false, 00:14:31.045 "copy": true, 00:14:31.045 "nvme_iov_md": false 00:14:31.045 }, 00:14:31.045 "memory_domains": [ 00:14:31.045 { 00:14:31.045 "dma_device_id": "system", 00:14:31.045 "dma_device_type": 1 00:14:31.045 }, 00:14:31.045 { 00:14:31.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.045 "dma_device_type": 2 00:14:31.045 } 00:14:31.045 ], 00:14:31.045 "driver_specific": {} 00:14:31.045 } 00:14:31.045 ] 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:14:31.045 15:08:26 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:31.045 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.045 15:08:26 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 86831 00:14:31.046 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:14:31.046 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 86831 00:14:31.046 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:14:31.046 15:08:26 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:31.046 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:14:31.046 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:14:31.046 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:31.046 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 86831 00:14:31.046 Running I/O for 5 seconds... 00:14:31.046 task offset: 115376 on job bdev=EE_Dev_1 fails 00:14:31.046 00:14:31.046 Latency(us) 00:14:31.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.046 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:31.046 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:14:31.046 EE_Dev_1 : 0.00 28460.54 111.17 6468.31 0.00 382.17 142.38 690.47 00:14:31.046 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:31.046 Dev_2 : 0.00 21262.46 83.06 0.00 0.00 516.75 148.24 932.33 00:14:31.046 =================================================================================================================== 00:14:31.046 Total : 49723.00 194.23 6468.31 0.00 455.16 142.38 932.33 00:14:31.046 request: 00:14:31.046 { 00:14:31.046 "method": "perform_tests", 00:14:31.046 "req_id": 1 00:14:31.046 } 00:14:31.046 Got JSON-RPC error response 00:14:31.046 response: 00:14:31.046 { 00:14:31.046 "code": -32603, 00:14:31.046 "message": "bdevperf failed with error Operation not permitted" 00:14:31.046 } 00:14:31.046 [2024-07-23 15:08:26.411227] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:31.304 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:14:31.304 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:31.304 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:14:31.304 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:14:31.304 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:14:31.304 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:31.304 00:14:31.304 real 0m8.926s 00:14:31.304 user 0m9.058s 00:14:31.304 sys 0m0.787s 00:14:31.304 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.304 15:08:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:31.304 ************************************ 00:14:31.304 END TEST bdev_error 00:14:31.304 ************************************ 00:14:31.563 15:08:26 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:31.563 15:08:26 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:14:31.563 15:08:26 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:31.563 15:08:26 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.563 15:08:26 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:31.563 ************************************ 00:14:31.563 START TEST bdev_stat 00:14:31.563 ************************************ 00:14:31.563 Process Bdev IO statistics testing pid: 86867 00:14:31.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:31.563 15:08:26 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:14:31.563 15:08:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:14:31.563 15:08:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=86867 00:14:31.564 15:08:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 86867' 00:14:31.564 15:08:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:14:31.564 15:08:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 86867 00:14:31.564 15:08:26 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 86867 ']' 00:14:31.564 15:08:26 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.564 15:08:26 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.564 15:08:26 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.564 15:08:26 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.564 15:08:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:14:31.564 15:08:26 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:31.564 [2024-07-23 15:08:26.869018] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:14:31.564 [2024-07-23 15:08:26.869206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86867 ] 00:14:31.823 [2024-07-23 15:08:27.027278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:31.823 [2024-07-23 15:08:27.084441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.823 [2024-07-23 15:08:27.084530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:32.390 Malloc_STAT 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@900 
-- # bdev_timeout=2000 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.390 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:32.390 [ 00:14:32.390 { 00:14:32.390 "name": "Malloc_STAT", 00:14:32.390 "aliases": [ 00:14:32.390 "6f378b1b-fd75-40e7-8317-6d1b434ea6a6" 00:14:32.390 ], 00:14:32.390 "product_name": "Malloc disk", 00:14:32.390 "block_size": 512, 00:14:32.390 "num_blocks": 262144, 00:14:32.390 "uuid": "6f378b1b-fd75-40e7-8317-6d1b434ea6a6", 00:14:32.390 "assigned_rate_limits": { 00:14:32.390 "rw_ios_per_sec": 0, 00:14:32.390 "rw_mbytes_per_sec": 0, 00:14:32.390 "r_mbytes_per_sec": 0, 00:14:32.390 "w_mbytes_per_sec": 0 00:14:32.390 }, 00:14:32.390 "claimed": false, 00:14:32.390 "zoned": false, 00:14:32.390 "supported_io_types": { 00:14:32.390 "read": true, 00:14:32.390 "write": true, 00:14:32.390 "unmap": true, 00:14:32.390 "flush": true, 00:14:32.390 "reset": true, 00:14:32.390 "nvme_admin": false, 00:14:32.390 "nvme_io": false, 00:14:32.390 "nvme_io_md": false, 00:14:32.390 "write_zeroes": true, 00:14:32.390 "zcopy": true, 00:14:32.390 "get_zone_info": false, 00:14:32.390 "zone_management": false, 00:14:32.390 "zone_append": false, 00:14:32.390 "compare": false, 00:14:32.390 "compare_and_write": false, 00:14:32.390 "abort": true, 00:14:32.390 "seek_hole": false, 00:14:32.390 "seek_data": false, 00:14:32.390 "copy": true, 00:14:32.390 "nvme_iov_md": false 00:14:32.390 }, 00:14:32.391 "memory_domains": [ 00:14:32.391 { 00:14:32.391 "dma_device_id": "system", 00:14:32.391 "dma_device_type": 1 00:14:32.391 }, 00:14:32.391 { 00:14:32.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.391 "dma_device_type": 2 00:14:32.391 } 00:14:32.391 ], 00:14:32.391 "driver_specific": {} 00:14:32.391 } 00:14:32.391 ] 00:14:32.391 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.391 15:08:27 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:14:32.391 15:08:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:14:32.391 15:08:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:32.649 Running I/O for 10 seconds... 
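Editorial note: the bdev_stat run that follows uses two cores (-m 0x3 with -C) so that per-channel statistics exist, then cross-checks them against the bdev totals. Roughly: snapshot the total read count, read the per-channel counts, sum them, snapshot the total again, and require the per-channel sum to land between the two snapshots. A minimal sketch of that check, with bdev name and jq paths mirroring the log below (illustrative only):

```bash
# Sketch of the per-channel iostat consistency check performed below.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

count1=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')

# Per-channel statistics (-c); one entry per thread driving I/O.
per_ch=$($rpc bdev_get_iostat -b Malloc_STAT -c)
ch0=$(jq -r '.channels[0].num_read_ops' <<< "$per_ch")
ch1=$(jq -r '.channels[1].num_read_ops' <<< "$per_ch")
ch_sum=$((ch0 + ch1))

count2=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')

# I/O keeps running between the snapshots, so the per-channel sum must fall
# between the first and second totals.
[ "$ch_sum" -ge "$count1" ] && [ "$ch_sum" -le "$count2" ] || echo "stat mismatch"
```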
00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:14:34.556 "tick_rate": 2100000000, 00:14:34.556 "ticks": 1651556635994, 00:14:34.556 "bdevs": [ 00:14:34.556 { 00:14:34.556 "name": "Malloc_STAT", 00:14:34.556 "bytes_read": 894472704, 00:14:34.556 "num_read_ops": 218371, 00:14:34.556 "bytes_written": 0, 00:14:34.556 "num_write_ops": 0, 00:14:34.556 "bytes_unmapped": 0, 00:14:34.556 "num_unmap_ops": 0, 00:14:34.556 "bytes_copied": 0, 00:14:34.556 "num_copy_ops": 0, 00:14:34.556 "read_latency_ticks": 2070586886244, 00:14:34.556 "max_read_latency_ticks": 12884176, 00:14:34.556 "min_read_latency_ticks": 281814, 00:14:34.556 "write_latency_ticks": 0, 00:14:34.556 "max_write_latency_ticks": 0, 00:14:34.556 "min_write_latency_ticks": 0, 00:14:34.556 "unmap_latency_ticks": 0, 00:14:34.556 "max_unmap_latency_ticks": 0, 00:14:34.556 "min_unmap_latency_ticks": 0, 00:14:34.556 "copy_latency_ticks": 0, 00:14:34.556 "max_copy_latency_ticks": 0, 00:14:34.556 "min_copy_latency_ticks": 0, 00:14:34.556 "io_error": {} 00:14:34.556 } 00:14:34.556 ] 00:14:34.556 }' 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=218371 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:14:34.556 "tick_rate": 2100000000, 00:14:34.556 "ticks": 1651626943120, 00:14:34.556 "name": "Malloc_STAT", 00:14:34.556 "channels": [ 00:14:34.556 { 00:14:34.556 "thread_id": 2, 00:14:34.556 "bytes_read": 451936256, 00:14:34.556 "num_read_ops": 110336, 00:14:34.556 "bytes_written": 0, 00:14:34.556 "num_write_ops": 0, 00:14:34.556 "bytes_unmapped": 0, 00:14:34.556 "num_unmap_ops": 0, 
00:14:34.556 "bytes_copied": 0, 00:14:34.556 "num_copy_ops": 0, 00:14:34.556 "read_latency_ticks": 1052719558146, 00:14:34.556 "max_read_latency_ticks": 11746880, 00:14:34.556 "min_read_latency_ticks": 7217544, 00:14:34.556 "write_latency_ticks": 0, 00:14:34.556 "max_write_latency_ticks": 0, 00:14:34.556 "min_write_latency_ticks": 0, 00:14:34.556 "unmap_latency_ticks": 0, 00:14:34.556 "max_unmap_latency_ticks": 0, 00:14:34.556 "min_unmap_latency_ticks": 0, 00:14:34.556 "copy_latency_ticks": 0, 00:14:34.556 "max_copy_latency_ticks": 0, 00:14:34.556 "min_copy_latency_ticks": 0 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "thread_id": 3, 00:14:34.556 "bytes_read": 458227712, 00:14:34.556 "num_read_ops": 111872, 00:14:34.556 "bytes_written": 0, 00:14:34.556 "num_write_ops": 0, 00:14:34.556 "bytes_unmapped": 0, 00:14:34.556 "num_unmap_ops": 0, 00:14:34.556 "bytes_copied": 0, 00:14:34.556 "num_copy_ops": 0, 00:14:34.556 "read_latency_ticks": 1054550926380, 00:14:34.556 "max_read_latency_ticks": 12884176, 00:14:34.556 "min_read_latency_ticks": 6732714, 00:14:34.556 "write_latency_ticks": 0, 00:14:34.556 "max_write_latency_ticks": 0, 00:14:34.556 "min_write_latency_ticks": 0, 00:14:34.556 "unmap_latency_ticks": 0, 00:14:34.556 "max_unmap_latency_ticks": 0, 00:14:34.556 "min_unmap_latency_ticks": 0, 00:14:34.556 "copy_latency_ticks": 0, 00:14:34.556 "max_copy_latency_ticks": 0, 00:14:34.556 "min_copy_latency_ticks": 0 00:14:34.556 } 00:14:34.556 ] 00:14:34.556 }' 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=110336 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=110336 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=111872 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=222208 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:14:34.556 "tick_rate": 2100000000, 00:14:34.556 "ticks": 1651736309150, 00:14:34.556 "bdevs": [ 00:14:34.556 { 00:14:34.556 "name": "Malloc_STAT", 00:14:34.556 "bytes_read": 934318592, 00:14:34.556 "num_read_ops": 228099, 00:14:34.556 "bytes_written": 0, 00:14:34.556 "num_write_ops": 0, 00:14:34.556 "bytes_unmapped": 0, 00:14:34.556 "num_unmap_ops": 0, 00:14:34.556 "bytes_copied": 0, 00:14:34.556 "num_copy_ops": 0, 00:14:34.556 "read_latency_ticks": 2164625779994, 00:14:34.556 "max_read_latency_ticks": 12884176, 00:14:34.556 "min_read_latency_ticks": 281814, 00:14:34.556 "write_latency_ticks": 0, 00:14:34.556 "max_write_latency_ticks": 0, 00:14:34.556 "min_write_latency_ticks": 0, 00:14:34.556 "unmap_latency_ticks": 0, 00:14:34.556 "max_unmap_latency_ticks": 0, 00:14:34.556 "min_unmap_latency_ticks": 0, 00:14:34.556 "copy_latency_ticks": 0, 00:14:34.556 "max_copy_latency_ticks": 0, 00:14:34.556 
"min_copy_latency_ticks": 0, 00:14:34.556 "io_error": {} 00:14:34.556 } 00:14:34.556 ] 00:14:34.556 }' 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=228099 00:14:34.556 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 222208 -lt 218371 ']' 00:14:34.557 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 222208 -gt 228099 ']' 00:14:34.557 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:14:34.557 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.557 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:34.557 00:14:34.557 Latency(us) 00:14:34.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.557 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:34.557 Malloc_STAT : 2.06 56214.13 219.59 0.00 0.00 4540.82 1778.83 5804.62 00:14:34.557 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:34.557 Malloc_STAT : 2.06 56713.93 221.54 0.00 0.00 4502.65 1497.97 6428.77 00:14:34.557 =================================================================================================================== 00:14:34.557 Total : 112928.06 441.13 0.00 0.00 4521.65 1497.97 6428.77 00:14:34.557 0 00:14:34.815 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.815 15:08:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 86867 00:14:34.815 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 86867 ']' 00:14:34.815 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 86867 00:14:34.815 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:14:34.815 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.815 15:08:29 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86867 00:14:34.815 killing process with pid 86867 00:14:34.816 Received shutdown signal, test time was about 2.130869 seconds 00:14:34.816 00:14:34.816 Latency(us) 00:14:34.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.816 =================================================================================================================== 00:14:34.816 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:34.816 15:08:30 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:34.816 15:08:30 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:34.816 15:08:30 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86867' 00:14:34.816 15:08:30 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 86867 00:14:34.816 15:08:30 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 86867 00:14:35.075 15:08:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:14:35.075 ************************************ 00:14:35.075 END TEST bdev_stat 00:14:35.075 ************************************ 00:14:35.075 00:14:35.075 real 0m3.481s 00:14:35.075 user 0m6.686s 00:14:35.075 sys 0m0.437s 00:14:35.075 
15:08:30 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.075 15:08:30 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:35.075 15:08:30 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:35.075 15:08:30 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:14:35.075 15:08:30 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:14:35.075 15:08:30 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:35.075 15:08:30 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:14:35.075 15:08:30 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:35.075 15:08:30 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:35.075 15:08:30 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:14:35.075 15:08:30 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:14:35.075 15:08:30 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:14:35.075 15:08:30 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:14:35.075 00:14:35.075 real 1m52.625s 00:14:35.075 user 5m8.943s 00:14:35.075 sys 0m24.088s 00:14:35.075 ************************************ 00:14:35.075 END TEST blockdev_general 00:14:35.075 ************************************ 00:14:35.075 15:08:30 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.075 15:08:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:35.075 15:08:30 -- common/autotest_common.sh@1142 -- # return 0 00:14:35.075 15:08:30 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:35.075 15:08:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:35.075 15:08:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.075 15:08:30 -- common/autotest_common.sh@10 -- # set +x 00:14:35.075 ************************************ 00:14:35.075 START TEST bdev_raid 00:14:35.075 ************************************ 00:14:35.075 15:08:30 bdev_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:35.075 * Looking for test storage... 
00:14:35.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:35.075 15:08:30 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:35.075 15:08:30 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:14:35.075 15:08:30 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:35.075 15:08:30 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:14:35.075 15:08:30 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:14:35.075 15:08:30 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:14:35.075 15:08:30 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:14:35.335 15:08:30 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:14:35.335 15:08:30 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:14:35.335 15:08:30 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:14:35.335 15:08:30 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:14:35.335 15:08:30 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:35.335 15:08:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:35.335 15:08:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.335 15:08:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.335 ************************************ 00:14:35.335 START TEST raid_function_test_raid0 00:14:35.335 ************************************ 00:14:35.335 15:08:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1123 -- # raid_function_test raid0 00:14:35.335 15:08:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:14:35.335 15:08:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:14:35.335 Process raid pid: 87000 00:14:35.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:35.335 15:08:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:14:35.335 15:08:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=87000 00:14:35.335 15:08:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 87000' 00:14:35.335 15:08:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 87000 /var/tmp/spdk-raid.sock 00:14:35.335 15:08:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@829 -- # '[' -z 87000 ']' 00:14:35.335 15:08:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:35.335 15:08:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:35.335 15:08:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.335 15:08:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
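Editorial note: the raid0 function test below drives a bare bdev_svc app over a dedicated RPC socket (/var/tmp/spdk-raid.sock). The configure step writes its RPCs to rpcs.txt and pipes them in, then exports the raid bdev through NBD so ordinary block tools can exercise it. The sketch below shows that setup under stated assumptions: the 32 MiB base size and 64 KiB strip size are not printed in the log and are assumptions consistent with the 131072-block, 512-byte raid reported further down; the socket path and bdev names mirror bdev_raid.sh.

```bash
# Hedged sketch of the raid0 setup exercised below (illustrative only).
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_malloc_create 32 512 -b Base_1          # assumed size: 32 MiB, 512 B blocks
$rpc bdev_malloc_create 32 512 -b Base_2
$rpc bdev_raid_create -z 64 -r raid0 -b "Base_1 Base_2" -n raid   # -z 64 is an assumption

# Confirm the array came online, then expose it as a kernel block device.
$rpc bdev_raid_get_bdevs online | jq -r '.[0]["name"]'
$rpc nbd_start_disk raid /dev/nbd0
```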
00:14:35.335 15:08:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.335 15:08:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:35.335 [2024-07-23 15:08:30.597258] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:14:35.335 [2024-07-23 15:08:30.597681] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.335 [2024-07-23 15:08:30.751428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.593 [2024-07-23 15:08:30.796338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.593 [2024-07-23 15:08:30.841832] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.161 15:08:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:36.161 15:08:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # return 0 00:14:36.161 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:14:36.161 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:14:36.161 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:36.161 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:14:36.161 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:36.420 [2024-07-23 15:08:31.702695] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:36.420 [2024-07-23 15:08:31.705477] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:36.420 [2024-07-23 15:08:31.705667] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:14:36.420 [2024-07-23 15:08:31.705776] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:36.420 Base_1 00:14:36.420 Base_2 00:14:36.420 [2024-07-23 15:08:31.705991] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:14:36.420 [2024-07-23 15:08:31.706383] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:14:36.420 [2024-07-23 15:08:31.706398] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x516000006080 00:14:36.420 [2024-07-23 15:08:31.706565] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.420 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:36.420 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:36.420 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:14:36.679 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:14:36.679 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:14:36.679 15:08:31 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:36.679 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:36.679 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:36.679 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:36.679 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:36.679 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:36.679 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:14:36.679 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:36.679 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.679 15:08:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:36.939 [2024-07-23 15:08:32.146845] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:14:36.939 /dev/nbd0 00:14:36.939 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:36.939 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:36.939 15:08:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:36.939 15:08:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local i 00:14:36.939 15:08:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:36.939 15:08:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:36.939 15:08:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:36.939 15:08:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # break 00:14:36.940 15:08:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:36.940 15:08:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:36.940 15:08:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.940 1+0 records in 00:14:36.940 1+0 records out 00:14:36.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00200836 s, 2.0 MB/s 00:14:36.940 15:08:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.940 15:08:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # size=4096 00:14:36.940 15:08:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.940 15:08:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:36.940 15:08:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # return 0 00:14:36.940 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.940 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.940 15:08:32 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:36.940 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:36.940 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:37.219 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:37.219 { 00:14:37.219 "nbd_device": "/dev/nbd0", 00:14:37.219 "bdev_name": "raid" 00:14:37.219 } 00:14:37.219 ]' 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:37.220 { 00:14:37.220 "nbd_device": "/dev/nbd0", 00:14:37.220 "bdev_name": "raid" 00:14:37.220 } 00:14:37.220 ]' 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:14:37.220 4096+0 records in 00:14:37.220 4096+0 records out 00:14:37.220 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0213393 s, 98.3 MB/s 00:14:37.220 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:37.497 4096+0 records in 00:14:37.497 4096+0 records out 00:14:37.497 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.2874 s, 7.3 MB/s 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:37.497 128+0 records in 00:14:37.497 128+0 records out 00:14:37.497 65536 bytes (66 kB, 64 KiB) copied, 0.000955065 s, 68.6 MB/s 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:37.497 2035+0 records in 00:14:37.497 2035+0 records out 00:14:37.497 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00777228 s, 134 MB/s 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:37.497 456+0 records in 00:14:37.497 456+0 
records out 00:14:37.497 233472 bytes (233 kB, 228 KiB) copied, 0.00181293 s, 129 MB/s 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.497 15:08:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:37.757 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:37.757 [2024-07-23 15:08:33.068090] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.757 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:37.757 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:37.757 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.757 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.757 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:37.757 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:14:37.757 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.757 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:37.757 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:37.757 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:38.016 15:08:33 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 87000 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@948 -- # '[' -z 87000 ']' 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # kill -0 87000 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # uname 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87000 00:14:38.016 killing process with pid 87000 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87000' 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # kill 87000 00:14:38.016 [2024-07-23 15:08:33.397526] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:38.016 15:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # wait 87000 00:14:38.016 [2024-07-23 15:08:33.397638] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.016 [2024-07-23 15:08:33.397702] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.016 [2024-07-23 15:08:33.397723] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name raid, state offline 00:14:38.016 [2024-07-23 15:08:33.422103] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:38.275 15:08:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:14:38.275 00:14:38.275 ************************************ 00:14:38.275 END TEST raid_function_test_raid0 00:14:38.275 ************************************ 00:14:38.275 real 0m3.141s 00:14:38.275 user 0m4.059s 00:14:38.275 sys 0m1.027s 00:14:38.275 15:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:38.275 15:08:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:38.533 15:08:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:38.533 15:08:33 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:14:38.533 15:08:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:38.533 15:08:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.533 15:08:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:38.533 ************************************ 00:14:38.533 START 
TEST raid_function_test_concat 00:14:38.533 ************************************ 00:14:38.533 15:08:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1123 -- # raid_function_test concat 00:14:38.533 15:08:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:14:38.533 15:08:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:14:38.533 15:08:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:14:38.533 Process raid pid: 87134 00:14:38.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:38.533 15:08:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=87134 00:14:38.533 15:08:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 87134' 00:14:38.533 15:08:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 87134 /var/tmp/spdk-raid.sock 00:14:38.533 15:08:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@829 -- # '[' -z 87134 ']' 00:14:38.533 15:08:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:38.534 15:08:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:38.534 15:08:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:38.534 15:08:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:38.534 15:08:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:38.534 15:08:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:38.534 [2024-07-23 15:08:33.790466] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:14:38.534 [2024-07-23 15:08:33.791188] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.534 [2024-07-23 15:08:33.944081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.792 [2024-07-23 15:08:33.988723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.792 [2024-07-23 15:08:34.033543] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.360 15:08:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:39.360 15:08:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # return 0 00:14:39.360 15:08:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:14:39.360 15:08:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:14:39.360 15:08:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:39.360 15:08:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:14:39.360 15:08:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:39.620 [2024-07-23 15:08:34.892750] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:39.620 [2024-07-23 15:08:34.895754] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:39.620 Base_1 00:14:39.620 Base_2 00:14:39.620 [2024-07-23 15:08:34.896034] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:14:39.620 [2024-07-23 15:08:34.896065] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:39.620 [2024-07-23 15:08:34.896215] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:14:39.620 [2024-07-23 15:08:34.896673] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:14:39.620 [2024-07-23 15:08:34.896690] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x516000006080 00:14:39.620 [2024-07-23 15:08:34.896941] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.620 15:08:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:39.620 15:08:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:39.620 15:08:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:14:39.879 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:14:39.879 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:14:39.879 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:39.879 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:39.879 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:39.879 
15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:39.879 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:39.879 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:39.879 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:14:39.879 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:39.879 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:39.879 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:40.138 [2024-07-23 15:08:35.333077] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:14:40.138 /dev/nbd0 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local i 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # break 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:40.138 1+0 records in 00:14:40.138 1+0 records out 00:14:40.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303129 s, 13.5 MB/s 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # size=4096 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # return 0 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:40.138 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:40.398 { 00:14:40.398 "nbd_device": "/dev/nbd0", 00:14:40.398 "bdev_name": "raid" 00:14:40.398 } 00:14:40.398 ]' 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:40.398 { 00:14:40.398 "nbd_device": "/dev/nbd0", 00:14:40.398 "bdev_name": "raid" 00:14:40.398 } 00:14:40.398 ]' 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:14:40.398 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:14:40.398 4096+0 records in 00:14:40.398 4096+0 records out 00:14:40.398 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0195731 s, 107 MB/s 00:14:40.398 15:08:35 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:40.658 4096+0 records in 00:14:40.658 4096+0 records out 00:14:40.658 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.248563 s, 8.4 MB/s 00:14:40.658 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:14:40.658 15:08:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:40.658 128+0 records in 00:14:40.658 128+0 records out 00:14:40.658 65536 bytes (66 kB, 64 KiB) copied, 0.000931018 s, 70.4 MB/s 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:40.658 2035+0 records in 00:14:40.658 2035+0 records out 00:14:40.658 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00643644 s, 162 MB/s 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:40.658 456+0 records in 00:14:40.658 456+0 records out 00:14:40.658 233472 bytes (233 kB, 228 KiB) copied, 0.00242413 s, 96.3 MB/s 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 
00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:40.658 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:40.917 [2024-07-23 15:08:36.342938] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.917 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:41.176 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:41.176 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:41.176 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.176 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.176 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:41.176 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:14:41.176 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.176 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:41.176 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:41.176 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:14:41.436 15:08:36 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 87134 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@948 -- # '[' -z 87134 ']' 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # kill -0 87134 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # uname 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87134 00:14:41.436 killing process with pid 87134 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87134' 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # kill 87134 00:14:41.436 [2024-07-23 15:08:36.691039] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.436 15:08:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # wait 87134 00:14:41.436 [2024-07-23 15:08:36.691147] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.436 [2024-07-23 15:08:36.691207] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.436 [2024-07-23 15:08:36.691223] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name raid, state offline 00:14:41.436 [2024-07-23 15:08:36.714970] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:41.696 15:08:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:14:41.696 00:14:41.696 real 0m3.243s 00:14:41.696 user 0m4.185s 00:14:41.696 sys 0m1.133s 00:14:41.696 15:08:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:41.696 ************************************ 00:14:41.696 END TEST raid_function_test_concat 00:14:41.696 15:08:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:41.696 ************************************ 00:14:41.696 15:08:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:41.696 15:08:37 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:14:41.696 15:08:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:41.696 15:08:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.696 15:08:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:41.696 ************************************ 00:14:41.696 START TEST raid0_resize_test 00:14:41.696 ************************************ 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:14:41.696 
15:08:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:14:41.696 Process raid pid: 87268 00:14:41.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=87268 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 87268' 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 87268 /var/tmp/spdk-raid.sock 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 87268 ']' 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.696 15:08:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.696 [2024-07-23 15:08:37.095304] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:14:41.696 [2024-07-23 15:08:37.095688] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.955 [2024-07-23 15:08:37.250342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.955 [2024-07-23 15:08:37.295886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.955 [2024-07-23 15:08:37.340477] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.893 15:08:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:42.893 15:08:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:14:42.893 15:08:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:42.893 Base_1 00:14:42.893 15:08:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:43.152 Base_2 00:14:43.152 15:08:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:43.411 [2024-07-23 15:08:38.646132] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:43.411 [2024-07-23 15:08:38.648465] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:43.411 [2024-07-23 15:08:38.648538] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:14:43.411 [2024-07-23 15:08:38.648552] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:43.411 [2024-07-23 15:08:38.648695] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001de0 00:14:43.411 [2024-07-23 15:08:38.649028] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:14:43.411 [2024-07-23 15:08:38.649045] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x516000006080 00:14:43.411 [2024-07-23 15:08:38.649205] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.411 15:08:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:43.411 [2024-07-23 15:08:38.822174] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:43.411 [2024-07-23 15:08:38.822403] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:43.411 true 00:14:43.670 15:08:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:43.670 15:08:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:14:43.670 [2024-07-23 15:08:39.054401] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.670 15:08:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:14:43.670 15:08:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:14:43.670 15:08:39 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:14:43.670 15:08:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:43.928 [2024-07-23 15:08:39.234229] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:43.928 [2024-07-23 15:08:39.234268] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:43.928 [2024-07-23 15:08:39.234307] bdev_raid.c:2315:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:14:43.928 true 00:14:43.928 15:08:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:43.928 15:08:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:14:44.186 [2024-07-23 15:08:39.422456] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.186 15:08:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:14:44.186 15:08:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:14:44.186 15:08:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:14:44.186 15:08:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 87268 00:14:44.186 15:08:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 87268 ']' 00:14:44.186 15:08:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 87268 00:14:44.186 15:08:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:14:44.186 15:08:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.186 15:08:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87268 00:14:44.186 killing process with pid 87268 00:14:44.186 15:08:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:44.186 15:08:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:44.186 15:08:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87268' 00:14:44.187 15:08:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 87268 00:14:44.187 [2024-07-23 15:08:39.481320] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:44.187 15:08:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 87268 00:14:44.187 [2024-07-23 15:08:39.481452] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.187 [2024-07-23 15:08:39.481511] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.187 [2024-07-23 15:08:39.481522] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Raid, state offline 00:14:44.187 [2024-07-23 15:08:39.482037] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:44.445 15:08:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:14:44.445 00:14:44.445 real 0m2.694s 00:14:44.445 user 0m3.980s 00:14:44.445 sys 0m0.541s 00:14:44.445 15:08:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:44.445 
15:08:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.445 ************************************ 00:14:44.445 END TEST raid0_resize_test 00:14:44.445 ************************************ 00:14:44.445 15:08:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:44.445 15:08:39 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:14:44.445 15:08:39 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:44.445 15:08:39 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:44.445 15:08:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:44.445 15:08:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.445 15:08:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:44.445 ************************************ 00:14:44.445 START TEST raid_state_function_test 00:14:44.445 ************************************ 00:14:44.445 15:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:14:44.445 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:44.445 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:44.445 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:44.445 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:44.445 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:44.445 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:44.446 Process raid pid: 87337 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' 
false = true ']' 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=87337 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 87337' 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 87337 /var/tmp/spdk-raid.sock 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 87337 ']' 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:44.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.446 15:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.446 [2024-07-23 15:08:39.860108] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:14:44.446 [2024-07-23 15:08:39.860519] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.704 [2024-07-23 15:08:40.015918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.704 [2024-07-23 15:08:40.063938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.704 [2024-07-23 15:08:40.109912] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.642 15:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.642 15:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:45.642 15:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:45.642 [2024-07-23 15:08:41.011511] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.642 [2024-07-23 15:08:41.011581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.642 [2024-07-23 15:08:41.011594] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.642 [2024-07-23 15:08:41.011607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.642 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:45.642 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:45.642 15:08:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:45.642 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:45.642 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:45.642 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:45.642 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.642 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.642 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.642 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.642 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.642 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.928 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:45.928 "name": "Existed_Raid", 00:14:45.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.928 "strip_size_kb": 64, 00:14:45.928 "state": "configuring", 00:14:45.928 "raid_level": "raid0", 00:14:45.928 "superblock": false, 00:14:45.928 "num_base_bdevs": 2, 00:14:45.928 "num_base_bdevs_discovered": 0, 00:14:45.928 "num_base_bdevs_operational": 2, 00:14:45.928 "base_bdevs_list": [ 00:14:45.928 { 00:14:45.928 "name": "BaseBdev1", 00:14:45.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.928 "is_configured": false, 00:14:45.928 "data_offset": 0, 00:14:45.928 "data_size": 0 00:14:45.928 }, 00:14:45.928 { 00:14:45.928 "name": "BaseBdev2", 00:14:45.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.928 "is_configured": false, 00:14:45.928 "data_offset": 0, 00:14:45.928 "data_size": 0 00:14:45.928 } 00:14:45.928 ] 00:14:45.928 }' 00:14:45.928 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:45.928 15:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.187 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:46.446 [2024-07-23 15:08:41.747542] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:46.446 [2024-07-23 15:08:41.747762] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:14:46.446 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:46.705 [2024-07-23 15:08:41.923619] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.705 [2024-07-23 15:08:41.923906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.705 [2024-07-23 15:08:41.923929] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:46.705 [2024-07-23 15:08:41.923945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:14:46.705 15:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:46.705 [2024-07-23 15:08:42.109176] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.705 BaseBdev1 00:14:46.705 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:46.705 15:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:46.705 15:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:46.705 15:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:46.705 15:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:46.705 15:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:46.705 15:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.964 15:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:47.224 [ 00:14:47.224 { 00:14:47.224 "name": "BaseBdev1", 00:14:47.224 "aliases": [ 00:14:47.224 "76bc83f8-8b38-44dd-a13c-6eb4251101cd" 00:14:47.224 ], 00:14:47.224 "product_name": "Malloc disk", 00:14:47.224 "block_size": 512, 00:14:47.224 "num_blocks": 65536, 00:14:47.224 "uuid": "76bc83f8-8b38-44dd-a13c-6eb4251101cd", 00:14:47.224 "assigned_rate_limits": { 00:14:47.224 "rw_ios_per_sec": 0, 00:14:47.224 "rw_mbytes_per_sec": 0, 00:14:47.224 "r_mbytes_per_sec": 0, 00:14:47.224 "w_mbytes_per_sec": 0 00:14:47.224 }, 00:14:47.224 "claimed": true, 00:14:47.224 "claim_type": "exclusive_write", 00:14:47.224 "zoned": false, 00:14:47.224 "supported_io_types": { 00:14:47.224 "read": true, 00:14:47.224 "write": true, 00:14:47.224 "unmap": true, 00:14:47.224 "flush": true, 00:14:47.224 "reset": true, 00:14:47.224 "nvme_admin": false, 00:14:47.224 "nvme_io": false, 00:14:47.224 "nvme_io_md": false, 00:14:47.224 "write_zeroes": true, 00:14:47.224 "zcopy": true, 00:14:47.224 "get_zone_info": false, 00:14:47.224 "zone_management": false, 00:14:47.224 "zone_append": false, 00:14:47.224 "compare": false, 00:14:47.224 "compare_and_write": false, 00:14:47.224 "abort": true, 00:14:47.224 "seek_hole": false, 00:14:47.224 "seek_data": false, 00:14:47.224 "copy": true, 00:14:47.224 "nvme_iov_md": false 00:14:47.224 }, 00:14:47.224 "memory_domains": [ 00:14:47.224 { 00:14:47.224 "dma_device_id": "system", 00:14:47.224 "dma_device_type": 1 00:14:47.224 }, 00:14:47.224 { 00:14:47.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.224 "dma_device_type": 2 00:14:47.224 } 00:14:47.224 ], 00:14:47.224 "driver_specific": {} 00:14:47.224 } 00:14:47.224 ] 00:14:47.224 15:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:47.224 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:47.224 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:47.224 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:14:47.224 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:47.224 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:47.224 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:47.224 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.224 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.224 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.224 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.224 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.224 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.483 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:47.483 "name": "Existed_Raid", 00:14:47.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.483 "strip_size_kb": 64, 00:14:47.483 "state": "configuring", 00:14:47.483 "raid_level": "raid0", 00:14:47.483 "superblock": false, 00:14:47.483 "num_base_bdevs": 2, 00:14:47.483 "num_base_bdevs_discovered": 1, 00:14:47.483 "num_base_bdevs_operational": 2, 00:14:47.483 "base_bdevs_list": [ 00:14:47.483 { 00:14:47.483 "name": "BaseBdev1", 00:14:47.483 "uuid": "76bc83f8-8b38-44dd-a13c-6eb4251101cd", 00:14:47.483 "is_configured": true, 00:14:47.483 "data_offset": 0, 00:14:47.483 "data_size": 65536 00:14:47.483 }, 00:14:47.483 { 00:14:47.483 "name": "BaseBdev2", 00:14:47.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.483 "is_configured": false, 00:14:47.483 "data_offset": 0, 00:14:47.483 "data_size": 0 00:14:47.483 } 00:14:47.483 ] 00:14:47.483 }' 00:14:47.483 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:47.483 15:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.741 15:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:48.000 [2024-07-23 15:08:43.213502] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.000 [2024-07-23 15:08:43.213756] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:48.000 [2024-07-23 15:08:43.393601] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.000 [2024-07-23 15:08:43.395930] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.000 [2024-07-23 15:08:43.396085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.000 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.259 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:48.259 "name": "Existed_Raid", 00:14:48.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.259 "strip_size_kb": 64, 00:14:48.259 "state": "configuring", 00:14:48.259 "raid_level": "raid0", 00:14:48.259 "superblock": false, 00:14:48.259 "num_base_bdevs": 2, 00:14:48.259 "num_base_bdevs_discovered": 1, 00:14:48.259 "num_base_bdevs_operational": 2, 00:14:48.259 "base_bdevs_list": [ 00:14:48.259 { 00:14:48.259 "name": "BaseBdev1", 00:14:48.259 "uuid": "76bc83f8-8b38-44dd-a13c-6eb4251101cd", 00:14:48.259 "is_configured": true, 00:14:48.259 "data_offset": 0, 00:14:48.259 "data_size": 65536 00:14:48.259 }, 00:14:48.259 { 00:14:48.259 "name": "BaseBdev2", 00:14:48.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.259 "is_configured": false, 00:14:48.259 "data_offset": 0, 00:14:48.259 "data_size": 0 00:14:48.259 } 00:14:48.259 ] 00:14:48.259 }' 00:14:48.259 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:48.259 15:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.518 15:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.776 [2024-07-23 15:08:44.108383] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.776 [2024-07-23 15:08:44.108668] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:14:48.776 [2024-07-23 15:08:44.108719] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:48.776 [2024-07-23 15:08:44.108969] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:14:48.776 [2024-07-23 15:08:44.109400] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:14:48.776 
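For reference, each verify_raid_bdev_state call in this trace reduces to one RPC query plus a jq filter — a minimal sketch, assuming the test app is listening on /var/tmp/spdk-raid.sock as shown above:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")'
  # The helper then checks fields such as .state, .num_base_bdevs_discovered and
  # .num_base_bdevs_operational against the arguments it was given (configuring/0/2,
  # then configuring/1/2, then online/2/2); xtrace is disabled for that part, so the
  # comparisons themselves do not appear in the log.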
[2024-07-23 15:08:44.109524] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:14:48.776 BaseBdev2 00:14:48.776 [2024-07-23 15:08:44.109840] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.776 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:48.776 15:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:48.776 15:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:48.776 15:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:48.776 15:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:48.776 15:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:48.776 15:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.035 15:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:49.293 [ 00:14:49.293 { 00:14:49.293 "name": "BaseBdev2", 00:14:49.293 "aliases": [ 00:14:49.293 "80bfa6f3-5060-4848-971c-146dfa389ab0" 00:14:49.293 ], 00:14:49.293 "product_name": "Malloc disk", 00:14:49.293 "block_size": 512, 00:14:49.293 "num_blocks": 65536, 00:14:49.293 "uuid": "80bfa6f3-5060-4848-971c-146dfa389ab0", 00:14:49.293 "assigned_rate_limits": { 00:14:49.293 "rw_ios_per_sec": 0, 00:14:49.293 "rw_mbytes_per_sec": 0, 00:14:49.293 "r_mbytes_per_sec": 0, 00:14:49.293 "w_mbytes_per_sec": 0 00:14:49.293 }, 00:14:49.293 "claimed": true, 00:14:49.293 "claim_type": "exclusive_write", 00:14:49.293 "zoned": false, 00:14:49.293 "supported_io_types": { 00:14:49.293 "read": true, 00:14:49.293 "write": true, 00:14:49.293 "unmap": true, 00:14:49.293 "flush": true, 00:14:49.293 "reset": true, 00:14:49.293 "nvme_admin": false, 00:14:49.293 "nvme_io": false, 00:14:49.293 "nvme_io_md": false, 00:14:49.293 "write_zeroes": true, 00:14:49.293 "zcopy": true, 00:14:49.293 "get_zone_info": false, 00:14:49.293 "zone_management": false, 00:14:49.293 "zone_append": false, 00:14:49.293 "compare": false, 00:14:49.293 "compare_and_write": false, 00:14:49.293 "abort": true, 00:14:49.293 "seek_hole": false, 00:14:49.293 "seek_data": false, 00:14:49.293 "copy": true, 00:14:49.293 "nvme_iov_md": false 00:14:49.293 }, 00:14:49.293 "memory_domains": [ 00:14:49.293 { 00:14:49.293 "dma_device_id": "system", 00:14:49.293 "dma_device_type": 1 00:14:49.293 }, 00:14:49.293 { 00:14:49.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.293 "dma_device_type": 2 00:14:49.293 } 00:14:49.293 ], 00:14:49.293 "driver_specific": {} 00:14:49.293 } 00:14:49.293 ] 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.294 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.552 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:49.552 "name": "Existed_Raid", 00:14:49.552 "uuid": "397320a5-9d86-4669-a525-49aedf11df0b", 00:14:49.552 "strip_size_kb": 64, 00:14:49.552 "state": "online", 00:14:49.552 "raid_level": "raid0", 00:14:49.552 "superblock": false, 00:14:49.552 "num_base_bdevs": 2, 00:14:49.552 "num_base_bdevs_discovered": 2, 00:14:49.552 "num_base_bdevs_operational": 2, 00:14:49.552 "base_bdevs_list": [ 00:14:49.552 { 00:14:49.552 "name": "BaseBdev1", 00:14:49.552 "uuid": "76bc83f8-8b38-44dd-a13c-6eb4251101cd", 00:14:49.552 "is_configured": true, 00:14:49.552 "data_offset": 0, 00:14:49.552 "data_size": 65536 00:14:49.552 }, 00:14:49.552 { 00:14:49.552 "name": "BaseBdev2", 00:14:49.552 "uuid": "80bfa6f3-5060-4848-971c-146dfa389ab0", 00:14:49.552 "is_configured": true, 00:14:49.552 "data_offset": 0, 00:14:49.552 "data_size": 65536 00:14:49.552 } 00:14:49.552 ] 00:14:49.552 }' 00:14:49.552 15:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:49.552 15:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.811 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:49.811 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:49.811 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:49.811 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:49.811 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:49.811 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:49.811 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:49.811 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:50.070 [2024-07-23 15:08:45.277022] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.070 15:08:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:50.070 "name": "Existed_Raid", 00:14:50.070 "aliases": [ 00:14:50.070 "397320a5-9d86-4669-a525-49aedf11df0b" 00:14:50.070 ], 00:14:50.070 "product_name": "Raid Volume", 00:14:50.070 "block_size": 512, 00:14:50.070 "num_blocks": 131072, 00:14:50.070 "uuid": "397320a5-9d86-4669-a525-49aedf11df0b", 00:14:50.070 "assigned_rate_limits": { 00:14:50.070 "rw_ios_per_sec": 0, 00:14:50.070 "rw_mbytes_per_sec": 0, 00:14:50.070 "r_mbytes_per_sec": 0, 00:14:50.070 "w_mbytes_per_sec": 0 00:14:50.070 }, 00:14:50.070 "claimed": false, 00:14:50.070 "zoned": false, 00:14:50.070 "supported_io_types": { 00:14:50.070 "read": true, 00:14:50.070 "write": true, 00:14:50.070 "unmap": true, 00:14:50.070 "flush": true, 00:14:50.070 "reset": true, 00:14:50.070 "nvme_admin": false, 00:14:50.070 "nvme_io": false, 00:14:50.070 "nvme_io_md": false, 00:14:50.070 "write_zeroes": true, 00:14:50.070 "zcopy": false, 00:14:50.070 "get_zone_info": false, 00:14:50.070 "zone_management": false, 00:14:50.070 "zone_append": false, 00:14:50.070 "compare": false, 00:14:50.070 "compare_and_write": false, 00:14:50.070 "abort": false, 00:14:50.070 "seek_hole": false, 00:14:50.070 "seek_data": false, 00:14:50.070 "copy": false, 00:14:50.070 "nvme_iov_md": false 00:14:50.070 }, 00:14:50.070 "memory_domains": [ 00:14:50.070 { 00:14:50.070 "dma_device_id": "system", 00:14:50.070 "dma_device_type": 1 00:14:50.070 }, 00:14:50.070 { 00:14:50.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.070 "dma_device_type": 2 00:14:50.070 }, 00:14:50.070 { 00:14:50.070 "dma_device_id": "system", 00:14:50.070 "dma_device_type": 1 00:14:50.070 }, 00:14:50.070 { 00:14:50.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.070 "dma_device_type": 2 00:14:50.070 } 00:14:50.070 ], 00:14:50.070 "driver_specific": { 00:14:50.070 "raid": { 00:14:50.070 "uuid": "397320a5-9d86-4669-a525-49aedf11df0b", 00:14:50.070 "strip_size_kb": 64, 00:14:50.070 "state": "online", 00:14:50.070 "raid_level": "raid0", 00:14:50.070 "superblock": false, 00:14:50.070 "num_base_bdevs": 2, 00:14:50.070 "num_base_bdevs_discovered": 2, 00:14:50.070 "num_base_bdevs_operational": 2, 00:14:50.070 "base_bdevs_list": [ 00:14:50.070 { 00:14:50.070 "name": "BaseBdev1", 00:14:50.070 "uuid": "76bc83f8-8b38-44dd-a13c-6eb4251101cd", 00:14:50.070 "is_configured": true, 00:14:50.070 "data_offset": 0, 00:14:50.070 "data_size": 65536 00:14:50.070 }, 00:14:50.070 { 00:14:50.070 "name": "BaseBdev2", 00:14:50.070 "uuid": "80bfa6f3-5060-4848-971c-146dfa389ab0", 00:14:50.070 "is_configured": true, 00:14:50.070 "data_offset": 0, 00:14:50.070 "data_size": 65536 00:14:50.070 } 00:14:50.070 ] 00:14:50.070 } 00:14:50.070 } 00:14:50.070 }' 00:14:50.070 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.070 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:50.070 BaseBdev2' 00:14:50.070 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:50.070 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:50.070 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:50.330 
"name": "BaseBdev1", 00:14:50.330 "aliases": [ 00:14:50.330 "76bc83f8-8b38-44dd-a13c-6eb4251101cd" 00:14:50.330 ], 00:14:50.330 "product_name": "Malloc disk", 00:14:50.330 "block_size": 512, 00:14:50.330 "num_blocks": 65536, 00:14:50.330 "uuid": "76bc83f8-8b38-44dd-a13c-6eb4251101cd", 00:14:50.330 "assigned_rate_limits": { 00:14:50.330 "rw_ios_per_sec": 0, 00:14:50.330 "rw_mbytes_per_sec": 0, 00:14:50.330 "r_mbytes_per_sec": 0, 00:14:50.330 "w_mbytes_per_sec": 0 00:14:50.330 }, 00:14:50.330 "claimed": true, 00:14:50.330 "claim_type": "exclusive_write", 00:14:50.330 "zoned": false, 00:14:50.330 "supported_io_types": { 00:14:50.330 "read": true, 00:14:50.330 "write": true, 00:14:50.330 "unmap": true, 00:14:50.330 "flush": true, 00:14:50.330 "reset": true, 00:14:50.330 "nvme_admin": false, 00:14:50.330 "nvme_io": false, 00:14:50.330 "nvme_io_md": false, 00:14:50.330 "write_zeroes": true, 00:14:50.330 "zcopy": true, 00:14:50.330 "get_zone_info": false, 00:14:50.330 "zone_management": false, 00:14:50.330 "zone_append": false, 00:14:50.330 "compare": false, 00:14:50.330 "compare_and_write": false, 00:14:50.330 "abort": true, 00:14:50.330 "seek_hole": false, 00:14:50.330 "seek_data": false, 00:14:50.330 "copy": true, 00:14:50.330 "nvme_iov_md": false 00:14:50.330 }, 00:14:50.330 "memory_domains": [ 00:14:50.330 { 00:14:50.330 "dma_device_id": "system", 00:14:50.330 "dma_device_type": 1 00:14:50.330 }, 00:14:50.330 { 00:14:50.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.330 "dma_device_type": 2 00:14:50.330 } 00:14:50.330 ], 00:14:50.330 "driver_specific": {} 00:14:50.330 }' 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:50.330 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:50.589 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:50.589 "name": "BaseBdev2", 00:14:50.589 "aliases": [ 00:14:50.589 "80bfa6f3-5060-4848-971c-146dfa389ab0" 00:14:50.589 ], 00:14:50.589 "product_name": "Malloc disk", 00:14:50.589 "block_size": 512, 
00:14:50.589 "num_blocks": 65536, 00:14:50.589 "uuid": "80bfa6f3-5060-4848-971c-146dfa389ab0", 00:14:50.589 "assigned_rate_limits": { 00:14:50.589 "rw_ios_per_sec": 0, 00:14:50.589 "rw_mbytes_per_sec": 0, 00:14:50.589 "r_mbytes_per_sec": 0, 00:14:50.589 "w_mbytes_per_sec": 0 00:14:50.589 }, 00:14:50.589 "claimed": true, 00:14:50.589 "claim_type": "exclusive_write", 00:14:50.589 "zoned": false, 00:14:50.589 "supported_io_types": { 00:14:50.589 "read": true, 00:14:50.589 "write": true, 00:14:50.589 "unmap": true, 00:14:50.589 "flush": true, 00:14:50.589 "reset": true, 00:14:50.589 "nvme_admin": false, 00:14:50.589 "nvme_io": false, 00:14:50.589 "nvme_io_md": false, 00:14:50.589 "write_zeroes": true, 00:14:50.589 "zcopy": true, 00:14:50.589 "get_zone_info": false, 00:14:50.589 "zone_management": false, 00:14:50.589 "zone_append": false, 00:14:50.589 "compare": false, 00:14:50.589 "compare_and_write": false, 00:14:50.589 "abort": true, 00:14:50.589 "seek_hole": false, 00:14:50.589 "seek_data": false, 00:14:50.589 "copy": true, 00:14:50.589 "nvme_iov_md": false 00:14:50.589 }, 00:14:50.589 "memory_domains": [ 00:14:50.589 { 00:14:50.589 "dma_device_id": "system", 00:14:50.589 "dma_device_type": 1 00:14:50.589 }, 00:14:50.589 { 00:14:50.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.589 "dma_device_type": 2 00:14:50.589 } 00:14:50.589 ], 00:14:50.589 "driver_specific": {} 00:14:50.589 }' 00:14:50.589 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.589 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.589 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:50.589 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.589 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.589 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:50.589 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.589 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.589 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:50.589 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.589 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.589 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:50.589 15:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:50.848 [2024-07-23 15:08:46.097020] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.848 [2024-07-23 15:08:46.097065] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.848 [2024-07-23 15:08:46.097120] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:50.848 15:08:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.848 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.107 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.107 "name": "Existed_Raid", 00:14:51.107 "uuid": "397320a5-9d86-4669-a525-49aedf11df0b", 00:14:51.107 "strip_size_kb": 64, 00:14:51.107 "state": "offline", 00:14:51.107 "raid_level": "raid0", 00:14:51.107 "superblock": false, 00:14:51.107 "num_base_bdevs": 2, 00:14:51.107 "num_base_bdevs_discovered": 1, 00:14:51.107 "num_base_bdevs_operational": 1, 00:14:51.107 "base_bdevs_list": [ 00:14:51.107 { 00:14:51.107 "name": null, 00:14:51.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.107 "is_configured": false, 00:14:51.107 "data_offset": 0, 00:14:51.107 "data_size": 65536 00:14:51.107 }, 00:14:51.107 { 00:14:51.107 "name": "BaseBdev2", 00:14:51.107 "uuid": "80bfa6f3-5060-4848-971c-146dfa389ab0", 00:14:51.107 "is_configured": true, 00:14:51.107 "data_offset": 0, 00:14:51.107 "data_size": 65536 00:14:51.107 } 00:14:51.107 ] 00:14:51.107 }' 00:14:51.107 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.107 15:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.365 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:51.365 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:51.365 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.365 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:51.623 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:51.623 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:14:51.623 15:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:51.882 [2024-07-23 15:08:47.109723] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.882 [2024-07-23 15:08:47.109814] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:14:51.882 15:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:51.882 15:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:51.882 15:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:51.882 15:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.140 15:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:52.140 15:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:52.140 15:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:52.140 15:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 87337 00:14:52.140 15:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 87337 ']' 00:14:52.140 15:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 87337 00:14:52.140 15:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:14:52.140 15:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:52.140 15:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87337 00:14:52.140 killing process with pid 87337 00:14:52.140 15:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:52.140 15:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:52.140 15:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87337' 00:14:52.140 15:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 87337 00:14:52.140 15:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 87337 00:14:52.140 [2024-07-23 15:08:47.442810] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.140 [2024-07-23 15:08:47.442886] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:52.398 00:14:52.398 real 0m7.905s 00:14:52.398 user 0m13.270s 00:14:52.398 sys 0m1.690s 00:14:52.398 ************************************ 00:14:52.398 END TEST raid_state_function_test 00:14:52.398 ************************************ 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.398 15:08:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:52.398 15:08:47 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 
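With the non-superblock test finished, the flow it exercised can be roughly condensed into the following rpc.py calls (a sketch assembled from the trace above; the intermediate bdev_raid_delete/re-create steps, bdev_wait_for_examine calls and JSON output are omitted):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid  # no base bdevs exist yet: state stays "configuring"
  $RPC bdev_malloc_create 32 512 -b BaseBdev1                                    # 1 of 2 base bdevs discovered, still "configuring"
  $RPC bdev_malloc_create 32 512 -b BaseBdev2                                    # 2 of 2 discovered, raid transitions to "online"
  $RPC bdev_malloc_delete BaseBdev1                                              # raid0 has no redundancy, so the raid drops to "offline"

The same raid_state_function_test function is now rerun with superblock support enabled, which is the _sb variant that starts here.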
00:14:52.398 15:08:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:52.398 15:08:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.398 15:08:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:52.398 ************************************ 00:14:52.398 START TEST raid_state_function_test_sb 00:14:52.398 ************************************ 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:52.398 Process raid pid: 87661 00:14:52.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
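The only functional difference in this superblock run is that superblock_create_arg is set to -s, so every bdev_raid_create below takes the form:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

With -s, the start of each base bdev is reserved for the on-disk raid superblock, which is why the bdev dumps later in this trace report data_offset 2048 and data_size 63488 (rather than 0 and 65536) and a raid volume of 126976 blocks instead of 131072.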
00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=87661 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 87661' 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 87661 /var/tmp/spdk-raid.sock 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 87661 ']' 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:52.398 15:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.398 [2024-07-23 15:08:47.823897] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:14:52.398 [2024-07-23 15:08:47.825258] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.656 [2024-07-23 15:08:47.979472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.656 [2024-07-23 15:08:48.023008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.656 [2024-07-23 15:08:48.067843] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.592 15:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.592 15:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:14:53.592 15:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:53.592 [2024-07-23 15:08:48.841997] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:53.593 [2024-07-23 15:08:48.842062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:53.593 [2024-07-23 15:08:48.842075] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:53.593 [2024-07-23 15:08:48.842089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:53.593 15:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:53.593 15:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:53.593 15:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:53.593 15:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:14:53.593 15:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:53.593 15:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:53.593 15:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:53.593 15:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:53.593 15:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:53.593 15:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:53.593 15:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.593 15:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.851 15:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:53.851 "name": "Existed_Raid", 00:14:53.851 "uuid": "fef39235-78b2-4f6c-a1b2-8631fac3675b", 00:14:53.851 "strip_size_kb": 64, 00:14:53.851 "state": "configuring", 00:14:53.851 "raid_level": "raid0", 00:14:53.851 "superblock": true, 00:14:53.851 "num_base_bdevs": 2, 00:14:53.851 "num_base_bdevs_discovered": 0, 00:14:53.851 "num_base_bdevs_operational": 2, 00:14:53.851 "base_bdevs_list": [ 00:14:53.851 { 00:14:53.851 "name": "BaseBdev1", 00:14:53.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.851 "is_configured": false, 00:14:53.851 "data_offset": 0, 00:14:53.851 "data_size": 0 00:14:53.851 }, 00:14:53.851 { 00:14:53.851 "name": "BaseBdev2", 00:14:53.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.851 "is_configured": false, 00:14:53.851 "data_offset": 0, 00:14:53.851 "data_size": 0 00:14:53.851 } 00:14:53.851 ] 00:14:53.851 }' 00:14:53.851 15:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:53.851 15:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.110 15:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:54.368 [2024-07-23 15:08:49.638024] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:54.368 [2024-07-23 15:08:49.638225] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:14:54.368 15:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:54.627 [2024-07-23 15:08:49.818120] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.627 [2024-07-23 15:08:49.818187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.627 [2024-07-23 15:08:49.818198] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.627 [2024-07-23 15:08:49.818212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.627 15:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:54.885 [2024-07-23 15:08:50.059847] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.885 BaseBdev1 00:14:54.885 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:54.885 15:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:54.886 15:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:54.886 15:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:54.886 15:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:54.886 15:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:54.886 15:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:54.886 15:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.145 [ 00:14:55.145 { 00:14:55.145 "name": "BaseBdev1", 00:14:55.145 "aliases": [ 00:14:55.145 "a9fe0202-8ce6-4c27-be00-9ced4d01a753" 00:14:55.145 ], 00:14:55.145 "product_name": "Malloc disk", 00:14:55.145 "block_size": 512, 00:14:55.145 "num_blocks": 65536, 00:14:55.145 "uuid": "a9fe0202-8ce6-4c27-be00-9ced4d01a753", 00:14:55.145 "assigned_rate_limits": { 00:14:55.145 "rw_ios_per_sec": 0, 00:14:55.145 "rw_mbytes_per_sec": 0, 00:14:55.145 "r_mbytes_per_sec": 0, 00:14:55.145 "w_mbytes_per_sec": 0 00:14:55.145 }, 00:14:55.145 "claimed": true, 00:14:55.145 "claim_type": "exclusive_write", 00:14:55.145 "zoned": false, 00:14:55.145 "supported_io_types": { 00:14:55.145 "read": true, 00:14:55.145 "write": true, 00:14:55.145 "unmap": true, 00:14:55.145 "flush": true, 00:14:55.145 "reset": true, 00:14:55.145 "nvme_admin": false, 00:14:55.145 "nvme_io": false, 00:14:55.145 "nvme_io_md": false, 00:14:55.145 "write_zeroes": true, 00:14:55.145 "zcopy": true, 00:14:55.145 "get_zone_info": false, 00:14:55.145 "zone_management": false, 00:14:55.145 "zone_append": false, 00:14:55.145 "compare": false, 00:14:55.145 "compare_and_write": false, 00:14:55.145 "abort": true, 00:14:55.145 "seek_hole": false, 00:14:55.145 "seek_data": false, 00:14:55.145 "copy": true, 00:14:55.145 "nvme_iov_md": false 00:14:55.145 }, 00:14:55.145 "memory_domains": [ 00:14:55.145 { 00:14:55.145 "dma_device_id": "system", 00:14:55.145 "dma_device_type": 1 00:14:55.145 }, 00:14:55.145 { 00:14:55.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.145 "dma_device_type": 2 00:14:55.145 } 00:14:55.145 ], 00:14:55.145 "driver_specific": {} 00:14:55.145 } 00:14:55.145 ] 00:14:55.145 15:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:55.145 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:55.145 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:55.145 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:55.145 15:08:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:55.145 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:55.145 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:55.145 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.145 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.145 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.145 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.145 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.145 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.404 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.404 "name": "Existed_Raid", 00:14:55.404 "uuid": "43589a8a-ddbe-48e5-af9f-08c7e4a97322", 00:14:55.404 "strip_size_kb": 64, 00:14:55.404 "state": "configuring", 00:14:55.404 "raid_level": "raid0", 00:14:55.404 "superblock": true, 00:14:55.404 "num_base_bdevs": 2, 00:14:55.404 "num_base_bdevs_discovered": 1, 00:14:55.404 "num_base_bdevs_operational": 2, 00:14:55.404 "base_bdevs_list": [ 00:14:55.404 { 00:14:55.404 "name": "BaseBdev1", 00:14:55.404 "uuid": "a9fe0202-8ce6-4c27-be00-9ced4d01a753", 00:14:55.404 "is_configured": true, 00:14:55.404 "data_offset": 2048, 00:14:55.404 "data_size": 63488 00:14:55.404 }, 00:14:55.404 { 00:14:55.404 "name": "BaseBdev2", 00:14:55.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.404 "is_configured": false, 00:14:55.404 "data_offset": 0, 00:14:55.404 "data_size": 0 00:14:55.404 } 00:14:55.404 ] 00:14:55.404 }' 00:14:55.404 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.404 15:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.662 15:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:55.662 [2024-07-23 15:08:51.084149] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.662 [2024-07-23 15:08:51.084404] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:14:55.921 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:55.921 [2024-07-23 15:08:51.252270] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.921 [2024-07-23 15:08:51.254706] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.921 [2024-07-23 15:08:51.254883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.921 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:55.921 15:08:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:55.921 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:55.921 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:55.921 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:55.921 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:55.921 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:55.921 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:55.921 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.921 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.921 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.921 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.922 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.922 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.180 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:56.180 "name": "Existed_Raid", 00:14:56.180 "uuid": "79282a9b-3a44-4153-b7ff-2d856d98e1e3", 00:14:56.180 "strip_size_kb": 64, 00:14:56.180 "state": "configuring", 00:14:56.180 "raid_level": "raid0", 00:14:56.180 "superblock": true, 00:14:56.180 "num_base_bdevs": 2, 00:14:56.180 "num_base_bdevs_discovered": 1, 00:14:56.180 "num_base_bdevs_operational": 2, 00:14:56.180 "base_bdevs_list": [ 00:14:56.180 { 00:14:56.180 "name": "BaseBdev1", 00:14:56.180 "uuid": "a9fe0202-8ce6-4c27-be00-9ced4d01a753", 00:14:56.180 "is_configured": true, 00:14:56.180 "data_offset": 2048, 00:14:56.180 "data_size": 63488 00:14:56.180 }, 00:14:56.180 { 00:14:56.180 "name": "BaseBdev2", 00:14:56.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.180 "is_configured": false, 00:14:56.180 "data_offset": 0, 00:14:56.180 "data_size": 0 00:14:56.180 } 00:14:56.180 ] 00:14:56.180 }' 00:14:56.180 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:56.180 15:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.748 15:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:56.748 [2024-07-23 15:08:52.101670] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.748 [2024-07-23 15:08:52.102253] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:14:56.748 [2024-07-23 15:08:52.102439] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:56.748 [2024-07-23 15:08:52.102660] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:14:56.748 BaseBdev2 00:14:56.748 [2024-07-23 15:08:52.103225] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:14:56.748 [2024-07-23 15:08:52.103250] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:14:56.748 [2024-07-23 15:08:52.103419] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.748 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:56.748 15:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:56.748 15:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:56.748 15:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:56.748 15:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:56.748 15:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:56.748 15:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:57.007 15:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:57.266 [ 00:14:57.266 { 00:14:57.266 "name": "BaseBdev2", 00:14:57.266 "aliases": [ 00:14:57.266 "dd13f362-ae24-4ca3-8f23-6be14949fde3" 00:14:57.266 ], 00:14:57.266 "product_name": "Malloc disk", 00:14:57.266 "block_size": 512, 00:14:57.266 "num_blocks": 65536, 00:14:57.266 "uuid": "dd13f362-ae24-4ca3-8f23-6be14949fde3", 00:14:57.266 "assigned_rate_limits": { 00:14:57.266 "rw_ios_per_sec": 0, 00:14:57.266 "rw_mbytes_per_sec": 0, 00:14:57.266 "r_mbytes_per_sec": 0, 00:14:57.266 "w_mbytes_per_sec": 0 00:14:57.266 }, 00:14:57.266 "claimed": true, 00:14:57.266 "claim_type": "exclusive_write", 00:14:57.266 "zoned": false, 00:14:57.266 "supported_io_types": { 00:14:57.266 "read": true, 00:14:57.266 "write": true, 00:14:57.266 "unmap": true, 00:14:57.266 "flush": true, 00:14:57.266 "reset": true, 00:14:57.266 "nvme_admin": false, 00:14:57.266 "nvme_io": false, 00:14:57.266 "nvme_io_md": false, 00:14:57.266 "write_zeroes": true, 00:14:57.266 "zcopy": true, 00:14:57.266 "get_zone_info": false, 00:14:57.266 "zone_management": false, 00:14:57.266 "zone_append": false, 00:14:57.266 "compare": false, 00:14:57.266 "compare_and_write": false, 00:14:57.266 "abort": true, 00:14:57.266 "seek_hole": false, 00:14:57.266 "seek_data": false, 00:14:57.266 "copy": true, 00:14:57.266 "nvme_iov_md": false 00:14:57.266 }, 00:14:57.266 "memory_domains": [ 00:14:57.266 { 00:14:57.266 "dma_device_id": "system", 00:14:57.266 "dma_device_type": 1 00:14:57.266 }, 00:14:57.266 { 00:14:57.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.266 "dma_device_type": 2 00:14:57.266 } 00:14:57.266 ], 00:14:57.266 "driver_specific": {} 00:14:57.266 } 00:14:57.266 ] 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 2 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.266 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.525 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.525 "name": "Existed_Raid", 00:14:57.525 "uuid": "79282a9b-3a44-4153-b7ff-2d856d98e1e3", 00:14:57.525 "strip_size_kb": 64, 00:14:57.525 "state": "online", 00:14:57.525 "raid_level": "raid0", 00:14:57.525 "superblock": true, 00:14:57.525 "num_base_bdevs": 2, 00:14:57.525 "num_base_bdevs_discovered": 2, 00:14:57.525 "num_base_bdevs_operational": 2, 00:14:57.525 "base_bdevs_list": [ 00:14:57.525 { 00:14:57.525 "name": "BaseBdev1", 00:14:57.525 "uuid": "a9fe0202-8ce6-4c27-be00-9ced4d01a753", 00:14:57.525 "is_configured": true, 00:14:57.525 "data_offset": 2048, 00:14:57.525 "data_size": 63488 00:14:57.525 }, 00:14:57.525 { 00:14:57.525 "name": "BaseBdev2", 00:14:57.525 "uuid": "dd13f362-ae24-4ca3-8f23-6be14949fde3", 00:14:57.525 "is_configured": true, 00:14:57.525 "data_offset": 2048, 00:14:57.525 "data_size": 63488 00:14:57.525 } 00:14:57.525 ] 00:14:57.525 }' 00:14:57.525 15:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.525 15:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.784 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:57.784 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:57.784 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:57.784 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:57.784 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:57.784 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:57.784 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:57.784 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 
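For reference, the verify_raid_bdev_state check exercised above reduces to the RPC/jq calls already visible in this trace; a minimal manual sketch (assuming the same rpc.py path and /var/tmp/spdk-raid.sock RPC socket used in this run, with expected values taken from the dump that follows) looks like:

    # dump the raid bdev record and pull out the fields the test asserts on
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    echo "$info" | jq -r '.state'                        # "online" once BaseBdev2 is configured
    echo "$info" | jq -r '.raid_level, .strip_size_kb'   # raid0, 64
    echo "$info" | jq -r '.num_base_bdevs_discovered'    # 2
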
00:14:58.045 [2024-07-23 15:08:53.386329] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.045 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:58.045 "name": "Existed_Raid", 00:14:58.045 "aliases": [ 00:14:58.045 "79282a9b-3a44-4153-b7ff-2d856d98e1e3" 00:14:58.045 ], 00:14:58.045 "product_name": "Raid Volume", 00:14:58.045 "block_size": 512, 00:14:58.045 "num_blocks": 126976, 00:14:58.045 "uuid": "79282a9b-3a44-4153-b7ff-2d856d98e1e3", 00:14:58.045 "assigned_rate_limits": { 00:14:58.045 "rw_ios_per_sec": 0, 00:14:58.045 "rw_mbytes_per_sec": 0, 00:14:58.045 "r_mbytes_per_sec": 0, 00:14:58.045 "w_mbytes_per_sec": 0 00:14:58.045 }, 00:14:58.045 "claimed": false, 00:14:58.045 "zoned": false, 00:14:58.045 "supported_io_types": { 00:14:58.045 "read": true, 00:14:58.045 "write": true, 00:14:58.045 "unmap": true, 00:14:58.045 "flush": true, 00:14:58.045 "reset": true, 00:14:58.045 "nvme_admin": false, 00:14:58.045 "nvme_io": false, 00:14:58.045 "nvme_io_md": false, 00:14:58.045 "write_zeroes": true, 00:14:58.045 "zcopy": false, 00:14:58.045 "get_zone_info": false, 00:14:58.045 "zone_management": false, 00:14:58.045 "zone_append": false, 00:14:58.045 "compare": false, 00:14:58.045 "compare_and_write": false, 00:14:58.045 "abort": false, 00:14:58.045 "seek_hole": false, 00:14:58.045 "seek_data": false, 00:14:58.045 "copy": false, 00:14:58.045 "nvme_iov_md": false 00:14:58.045 }, 00:14:58.045 "memory_domains": [ 00:14:58.045 { 00:14:58.045 "dma_device_id": "system", 00:14:58.045 "dma_device_type": 1 00:14:58.045 }, 00:14:58.045 { 00:14:58.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.046 "dma_device_type": 2 00:14:58.046 }, 00:14:58.046 { 00:14:58.046 "dma_device_id": "system", 00:14:58.046 "dma_device_type": 1 00:14:58.046 }, 00:14:58.046 { 00:14:58.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.046 "dma_device_type": 2 00:14:58.046 } 00:14:58.046 ], 00:14:58.046 "driver_specific": { 00:14:58.046 "raid": { 00:14:58.046 "uuid": "79282a9b-3a44-4153-b7ff-2d856d98e1e3", 00:14:58.046 "strip_size_kb": 64, 00:14:58.046 "state": "online", 00:14:58.046 "raid_level": "raid0", 00:14:58.046 "superblock": true, 00:14:58.046 "num_base_bdevs": 2, 00:14:58.046 "num_base_bdevs_discovered": 2, 00:14:58.046 "num_base_bdevs_operational": 2, 00:14:58.046 "base_bdevs_list": [ 00:14:58.046 { 00:14:58.046 "name": "BaseBdev1", 00:14:58.046 "uuid": "a9fe0202-8ce6-4c27-be00-9ced4d01a753", 00:14:58.046 "is_configured": true, 00:14:58.046 "data_offset": 2048, 00:14:58.046 "data_size": 63488 00:14:58.046 }, 00:14:58.046 { 00:14:58.046 "name": "BaseBdev2", 00:14:58.046 "uuid": "dd13f362-ae24-4ca3-8f23-6be14949fde3", 00:14:58.046 "is_configured": true, 00:14:58.046 "data_offset": 2048, 00:14:58.046 "data_size": 63488 00:14:58.046 } 00:14:58.046 ] 00:14:58.046 } 00:14:58.046 } 00:14:58.046 }' 00:14:58.046 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:58.046 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:58.046 BaseBdev2' 00:14:58.046 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:58.046 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:58.046 
15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:58.304 "name": "BaseBdev1", 00:14:58.304 "aliases": [ 00:14:58.304 "a9fe0202-8ce6-4c27-be00-9ced4d01a753" 00:14:58.304 ], 00:14:58.304 "product_name": "Malloc disk", 00:14:58.304 "block_size": 512, 00:14:58.304 "num_blocks": 65536, 00:14:58.304 "uuid": "a9fe0202-8ce6-4c27-be00-9ced4d01a753", 00:14:58.304 "assigned_rate_limits": { 00:14:58.304 "rw_ios_per_sec": 0, 00:14:58.304 "rw_mbytes_per_sec": 0, 00:14:58.304 "r_mbytes_per_sec": 0, 00:14:58.304 "w_mbytes_per_sec": 0 00:14:58.304 }, 00:14:58.304 "claimed": true, 00:14:58.304 "claim_type": "exclusive_write", 00:14:58.304 "zoned": false, 00:14:58.304 "supported_io_types": { 00:14:58.304 "read": true, 00:14:58.304 "write": true, 00:14:58.304 "unmap": true, 00:14:58.304 "flush": true, 00:14:58.304 "reset": true, 00:14:58.304 "nvme_admin": false, 00:14:58.304 "nvme_io": false, 00:14:58.304 "nvme_io_md": false, 00:14:58.304 "write_zeroes": true, 00:14:58.304 "zcopy": true, 00:14:58.304 "get_zone_info": false, 00:14:58.304 "zone_management": false, 00:14:58.304 "zone_append": false, 00:14:58.304 "compare": false, 00:14:58.304 "compare_and_write": false, 00:14:58.304 "abort": true, 00:14:58.304 "seek_hole": false, 00:14:58.304 "seek_data": false, 00:14:58.304 "copy": true, 00:14:58.304 "nvme_iov_md": false 00:14:58.304 }, 00:14:58.304 "memory_domains": [ 00:14:58.304 { 00:14:58.304 "dma_device_id": "system", 00:14:58.304 "dma_device_type": 1 00:14:58.304 }, 00:14:58.304 { 00:14:58.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.304 "dma_device_type": 2 00:14:58.304 } 00:14:58.304 ], 00:14:58.304 "driver_specific": {} 00:14:58.304 }' 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:58.304 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:58.563 15:08:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:58.563 "name": "BaseBdev2", 00:14:58.563 "aliases": [ 00:14:58.563 "dd13f362-ae24-4ca3-8f23-6be14949fde3" 00:14:58.563 ], 00:14:58.563 "product_name": "Malloc disk", 00:14:58.563 "block_size": 512, 00:14:58.563 "num_blocks": 65536, 00:14:58.563 "uuid": "dd13f362-ae24-4ca3-8f23-6be14949fde3", 00:14:58.563 "assigned_rate_limits": { 00:14:58.563 "rw_ios_per_sec": 0, 00:14:58.563 "rw_mbytes_per_sec": 0, 00:14:58.563 "r_mbytes_per_sec": 0, 00:14:58.563 "w_mbytes_per_sec": 0 00:14:58.563 }, 00:14:58.563 "claimed": true, 00:14:58.563 "claim_type": "exclusive_write", 00:14:58.563 "zoned": false, 00:14:58.563 "supported_io_types": { 00:14:58.563 "read": true, 00:14:58.563 "write": true, 00:14:58.563 "unmap": true, 00:14:58.563 "flush": true, 00:14:58.563 "reset": true, 00:14:58.563 "nvme_admin": false, 00:14:58.563 "nvme_io": false, 00:14:58.563 "nvme_io_md": false, 00:14:58.563 "write_zeroes": true, 00:14:58.563 "zcopy": true, 00:14:58.563 "get_zone_info": false, 00:14:58.563 "zone_management": false, 00:14:58.563 "zone_append": false, 00:14:58.563 "compare": false, 00:14:58.563 "compare_and_write": false, 00:14:58.563 "abort": true, 00:14:58.563 "seek_hole": false, 00:14:58.563 "seek_data": false, 00:14:58.563 "copy": true, 00:14:58.563 "nvme_iov_md": false 00:14:58.563 }, 00:14:58.563 "memory_domains": [ 00:14:58.563 { 00:14:58.563 "dma_device_id": "system", 00:14:58.563 "dma_device_type": 1 00:14:58.563 }, 00:14:58.563 { 00:14:58.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.563 "dma_device_type": 2 00:14:58.563 } 00:14:58.563 ], 00:14:58.563 "driver_specific": {} 00:14:58.563 }' 00:14:58.563 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.563 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.563 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:58.563 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.563 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.563 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:58.563 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.563 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.563 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:58.563 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.563 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.563 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:58.563 15:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:58.822 [2024-07-23 15:08:54.138306] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.822 [2024-07-23 15:08:54.138347] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.822 [2024-07-23 15:08:54.138426] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@275 -- # local expected_state 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:58.822 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:58.823 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.823 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.081 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.081 "name": "Existed_Raid", 00:14:59.081 "uuid": "79282a9b-3a44-4153-b7ff-2d856d98e1e3", 00:14:59.081 "strip_size_kb": 64, 00:14:59.081 "state": "offline", 00:14:59.081 "raid_level": "raid0", 00:14:59.081 "superblock": true, 00:14:59.081 "num_base_bdevs": 2, 00:14:59.081 "num_base_bdevs_discovered": 1, 00:14:59.081 "num_base_bdevs_operational": 1, 00:14:59.081 "base_bdevs_list": [ 00:14:59.081 { 00:14:59.081 "name": null, 00:14:59.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.081 "is_configured": false, 00:14:59.081 "data_offset": 2048, 00:14:59.081 "data_size": 63488 00:14:59.081 }, 00:14:59.081 { 00:14:59.081 "name": "BaseBdev2", 00:14:59.081 "uuid": "dd13f362-ae24-4ca3-8f23-6be14949fde3", 00:14:59.081 "is_configured": true, 00:14:59.081 "data_offset": 2048, 00:14:59.081 "data_size": 63488 00:14:59.081 } 00:14:59.081 ] 00:14:59.081 }' 00:14:59.081 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.081 15:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.340 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:59.340 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:59.340 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:59.340 15:08:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.599 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:59.599 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.599 15:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:59.859 [2024-07-23 15:08:55.202893] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:59.859 [2024-07-23 15:08:55.202960] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:14:59.859 15:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:59.859 15:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:59.859 15:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.859 15:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:00.118 15:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:00.118 15:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:00.118 15:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:00.118 15:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 87661 00:15:00.118 15:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 87661 ']' 00:15:00.118 15:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 87661 00:15:00.118 15:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:00.118 15:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:00.118 15:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87661 00:15:00.118 killing process with pid 87661 00:15:00.118 15:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:00.118 15:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:00.118 15:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87661' 00:15:00.118 15:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 87661 00:15:00.118 [2024-07-23 15:08:55.531337] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:00.118 [2024-07-23 15:08:55.531410] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.118 15:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 87661 00:15:00.377 15:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:00.377 00:15:00.377 real 0m8.021s 00:15:00.377 user 0m13.467s 00:15:00.377 sys 0m1.724s 00:15:00.377 15:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:00.377 ************************************ 
00:15:00.377 END TEST raid_state_function_test_sb 00:15:00.377 ************************************ 00:15:00.377 15:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.636 15:08:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:00.636 15:08:55 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:15:00.636 15:08:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:00.636 15:08:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.636 15:08:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:00.636 ************************************ 00:15:00.636 START TEST raid_superblock_test 00:15:00.636 ************************************ 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=87984 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 87984 /var/tmp/spdk-raid.sock 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 87984 ']' 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:15:00.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.636 15:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.636 [2024-07-23 15:08:55.910188] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:15:00.636 [2024-07-23 15:08:55.910387] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87984 ] 00:15:00.636 [2024-07-23 15:08:56.062741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.928 [2024-07-23 15:08:56.110070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.928 [2024-07-23 15:08:56.155732] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.494 15:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.494 15:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:01.494 15:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:01.494 15:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:01.494 15:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:01.494 15:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:01.494 15:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:01.494 15:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.494 15:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.494 15:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.494 15:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:01.752 malloc1 00:15:01.752 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:02.011 [2024-07-23 15:08:57.247073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:02.011 [2024-07-23 15:08:57.247314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.011 [2024-07-23 15:08:57.247425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:15:02.011 [2024-07-23 15:08:57.247531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.011 [2024-07-23 15:08:57.250192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.011 [2024-07-23 15:08:57.250348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:02.011 pt1 00:15:02.011 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:02.011 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 
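The raid_superblock_test setup that follows builds each base device the same way: a malloc bdev wrapped in a passthru bdev, then assembled into a raid0 volume carrying a superblock. A rough reproduction of that sequence, sketched from the RPC calls recorded later in this run (same rpc.py path and socket; the UUIDs are the fixed test values shown in the trace), would be:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # base bdev 1: 32 MiB malloc with 512-byte blocks, exposed through passthru bdev pt1
    $rpc bdev_malloc_create 32 512 -b malloc1
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # base bdev 2: same shape, exposed as pt2
    $rpc bdev_malloc_create 32 512 -b malloc2
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # assemble both passthru bdevs into raid_bdev1: raid0, 64 KiB strip, superblock enabled (-s)
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
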
00:15:02.011 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:02.011 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:02.011 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:02.011 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:02.011 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:02.011 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:02.011 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:02.011 malloc2 00:15:02.270 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:02.270 [2024-07-23 15:08:57.668674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:02.270 [2024-07-23 15:08:57.668956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.270 [2024-07-23 15:08:57.669088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:15:02.270 [2024-07-23 15:08:57.669187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.270 [2024-07-23 15:08:57.671903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.270 [2024-07-23 15:08:57.672070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:02.270 pt2 00:15:02.270 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:02.270 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:02.270 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:02.529 [2024-07-23 15:08:57.848831] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:02.529 [2024-07-23 15:08:57.851022] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.529 [2024-07-23 15:08:57.851216] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006c80 00:15:02.529 [2024-07-23 15:08:57.851234] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:02.529 [2024-07-23 15:08:57.851360] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:15:02.529 [2024-07-23 15:08:57.851688] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006c80 00:15:02.529 [2024-07-23 15:08:57.851706] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000006c80 00:15:02.529 [2024-07-23 15:08:57.851994] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.529 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:02.529 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:15:02.529 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:02.529 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:02.529 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:02.529 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:02.529 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:02.529 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:02.529 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:02.529 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:02.529 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.529 15:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.787 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:02.787 "name": "raid_bdev1", 00:15:02.787 "uuid": "a9409fa6-184a-4024-ad5f-7f67c229c4fc", 00:15:02.787 "strip_size_kb": 64, 00:15:02.787 "state": "online", 00:15:02.787 "raid_level": "raid0", 00:15:02.787 "superblock": true, 00:15:02.787 "num_base_bdevs": 2, 00:15:02.787 "num_base_bdevs_discovered": 2, 00:15:02.787 "num_base_bdevs_operational": 2, 00:15:02.787 "base_bdevs_list": [ 00:15:02.787 { 00:15:02.787 "name": "pt1", 00:15:02.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.787 "is_configured": true, 00:15:02.787 "data_offset": 2048, 00:15:02.787 "data_size": 63488 00:15:02.787 }, 00:15:02.787 { 00:15:02.787 "name": "pt2", 00:15:02.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.787 "is_configured": true, 00:15:02.787 "data_offset": 2048, 00:15:02.787 "data_size": 63488 00:15:02.787 } 00:15:02.787 ] 00:15:02.787 }' 00:15:02.787 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:02.787 15:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.045 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:03.045 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:03.045 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:03.045 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:03.045 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:03.045 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:03.045 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:03.045 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:03.304 [2024-07-23 15:08:58.569246] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.304 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:03.304 "name": "raid_bdev1", 00:15:03.304 "aliases": [ 
00:15:03.304 "a9409fa6-184a-4024-ad5f-7f67c229c4fc" 00:15:03.304 ], 00:15:03.304 "product_name": "Raid Volume", 00:15:03.304 "block_size": 512, 00:15:03.304 "num_blocks": 126976, 00:15:03.304 "uuid": "a9409fa6-184a-4024-ad5f-7f67c229c4fc", 00:15:03.304 "assigned_rate_limits": { 00:15:03.304 "rw_ios_per_sec": 0, 00:15:03.304 "rw_mbytes_per_sec": 0, 00:15:03.304 "r_mbytes_per_sec": 0, 00:15:03.304 "w_mbytes_per_sec": 0 00:15:03.304 }, 00:15:03.304 "claimed": false, 00:15:03.304 "zoned": false, 00:15:03.304 "supported_io_types": { 00:15:03.304 "read": true, 00:15:03.304 "write": true, 00:15:03.304 "unmap": true, 00:15:03.304 "flush": true, 00:15:03.304 "reset": true, 00:15:03.304 "nvme_admin": false, 00:15:03.304 "nvme_io": false, 00:15:03.304 "nvme_io_md": false, 00:15:03.304 "write_zeroes": true, 00:15:03.304 "zcopy": false, 00:15:03.304 "get_zone_info": false, 00:15:03.304 "zone_management": false, 00:15:03.304 "zone_append": false, 00:15:03.304 "compare": false, 00:15:03.304 "compare_and_write": false, 00:15:03.304 "abort": false, 00:15:03.304 "seek_hole": false, 00:15:03.304 "seek_data": false, 00:15:03.304 "copy": false, 00:15:03.304 "nvme_iov_md": false 00:15:03.304 }, 00:15:03.304 "memory_domains": [ 00:15:03.304 { 00:15:03.304 "dma_device_id": "system", 00:15:03.304 "dma_device_type": 1 00:15:03.304 }, 00:15:03.304 { 00:15:03.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.304 "dma_device_type": 2 00:15:03.304 }, 00:15:03.304 { 00:15:03.304 "dma_device_id": "system", 00:15:03.304 "dma_device_type": 1 00:15:03.304 }, 00:15:03.304 { 00:15:03.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.304 "dma_device_type": 2 00:15:03.304 } 00:15:03.304 ], 00:15:03.304 "driver_specific": { 00:15:03.304 "raid": { 00:15:03.304 "uuid": "a9409fa6-184a-4024-ad5f-7f67c229c4fc", 00:15:03.304 "strip_size_kb": 64, 00:15:03.304 "state": "online", 00:15:03.304 "raid_level": "raid0", 00:15:03.304 "superblock": true, 00:15:03.304 "num_base_bdevs": 2, 00:15:03.304 "num_base_bdevs_discovered": 2, 00:15:03.304 "num_base_bdevs_operational": 2, 00:15:03.304 "base_bdevs_list": [ 00:15:03.304 { 00:15:03.304 "name": "pt1", 00:15:03.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.304 "is_configured": true, 00:15:03.304 "data_offset": 2048, 00:15:03.304 "data_size": 63488 00:15:03.304 }, 00:15:03.304 { 00:15:03.304 "name": "pt2", 00:15:03.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.304 "is_configured": true, 00:15:03.304 "data_offset": 2048, 00:15:03.304 "data_size": 63488 00:15:03.304 } 00:15:03.304 ] 00:15:03.304 } 00:15:03.304 } 00:15:03.304 }' 00:15:03.304 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.304 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:03.304 pt2' 00:15:03.304 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.305 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:03.305 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.564 "name": "pt1", 00:15:03.564 "aliases": [ 00:15:03.564 "00000000-0000-0000-0000-000000000001" 00:15:03.564 ], 00:15:03.564 "product_name": "passthru", 00:15:03.564 
"block_size": 512, 00:15:03.564 "num_blocks": 65536, 00:15:03.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.564 "assigned_rate_limits": { 00:15:03.564 "rw_ios_per_sec": 0, 00:15:03.564 "rw_mbytes_per_sec": 0, 00:15:03.564 "r_mbytes_per_sec": 0, 00:15:03.564 "w_mbytes_per_sec": 0 00:15:03.564 }, 00:15:03.564 "claimed": true, 00:15:03.564 "claim_type": "exclusive_write", 00:15:03.564 "zoned": false, 00:15:03.564 "supported_io_types": { 00:15:03.564 "read": true, 00:15:03.564 "write": true, 00:15:03.564 "unmap": true, 00:15:03.564 "flush": true, 00:15:03.564 "reset": true, 00:15:03.564 "nvme_admin": false, 00:15:03.564 "nvme_io": false, 00:15:03.564 "nvme_io_md": false, 00:15:03.564 "write_zeroes": true, 00:15:03.564 "zcopy": true, 00:15:03.564 "get_zone_info": false, 00:15:03.564 "zone_management": false, 00:15:03.564 "zone_append": false, 00:15:03.564 "compare": false, 00:15:03.564 "compare_and_write": false, 00:15:03.564 "abort": true, 00:15:03.564 "seek_hole": false, 00:15:03.564 "seek_data": false, 00:15:03.564 "copy": true, 00:15:03.564 "nvme_iov_md": false 00:15:03.564 }, 00:15:03.564 "memory_domains": [ 00:15:03.564 { 00:15:03.564 "dma_device_id": "system", 00:15:03.564 "dma_device_type": 1 00:15:03.564 }, 00:15:03.564 { 00:15:03.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.564 "dma_device_type": 2 00:15:03.564 } 00:15:03.564 ], 00:15:03.564 "driver_specific": { 00:15:03.564 "passthru": { 00:15:03.564 "name": "pt1", 00:15:03.564 "base_bdev_name": "malloc1" 00:15:03.564 } 00:15:03.564 } 00:15:03.564 }' 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:03.564 15:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.822 "name": "pt2", 00:15:03.822 "aliases": [ 00:15:03.822 "00000000-0000-0000-0000-000000000002" 00:15:03.822 ], 00:15:03.822 "product_name": "passthru", 00:15:03.822 "block_size": 512, 00:15:03.822 "num_blocks": 65536, 00:15:03.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.822 "assigned_rate_limits": { 
00:15:03.822 "rw_ios_per_sec": 0, 00:15:03.822 "rw_mbytes_per_sec": 0, 00:15:03.822 "r_mbytes_per_sec": 0, 00:15:03.822 "w_mbytes_per_sec": 0 00:15:03.822 }, 00:15:03.822 "claimed": true, 00:15:03.822 "claim_type": "exclusive_write", 00:15:03.822 "zoned": false, 00:15:03.822 "supported_io_types": { 00:15:03.822 "read": true, 00:15:03.822 "write": true, 00:15:03.822 "unmap": true, 00:15:03.822 "flush": true, 00:15:03.822 "reset": true, 00:15:03.822 "nvme_admin": false, 00:15:03.822 "nvme_io": false, 00:15:03.822 "nvme_io_md": false, 00:15:03.822 "write_zeroes": true, 00:15:03.822 "zcopy": true, 00:15:03.822 "get_zone_info": false, 00:15:03.822 "zone_management": false, 00:15:03.822 "zone_append": false, 00:15:03.822 "compare": false, 00:15:03.822 "compare_and_write": false, 00:15:03.822 "abort": true, 00:15:03.822 "seek_hole": false, 00:15:03.822 "seek_data": false, 00:15:03.822 "copy": true, 00:15:03.822 "nvme_iov_md": false 00:15:03.822 }, 00:15:03.822 "memory_domains": [ 00:15:03.822 { 00:15:03.822 "dma_device_id": "system", 00:15:03.822 "dma_device_type": 1 00:15:03.822 }, 00:15:03.822 { 00:15:03.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.822 "dma_device_type": 2 00:15:03.822 } 00:15:03.822 ], 00:15:03.822 "driver_specific": { 00:15:03.822 "passthru": { 00:15:03.822 "name": "pt2", 00:15:03.822 "base_bdev_name": "malloc2" 00:15:03.822 } 00:15:03.822 } 00:15:03.822 }' 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:03.822 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:04.079 [2024-07-23 15:08:59.401387] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.079 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=a9409fa6-184a-4024-ad5f-7f67c229c4fc 00:15:04.079 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z a9409fa6-184a-4024-ad5f-7f67c229c4fc ']' 00:15:04.079 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:04.337 [2024-07-23 15:08:59.653178] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:15:04.337 [2024-07-23 15:08:59.653227] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.337 [2024-07-23 15:08:59.653324] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.337 [2024-07-23 15:08:59.653384] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.337 [2024-07-23 15:08:59.653403] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006c80 name raid_bdev1, state offline 00:15:04.337 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.337 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:04.595 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:04.595 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:04.595 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:04.595 15:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:04.853 15:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:04.853 15:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:05.111 15:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:05.111 15:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:05.111 15:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:05.111 15:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:05.111 15:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:05.111 15:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:05.111 15:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.369 15:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.369 15:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.369 15:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.369 15:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.369 15:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.369 15:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.369 15:09:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:05.369 15:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:05.369 [2024-07-23 15:09:00.789463] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:05.369 [2024-07-23 15:09:00.791678] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:05.369 [2024-07-23 15:09:00.791756] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:05.369 [2024-07-23 15:09:00.791936] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:05.369 [2024-07-23 15:09:00.792123] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.369 [2024-07-23 15:09:00.792138] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name raid_bdev1, state configuring 00:15:05.369 request: 00:15:05.369 { 00:15:05.369 "name": "raid_bdev1", 00:15:05.369 "raid_level": "raid0", 00:15:05.369 "base_bdevs": [ 00:15:05.369 "malloc1", 00:15:05.369 "malloc2" 00:15:05.369 ], 00:15:05.369 "strip_size_kb": 64, 00:15:05.369 "superblock": false, 00:15:05.369 "method": "bdev_raid_create", 00:15:05.369 "req_id": 1 00:15:05.369 } 00:15:05.369 Got JSON-RPC error response 00:15:05.369 response: 00:15:05.369 { 00:15:05.369 "code": -17, 00:15:05.369 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:05.369 } 00:15:05.627 15:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:05.627 15:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:05.627 15:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:05.627 15:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:05.627 15:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.627 15:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:05.627 15:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:05.627 15:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:05.627 15:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:05.885 [2024-07-23 15:09:01.149504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:05.885 [2024-07-23 15:09:01.149760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.885 [2024-07-23 15:09:01.149842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:15:05.885 [2024-07-23 15:09:01.149959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.885 [2024-07-23 15:09:01.152609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.885 [2024-07-23 15:09:01.152747] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt1 00:15:05.885 [2024-07-23 15:09:01.152965] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:05.885 [2024-07-23 15:09:01.153113] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:05.885 pt1 00:15:05.885 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:15:05.885 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:05.885 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:05.885 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:05.885 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:05.885 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:05.885 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.885 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.885 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.885 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.885 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.885 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.144 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:06.144 "name": "raid_bdev1", 00:15:06.144 "uuid": "a9409fa6-184a-4024-ad5f-7f67c229c4fc", 00:15:06.144 "strip_size_kb": 64, 00:15:06.144 "state": "configuring", 00:15:06.144 "raid_level": "raid0", 00:15:06.144 "superblock": true, 00:15:06.144 "num_base_bdevs": 2, 00:15:06.144 "num_base_bdevs_discovered": 1, 00:15:06.144 "num_base_bdevs_operational": 2, 00:15:06.144 "base_bdevs_list": [ 00:15:06.144 { 00:15:06.144 "name": "pt1", 00:15:06.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.144 "is_configured": true, 00:15:06.144 "data_offset": 2048, 00:15:06.144 "data_size": 63488 00:15:06.144 }, 00:15:06.144 { 00:15:06.144 "name": null, 00:15:06.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.144 "is_configured": false, 00:15:06.144 "data_offset": 2048, 00:15:06.144 "data_size": 63488 00:15:06.144 } 00:15:06.144 ] 00:15:06.144 }' 00:15:06.144 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:06.144 15:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.402 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:15:06.402 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:06.402 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:06.402 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:06.402 [2024-07-23 15:09:01.829630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:06.402 
[2024-07-23 15:09:01.829703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.402 [2024-07-23 15:09:01.829733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:15:06.402 [2024-07-23 15:09:01.829761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.402 [2024-07-23 15:09:01.830250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.403 [2024-07-23 15:09:01.830278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:06.661 [2024-07-23 15:09:01.830356] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:06.661 [2024-07-23 15:09:01.830386] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:06.661 [2024-07-23 15:09:01.830503] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:15:06.661 [2024-07-23 15:09:01.830514] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:06.661 [2024-07-23 15:09:01.830599] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:15:06.661 [2024-07-23 15:09:01.830918] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:15:06.661 [2024-07-23 15:09:01.830948] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:15:06.661 [2024-07-23 15:09:01.831043] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.661 pt2 00:15:06.661 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:06.661 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:06.661 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:06.661 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:06.661 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:06.661 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:06.661 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:06.662 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:06.662 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:06.662 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:06.662 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:06.662 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:06.662 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.662 15:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.662 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:06.662 "name": "raid_bdev1", 00:15:06.662 "uuid": "a9409fa6-184a-4024-ad5f-7f67c229c4fc", 00:15:06.662 "strip_size_kb": 64, 00:15:06.662 "state": "online", 00:15:06.662 "raid_level": "raid0", 
00:15:06.662 "superblock": true, 00:15:06.662 "num_base_bdevs": 2, 00:15:06.662 "num_base_bdevs_discovered": 2, 00:15:06.662 "num_base_bdevs_operational": 2, 00:15:06.662 "base_bdevs_list": [ 00:15:06.662 { 00:15:06.662 "name": "pt1", 00:15:06.662 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.662 "is_configured": true, 00:15:06.662 "data_offset": 2048, 00:15:06.662 "data_size": 63488 00:15:06.662 }, 00:15:06.662 { 00:15:06.662 "name": "pt2", 00:15:06.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.662 "is_configured": true, 00:15:06.662 "data_offset": 2048, 00:15:06.662 "data_size": 63488 00:15:06.662 } 00:15:06.662 ] 00:15:06.662 }' 00:15:06.662 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:06.662 15:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.920 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:06.920 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:06.920 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:06.920 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:06.920 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:06.920 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:06.920 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:06.920 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:07.179 [2024-07-23 15:09:02.450021] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.179 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:07.179 "name": "raid_bdev1", 00:15:07.179 "aliases": [ 00:15:07.179 "a9409fa6-184a-4024-ad5f-7f67c229c4fc" 00:15:07.179 ], 00:15:07.179 "product_name": "Raid Volume", 00:15:07.179 "block_size": 512, 00:15:07.179 "num_blocks": 126976, 00:15:07.179 "uuid": "a9409fa6-184a-4024-ad5f-7f67c229c4fc", 00:15:07.179 "assigned_rate_limits": { 00:15:07.179 "rw_ios_per_sec": 0, 00:15:07.179 "rw_mbytes_per_sec": 0, 00:15:07.179 "r_mbytes_per_sec": 0, 00:15:07.179 "w_mbytes_per_sec": 0 00:15:07.179 }, 00:15:07.179 "claimed": false, 00:15:07.179 "zoned": false, 00:15:07.179 "supported_io_types": { 00:15:07.179 "read": true, 00:15:07.179 "write": true, 00:15:07.179 "unmap": true, 00:15:07.179 "flush": true, 00:15:07.179 "reset": true, 00:15:07.179 "nvme_admin": false, 00:15:07.179 "nvme_io": false, 00:15:07.179 "nvme_io_md": false, 00:15:07.179 "write_zeroes": true, 00:15:07.179 "zcopy": false, 00:15:07.179 "get_zone_info": false, 00:15:07.179 "zone_management": false, 00:15:07.179 "zone_append": false, 00:15:07.179 "compare": false, 00:15:07.179 "compare_and_write": false, 00:15:07.179 "abort": false, 00:15:07.179 "seek_hole": false, 00:15:07.179 "seek_data": false, 00:15:07.179 "copy": false, 00:15:07.179 "nvme_iov_md": false 00:15:07.179 }, 00:15:07.179 "memory_domains": [ 00:15:07.179 { 00:15:07.179 "dma_device_id": "system", 00:15:07.179 "dma_device_type": 1 00:15:07.179 }, 00:15:07.179 { 00:15:07.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.179 "dma_device_type": 2 00:15:07.179 }, 00:15:07.179 { 00:15:07.179 
"dma_device_id": "system", 00:15:07.179 "dma_device_type": 1 00:15:07.179 }, 00:15:07.179 { 00:15:07.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.179 "dma_device_type": 2 00:15:07.179 } 00:15:07.179 ], 00:15:07.179 "driver_specific": { 00:15:07.179 "raid": { 00:15:07.179 "uuid": "a9409fa6-184a-4024-ad5f-7f67c229c4fc", 00:15:07.179 "strip_size_kb": 64, 00:15:07.179 "state": "online", 00:15:07.179 "raid_level": "raid0", 00:15:07.179 "superblock": true, 00:15:07.179 "num_base_bdevs": 2, 00:15:07.179 "num_base_bdevs_discovered": 2, 00:15:07.180 "num_base_bdevs_operational": 2, 00:15:07.180 "base_bdevs_list": [ 00:15:07.180 { 00:15:07.180 "name": "pt1", 00:15:07.180 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.180 "is_configured": true, 00:15:07.180 "data_offset": 2048, 00:15:07.180 "data_size": 63488 00:15:07.180 }, 00:15:07.180 { 00:15:07.180 "name": "pt2", 00:15:07.180 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.180 "is_configured": true, 00:15:07.180 "data_offset": 2048, 00:15:07.180 "data_size": 63488 00:15:07.180 } 00:15:07.180 ] 00:15:07.180 } 00:15:07.180 } 00:15:07.180 }' 00:15:07.180 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:07.180 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:07.180 pt2' 00:15:07.180 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:07.180 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:07.180 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:07.439 "name": "pt1", 00:15:07.439 "aliases": [ 00:15:07.439 "00000000-0000-0000-0000-000000000001" 00:15:07.439 ], 00:15:07.439 "product_name": "passthru", 00:15:07.439 "block_size": 512, 00:15:07.439 "num_blocks": 65536, 00:15:07.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.439 "assigned_rate_limits": { 00:15:07.439 "rw_ios_per_sec": 0, 00:15:07.439 "rw_mbytes_per_sec": 0, 00:15:07.439 "r_mbytes_per_sec": 0, 00:15:07.439 "w_mbytes_per_sec": 0 00:15:07.439 }, 00:15:07.439 "claimed": true, 00:15:07.439 "claim_type": "exclusive_write", 00:15:07.439 "zoned": false, 00:15:07.439 "supported_io_types": { 00:15:07.439 "read": true, 00:15:07.439 "write": true, 00:15:07.439 "unmap": true, 00:15:07.439 "flush": true, 00:15:07.439 "reset": true, 00:15:07.439 "nvme_admin": false, 00:15:07.439 "nvme_io": false, 00:15:07.439 "nvme_io_md": false, 00:15:07.439 "write_zeroes": true, 00:15:07.439 "zcopy": true, 00:15:07.439 "get_zone_info": false, 00:15:07.439 "zone_management": false, 00:15:07.439 "zone_append": false, 00:15:07.439 "compare": false, 00:15:07.439 "compare_and_write": false, 00:15:07.439 "abort": true, 00:15:07.439 "seek_hole": false, 00:15:07.439 "seek_data": false, 00:15:07.439 "copy": true, 00:15:07.439 "nvme_iov_md": false 00:15:07.439 }, 00:15:07.439 "memory_domains": [ 00:15:07.439 { 00:15:07.439 "dma_device_id": "system", 00:15:07.439 "dma_device_type": 1 00:15:07.439 }, 00:15:07.439 { 00:15:07.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.439 "dma_device_type": 2 00:15:07.439 } 00:15:07.439 ], 00:15:07.439 "driver_specific": { 00:15:07.439 "passthru": { 00:15:07.439 "name": "pt1", 
00:15:07.439 "base_bdev_name": "malloc1" 00:15:07.439 } 00:15:07.439 } 00:15:07.439 }' 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:07.439 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:07.698 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:07.698 "name": "pt2", 00:15:07.698 "aliases": [ 00:15:07.698 "00000000-0000-0000-0000-000000000002" 00:15:07.698 ], 00:15:07.698 "product_name": "passthru", 00:15:07.698 "block_size": 512, 00:15:07.698 "num_blocks": 65536, 00:15:07.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.698 "assigned_rate_limits": { 00:15:07.698 "rw_ios_per_sec": 0, 00:15:07.698 "rw_mbytes_per_sec": 0, 00:15:07.698 "r_mbytes_per_sec": 0, 00:15:07.698 "w_mbytes_per_sec": 0 00:15:07.698 }, 00:15:07.698 "claimed": true, 00:15:07.698 "claim_type": "exclusive_write", 00:15:07.698 "zoned": false, 00:15:07.698 "supported_io_types": { 00:15:07.698 "read": true, 00:15:07.698 "write": true, 00:15:07.698 "unmap": true, 00:15:07.698 "flush": true, 00:15:07.698 "reset": true, 00:15:07.698 "nvme_admin": false, 00:15:07.698 "nvme_io": false, 00:15:07.698 "nvme_io_md": false, 00:15:07.698 "write_zeroes": true, 00:15:07.698 "zcopy": true, 00:15:07.698 "get_zone_info": false, 00:15:07.698 "zone_management": false, 00:15:07.698 "zone_append": false, 00:15:07.698 "compare": false, 00:15:07.698 "compare_and_write": false, 00:15:07.698 "abort": true, 00:15:07.698 "seek_hole": false, 00:15:07.698 "seek_data": false, 00:15:07.698 "copy": true, 00:15:07.698 "nvme_iov_md": false 00:15:07.698 }, 00:15:07.698 "memory_domains": [ 00:15:07.698 { 00:15:07.698 "dma_device_id": "system", 00:15:07.698 "dma_device_type": 1 00:15:07.698 }, 00:15:07.698 { 00:15:07.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.698 "dma_device_type": 2 00:15:07.698 } 00:15:07.698 ], 00:15:07.698 "driver_specific": { 00:15:07.698 "passthru": { 00:15:07.698 "name": "pt2", 00:15:07.698 "base_bdev_name": "malloc2" 00:15:07.698 } 00:15:07.698 } 00:15:07.698 }' 00:15:07.698 15:09:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.698 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.698 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:07.698 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.698 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.698 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:07.698 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.698 15:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.698 15:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:07.698 15:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.698 15:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.698 15:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:07.698 15:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:07.698 15:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:07.956 [2024-07-23 15:09:03.186178] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.956 15:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' a9409fa6-184a-4024-ad5f-7f67c229c4fc '!=' a9409fa6-184a-4024-ad5f-7f67c229c4fc ']' 00:15:07.956 15:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:15:07.956 15:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:07.956 15:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:07.956 15:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 87984 00:15:07.956 15:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 87984 ']' 00:15:07.956 15:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 87984 00:15:07.956 15:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:07.956 15:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:07.957 15:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87984 00:15:07.957 killing process with pid 87984 00:15:07.957 15:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:07.957 15:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:07.957 15:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87984' 00:15:07.957 15:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 87984 00:15:07.957 [2024-07-23 15:09:03.239363] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.957 15:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 87984 00:15:07.957 [2024-07-23 15:09:03.239451] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.957 [2024-07-23 15:09:03.239500] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.957 [2024-07-23 15:09:03.239511] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:15:07.957 [2024-07-23 15:09:03.263188] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:08.253 15:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:08.253 00:15:08.253 real 0m7.660s 00:15:08.253 user 0m12.794s 00:15:08.253 sys 0m1.675s 00:15:08.253 15:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:08.253 15:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.253 ************************************ 00:15:08.253 END TEST raid_superblock_test 00:15:08.253 ************************************ 00:15:08.253 15:09:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:08.253 15:09:03 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:15:08.253 15:09:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:08.253 15:09:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.253 15:09:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:08.253 ************************************ 00:15:08.253 START TEST raid_read_error_test 00:15:08.253 ************************************ 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # 
'[' raid0 '!=' raid1 ']' 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.d3pxpaF4t7 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=88295 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 88295 /var/tmp/spdk-raid.sock 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 88295 ']' 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:08.253 15:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.254 15:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.254 [2024-07-23 15:09:03.627685] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
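
The raid_read_error_test preamble traced above follows the harness pattern used throughout this log: bdevperf is started as a long-lived RPC target and the test only proceeds once its UNIX socket answers. A minimal sketch of that launch-and-wait step, assembled only from the paths, flags and helper names visible in the trace (backgrounding with &, capturing the pid via $!, and redirecting output into the mktemp'd log file are assumptions; the trace just shows raid_pid=88295 being set, the log file name being created, and waitforlisten polling the socket):

    bdevperf_log=$(mktemp -p /raidtest)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &
    raid_pid=$!
    # waitforlisten is the common/autotest_common.sh helper seen above; it retries until the socket is up
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

The -z flag keeps bdevperf idle until a perform_tests RPC arrives on the same socket later in the test.
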
00:15:08.254 [2024-07-23 15:09:03.627838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88295 ] 00:15:08.533 [2024-07-23 15:09:03.766246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.533 [2024-07-23 15:09:03.810554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.533 [2024-07-23 15:09:03.855638] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.533 15:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.533 15:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:08.533 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:08.533 15:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:08.790 BaseBdev1_malloc 00:15:08.791 15:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:09.049 true 00:15:09.049 15:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:09.308 [2024-07-23 15:09:04.590846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:09.308 [2024-07-23 15:09:04.590928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.308 [2024-07-23 15:09:04.590962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:15:09.308 [2024-07-23 15:09:04.590975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.308 [2024-07-23 15:09:04.593726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.308 [2024-07-23 15:09:04.593770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:09.308 BaseBdev1 00:15:09.308 15:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:09.308 15:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:09.567 BaseBdev2_malloc 00:15:09.567 15:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:09.826 true 00:15:09.826 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:10.086 [2024-07-23 15:09:05.340535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:10.086 [2024-07-23 15:09:05.340617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.086 [2024-07-23 15:09:05.340649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:15:10.086 [2024-07-23 15:09:05.340661] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.086 [2024-07-23 15:09:05.343188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.086 [2024-07-23 15:09:05.343231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:10.086 BaseBdev2 00:15:10.086 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:10.086 [2024-07-23 15:09:05.512613] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.345 [2024-07-23 15:09:05.514890] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.345 [2024-07-23 15:09:05.515119] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:15:10.345 [2024-07-23 15:09:05.515134] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:10.345 [2024-07-23 15:09:05.515253] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:15:10.345 [2024-07-23 15:09:05.515610] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:15:10.345 [2024-07-23 15:09:05.515634] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007280 00:15:10.345 [2024-07-23 15:09:05.515761] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.345 "name": "raid_bdev1", 00:15:10.345 "uuid": "5b5c2d72-10cb-4445-87e5-b88a20dd8d65", 00:15:10.345 "strip_size_kb": 64, 00:15:10.345 "state": "online", 00:15:10.345 "raid_level": "raid0", 00:15:10.345 "superblock": true, 00:15:10.345 "num_base_bdevs": 2, 00:15:10.345 "num_base_bdevs_discovered": 2, 00:15:10.345 "num_base_bdevs_operational": 2, 00:15:10.345 "base_bdevs_list": [ 00:15:10.345 { 00:15:10.345 "name": "BaseBdev1", 
00:15:10.345 "uuid": "6c063cfb-9c37-5b9d-baaf-bfef67e6f976", 00:15:10.345 "is_configured": true, 00:15:10.345 "data_offset": 2048, 00:15:10.345 "data_size": 63488 00:15:10.345 }, 00:15:10.345 { 00:15:10.345 "name": "BaseBdev2", 00:15:10.345 "uuid": "e58d1522-cfbc-5ec2-a143-85ac6265f7b6", 00:15:10.345 "is_configured": true, 00:15:10.345 "data_offset": 2048, 00:15:10.345 "data_size": 63488 00:15:10.345 } 00:15:10.345 ] 00:15:10.345 }' 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.345 15:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.912 15:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:10.912 15:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:10.912 [2024-07-23 15:09:06.169177] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:15:11.848 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.107 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.366 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.366 "name": "raid_bdev1", 00:15:12.366 "uuid": "5b5c2d72-10cb-4445-87e5-b88a20dd8d65", 00:15:12.366 "strip_size_kb": 64, 00:15:12.366 "state": "online", 00:15:12.366 "raid_level": "raid0", 00:15:12.366 "superblock": true, 00:15:12.366 "num_base_bdevs": 2, 00:15:12.366 "num_base_bdevs_discovered": 2, 00:15:12.366 "num_base_bdevs_operational": 2, 00:15:12.366 "base_bdevs_list": [ 00:15:12.366 { 00:15:12.366 "name": "BaseBdev1", 00:15:12.366 "uuid": 
"6c063cfb-9c37-5b9d-baaf-bfef67e6f976", 00:15:12.366 "is_configured": true, 00:15:12.366 "data_offset": 2048, 00:15:12.366 "data_size": 63488 00:15:12.366 }, 00:15:12.367 { 00:15:12.367 "name": "BaseBdev2", 00:15:12.367 "uuid": "e58d1522-cfbc-5ec2-a143-85ac6265f7b6", 00:15:12.367 "is_configured": true, 00:15:12.367 "data_offset": 2048, 00:15:12.367 "data_size": 63488 00:15:12.367 } 00:15:12.367 ] 00:15:12.367 }' 00:15:12.367 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.367 15:09:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.626 15:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:12.884 [2024-07-23 15:09:08.159690] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.884 [2024-07-23 15:09:08.159743] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.884 [2024-07-23 15:09:08.162304] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.884 [2024-07-23 15:09:08.162356] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.884 [2024-07-23 15:09:08.162391] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.884 [2024-07-23 15:09:08.162403] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name raid_bdev1, state offline 00:15:12.884 0 00:15:12.884 15:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 88295 00:15:12.884 15:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 88295 ']' 00:15:12.884 15:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 88295 00:15:12.884 15:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:12.884 15:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.885 15:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88295 00:15:12.885 15:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:12.885 15:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:12.885 killing process with pid 88295 00:15:12.885 15:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88295' 00:15:12.885 15:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 88295 00:15:12.885 [2024-07-23 15:09:08.212156] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.885 15:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 88295 00:15:12.885 [2024-07-23 15:09:08.227415] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.143 15:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:13.143 15:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.d3pxpaF4t7 00:15:13.143 15:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:13.143 15:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:15:13.143 15:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- 
# has_redundancy raid0 00:15:13.143 15:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:13.143 15:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:13.143 15:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:15:13.143 00:15:13.143 real 0m4.914s 00:15:13.143 user 0m7.551s 00:15:13.143 sys 0m0.919s 00:15:13.143 15:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.143 15:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.143 ************************************ 00:15:13.143 END TEST raid_read_error_test 00:15:13.143 ************************************ 00:15:13.143 15:09:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:13.143 15:09:08 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:15:13.143 15:09:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:13.143 15:09:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.143 15:09:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.143 ************************************ 00:15:13.143 START TEST raid_write_error_test 00:15:13.143 ************************************ 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:13.143 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # 
strip_size=64 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.SmhqmmxyXh 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=88453 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 88453 /var/tmp/spdk-raid.sock 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 88453 ']' 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.144 15:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.402 [2024-07-23 15:09:08.620594] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:15:13.402 [2024-07-23 15:09:08.620825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88453 ] 00:15:13.402 [2024-07-23 15:09:08.774053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.402 [2024-07-23 15:09:08.817223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.660 [2024-07-23 15:09:08.861502] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.227 15:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.227 15:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:14.227 15:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:14.227 15:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:14.485 BaseBdev1_malloc 00:15:14.485 15:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:14.485 true 00:15:14.485 15:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:14.744 [2024-07-23 15:09:10.052700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:14.744 [2024-07-23 
15:09:10.053009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.744 [2024-07-23 15:09:10.053060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:15:14.744 [2024-07-23 15:09:10.053074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.744 [2024-07-23 15:09:10.055886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.744 [2024-07-23 15:09:10.055926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.744 BaseBdev1 00:15:14.744 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:14.744 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:15.003 BaseBdev2_malloc 00:15:15.003 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:15.262 true 00:15:15.262 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:15.521 [2024-07-23 15:09:10.730219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:15.521 [2024-07-23 15:09:10.730435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.521 [2024-07-23 15:09:10.730508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:15:15.521 [2024-07-23 15:09:10.730593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.521 [2024-07-23 15:09:10.733285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.521 [2024-07-23 15:09:10.733426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:15.521 BaseBdev2 00:15:15.521 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:15.780 [2024-07-23 15:09:10.962315] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.780 [2024-07-23 15:09:10.964577] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.780 [2024-07-23 15:09:10.964801] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:15:15.780 [2024-07-23 15:09:10.964822] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:15.780 [2024-07-23 15:09:10.964935] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:15:15.780 [2024-07-23 15:09:10.965457] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:15:15.780 [2024-07-23 15:09:10.965473] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007280 00:15:15.780 [2024-07-23 15:09:10.965610] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.780 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:15.780 
15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:15.780 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:15.780 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:15.780 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:15.780 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:15.780 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.780 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:15.780 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.780 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.780 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.780 15:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.780 15:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:15.780 "name": "raid_bdev1", 00:15:15.780 "uuid": "7b37016b-1022-4591-9a52-caf3e06adf5f", 00:15:15.780 "strip_size_kb": 64, 00:15:15.780 "state": "online", 00:15:15.780 "raid_level": "raid0", 00:15:15.780 "superblock": true, 00:15:15.780 "num_base_bdevs": 2, 00:15:15.780 "num_base_bdevs_discovered": 2, 00:15:15.780 "num_base_bdevs_operational": 2, 00:15:15.780 "base_bdevs_list": [ 00:15:15.780 { 00:15:15.780 "name": "BaseBdev1", 00:15:15.780 "uuid": "b879cfe8-d980-5568-a9d0-04fd782a9e15", 00:15:15.780 "is_configured": true, 00:15:15.780 "data_offset": 2048, 00:15:15.780 "data_size": 63488 00:15:15.780 }, 00:15:15.780 { 00:15:15.780 "name": "BaseBdev2", 00:15:15.780 "uuid": "f7456a8a-ec15-596c-9d49-bbcaf9cf84e6", 00:15:15.780 "is_configured": true, 00:15:15.780 "data_offset": 2048, 00:15:15.780 "data_size": 63488 00:15:15.780 } 00:15:15.780 ] 00:15:15.780 }' 00:15:15.780 15:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:15.780 15:09:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.377 15:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:16.377 15:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:16.378 [2024-07-23 15:09:11.558861] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:17.315 
15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.315 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.574 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:17.574 "name": "raid_bdev1", 00:15:17.574 "uuid": "7b37016b-1022-4591-9a52-caf3e06adf5f", 00:15:17.574 "strip_size_kb": 64, 00:15:17.574 "state": "online", 00:15:17.574 "raid_level": "raid0", 00:15:17.574 "superblock": true, 00:15:17.574 "num_base_bdevs": 2, 00:15:17.574 "num_base_bdevs_discovered": 2, 00:15:17.574 "num_base_bdevs_operational": 2, 00:15:17.574 "base_bdevs_list": [ 00:15:17.574 { 00:15:17.574 "name": "BaseBdev1", 00:15:17.574 "uuid": "b879cfe8-d980-5568-a9d0-04fd782a9e15", 00:15:17.574 "is_configured": true, 00:15:17.574 "data_offset": 2048, 00:15:17.574 "data_size": 63488 00:15:17.574 }, 00:15:17.574 { 00:15:17.574 "name": "BaseBdev2", 00:15:17.574 "uuid": "f7456a8a-ec15-596c-9d49-bbcaf9cf84e6", 00:15:17.574 "is_configured": true, 00:15:17.574 "data_offset": 2048, 00:15:17.574 "data_size": 63488 00:15:17.574 } 00:15:17.574 ] 00:15:17.574 }' 00:15:17.574 15:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:17.574 15:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.833 15:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:18.095 [2024-07-23 15:09:13.384858] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.095 [2024-07-23 15:09:13.385150] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.095 [2024-07-23 15:09:13.387561] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.095 [2024-07-23 15:09:13.387607] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.095 [2024-07-23 15:09:13.387647] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.095 [2024-07-23 15:09:13.387659] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name raid_bdev1, state offline 00:15:18.095 0 00:15:18.095 15:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 88453 00:15:18.095 15:09:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 88453 ']' 00:15:18.095 15:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 88453 00:15:18.095 15:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:18.095 15:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:18.095 15:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88453 00:15:18.095 killing process with pid 88453 00:15:18.095 15:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:18.095 15:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:18.095 15:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88453' 00:15:18.095 15:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 88453 00:15:18.095 [2024-07-23 15:09:13.457126] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:18.095 15:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 88453 00:15:18.095 [2024-07-23 15:09:13.472319] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.355 15:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.SmhqmmxyXh 00:15:18.355 15:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:18.355 15:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:18.355 15:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.55 00:15:18.355 15:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:15:18.355 15:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:18.355 15:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:18.355 15:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.55 != \0\.\0\0 ]] 00:15:18.355 00:15:18.355 real 0m5.176s 00:15:18.355 user 0m7.632s 00:15:18.355 sys 0m0.921s 00:15:18.355 15:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:18.355 ************************************ 00:15:18.355 END TEST raid_write_error_test 00:15:18.355 ************************************ 00:15:18.355 15:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.355 15:09:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:18.355 15:09:13 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:18.355 15:09:13 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:18.355 15:09:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:18.355 15:09:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:18.355 15:09:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.355 ************************************ 00:15:18.355 START TEST raid_state_function_test 00:15:18.355 ************************************ 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local 
raid_level=concat 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:18.355 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:18.614 Process raid pid: 88606 00:15:18.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
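
Both error tests that finish above build their RAID members the same way: each base bdev is a malloc bdev wrapped in an error bdev (the EE_* name) and exposed through a passthru bdev, a raid0 volume with a 64 KiB strip is created on top, an error is injected into the first member while the bdevperf job runs, and the failure rate is read back out of the bdevperf log. Condensed into the RPC calls that appear in the trace (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, $bdevperf_log is the per-run mktemp name such as /raidtest/tmp.d3pxpaF4t7, and the & on perform_tests is an assumption inferred from the interleaved trace order):

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # ...BaseBdev2 is built from BaseBdev2_malloc with the same three calls...
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    sleep 1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure   # write failure in the write test
    fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s != \0\.\0\0 ]]   # raid0 has no redundancy, so injected errors must surface as failed I/O

The RAID state is checked at each step by dumping the bdev list over RPC and filtering it with jq, exactly as the verify_raid_bdev_state calls above do:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
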
00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=88606 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 88606' 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 88606 /var/tmp/spdk-raid.sock 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 88606 ']' 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.614 15:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.614 [2024-07-23 15:09:13.854149] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:15:18.614 [2024-07-23 15:09:13.854612] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.614 [2024-07-23 15:09:14.007520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.873 [2024-07-23 15:09:14.056262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.873 [2024-07-23 15:09:14.102315] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.441 15:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.441 15:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:15:19.441 15:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:19.700 [2024-07-23 15:09:14.948827] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.700 [2024-07-23 15:09:14.949080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.700 [2024-07-23 15:09:14.949239] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.700 [2024-07-23 15:09:14.949267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.700 15:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:19.700 15:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:19.700 15:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:19.700 15:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:19.700 15:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:19.700 15:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:19.700 15:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:19.700 15:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:19.700 15:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:19.700 15:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:19.700 15:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.700 15:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.958 15:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.958 "name": "Existed_Raid", 00:15:19.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.958 "strip_size_kb": 64, 00:15:19.958 "state": "configuring", 00:15:19.958 "raid_level": "concat", 00:15:19.958 "superblock": false, 00:15:19.958 "num_base_bdevs": 2, 00:15:19.958 "num_base_bdevs_discovered": 0, 00:15:19.958 "num_base_bdevs_operational": 2, 00:15:19.958 
"base_bdevs_list": [ 00:15:19.958 { 00:15:19.958 "name": "BaseBdev1", 00:15:19.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.958 "is_configured": false, 00:15:19.958 "data_offset": 0, 00:15:19.958 "data_size": 0 00:15:19.958 }, 00:15:19.958 { 00:15:19.958 "name": "BaseBdev2", 00:15:19.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.958 "is_configured": false, 00:15:19.958 "data_offset": 0, 00:15:19.958 "data_size": 0 00:15:19.958 } 00:15:19.958 ] 00:15:19.958 }' 00:15:19.958 15:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.958 15:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.217 15:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:20.217 [2024-07-23 15:09:15.644841] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:20.217 [2024-07-23 15:09:15.645098] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:15:20.475 15:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:20.475 [2024-07-23 15:09:15.828923] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:20.475 [2024-07-23 15:09:15.829173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:20.475 [2024-07-23 15:09:15.829257] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:20.475 [2024-07-23 15:09:15.829301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:20.475 15:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:20.733 [2024-07-23 15:09:16.010413] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.733 BaseBdev1 00:15:20.733 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:20.733 15:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:20.733 15:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:20.733 15:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:20.733 15:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:20.733 15:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:20.733 15:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:20.991 15:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:20.991 [ 00:15:20.991 { 00:15:20.991 "name": "BaseBdev1", 00:15:20.991 "aliases": [ 00:15:20.991 "d189ffe0-ff86-41fa-941d-0c37933d5c68" 00:15:20.991 ], 00:15:20.991 "product_name": "Malloc disk", 00:15:20.991 "block_size": 512, 
00:15:20.991 "num_blocks": 65536, 00:15:20.991 "uuid": "d189ffe0-ff86-41fa-941d-0c37933d5c68", 00:15:20.991 "assigned_rate_limits": { 00:15:20.991 "rw_ios_per_sec": 0, 00:15:20.991 "rw_mbytes_per_sec": 0, 00:15:20.991 "r_mbytes_per_sec": 0, 00:15:20.991 "w_mbytes_per_sec": 0 00:15:20.991 }, 00:15:20.991 "claimed": true, 00:15:20.991 "claim_type": "exclusive_write", 00:15:20.991 "zoned": false, 00:15:20.991 "supported_io_types": { 00:15:20.991 "read": true, 00:15:20.991 "write": true, 00:15:20.991 "unmap": true, 00:15:20.991 "flush": true, 00:15:20.992 "reset": true, 00:15:20.992 "nvme_admin": false, 00:15:20.992 "nvme_io": false, 00:15:20.992 "nvme_io_md": false, 00:15:20.992 "write_zeroes": true, 00:15:20.992 "zcopy": true, 00:15:20.992 "get_zone_info": false, 00:15:20.992 "zone_management": false, 00:15:20.992 "zone_append": false, 00:15:20.992 "compare": false, 00:15:20.992 "compare_and_write": false, 00:15:20.992 "abort": true, 00:15:20.992 "seek_hole": false, 00:15:20.992 "seek_data": false, 00:15:20.992 "copy": true, 00:15:20.992 "nvme_iov_md": false 00:15:20.992 }, 00:15:20.992 "memory_domains": [ 00:15:20.992 { 00:15:20.992 "dma_device_id": "system", 00:15:20.992 "dma_device_type": 1 00:15:20.992 }, 00:15:20.992 { 00:15:20.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.992 "dma_device_type": 2 00:15:20.992 } 00:15:20.992 ], 00:15:20.992 "driver_specific": {} 00:15:20.992 } 00:15:20.992 ] 00:15:20.992 15:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:20.992 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:20.992 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:20.992 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:20.992 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:20.992 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:20.992 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:20.992 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.992 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.992 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.992 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.992 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.992 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.250 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.250 "name": "Existed_Raid", 00:15:21.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.250 "strip_size_kb": 64, 00:15:21.250 "state": "configuring", 00:15:21.250 "raid_level": "concat", 00:15:21.250 "superblock": false, 00:15:21.250 "num_base_bdevs": 2, 00:15:21.250 "num_base_bdevs_discovered": 1, 00:15:21.251 "num_base_bdevs_operational": 2, 00:15:21.251 "base_bdevs_list": [ 00:15:21.251 { 00:15:21.251 "name": 
"BaseBdev1", 00:15:21.251 "uuid": "d189ffe0-ff86-41fa-941d-0c37933d5c68", 00:15:21.251 "is_configured": true, 00:15:21.251 "data_offset": 0, 00:15:21.251 "data_size": 65536 00:15:21.251 }, 00:15:21.251 { 00:15:21.251 "name": "BaseBdev2", 00:15:21.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.251 "is_configured": false, 00:15:21.251 "data_offset": 0, 00:15:21.251 "data_size": 0 00:15:21.251 } 00:15:21.251 ] 00:15:21.251 }' 00:15:21.251 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.251 15:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.509 15:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:21.767 [2024-07-23 15:09:17.038726] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.767 [2024-07-23 15:09:17.038816] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:15:21.767 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:22.026 [2024-07-23 15:09:17.222835] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.026 [2024-07-23 15:09:17.225093] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.026 [2024-07-23 15:09:17.225149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.026 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:22.026 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:22.026 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:22.026 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:22.026 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:22.026 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:22.026 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.026 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:22.026 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.026 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.026 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.026 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.026 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.026 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.285 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.285 "name": "Existed_Raid", 
00:15:22.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.285 "strip_size_kb": 64, 00:15:22.285 "state": "configuring", 00:15:22.285 "raid_level": "concat", 00:15:22.285 "superblock": false, 00:15:22.285 "num_base_bdevs": 2, 00:15:22.285 "num_base_bdevs_discovered": 1, 00:15:22.285 "num_base_bdevs_operational": 2, 00:15:22.285 "base_bdevs_list": [ 00:15:22.285 { 00:15:22.285 "name": "BaseBdev1", 00:15:22.285 "uuid": "d189ffe0-ff86-41fa-941d-0c37933d5c68", 00:15:22.285 "is_configured": true, 00:15:22.285 "data_offset": 0, 00:15:22.285 "data_size": 65536 00:15:22.285 }, 00:15:22.285 { 00:15:22.285 "name": "BaseBdev2", 00:15:22.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.285 "is_configured": false, 00:15:22.285 "data_offset": 0, 00:15:22.285 "data_size": 0 00:15:22.285 } 00:15:22.285 ] 00:15:22.285 }' 00:15:22.285 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.285 15:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.543 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:22.543 [2024-07-23 15:09:17.939638] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.543 [2024-07-23 15:09:17.939991] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:15:22.543 [2024-07-23 15:09:17.940052] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:22.543 [2024-07-23 15:09:17.940310] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:15:22.543 [2024-07-23 15:09:17.940895] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:15:22.543 [2024-07-23 15:09:17.941056] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:15:22.543 [2024-07-23 15:09:17.941458] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.543 BaseBdev2 00:15:22.543 15:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:22.543 15:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:22.543 15:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:22.543 15:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:22.543 15:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:22.543 15:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:22.543 15:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:22.801 15:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:23.060 [ 00:15:23.060 { 00:15:23.060 "name": "BaseBdev2", 00:15:23.060 "aliases": [ 00:15:23.060 "61f09d4e-da04-422b-9e1b-bfba656a0b57" 00:15:23.060 ], 00:15:23.060 "product_name": "Malloc disk", 00:15:23.060 "block_size": 512, 00:15:23.060 "num_blocks": 65536, 00:15:23.060 "uuid": "61f09d4e-da04-422b-9e1b-bfba656a0b57", 
00:15:23.060 "assigned_rate_limits": { 00:15:23.060 "rw_ios_per_sec": 0, 00:15:23.060 "rw_mbytes_per_sec": 0, 00:15:23.060 "r_mbytes_per_sec": 0, 00:15:23.060 "w_mbytes_per_sec": 0 00:15:23.060 }, 00:15:23.060 "claimed": true, 00:15:23.060 "claim_type": "exclusive_write", 00:15:23.060 "zoned": false, 00:15:23.060 "supported_io_types": { 00:15:23.060 "read": true, 00:15:23.060 "write": true, 00:15:23.060 "unmap": true, 00:15:23.060 "flush": true, 00:15:23.060 "reset": true, 00:15:23.060 "nvme_admin": false, 00:15:23.060 "nvme_io": false, 00:15:23.060 "nvme_io_md": false, 00:15:23.060 "write_zeroes": true, 00:15:23.060 "zcopy": true, 00:15:23.060 "get_zone_info": false, 00:15:23.060 "zone_management": false, 00:15:23.060 "zone_append": false, 00:15:23.060 "compare": false, 00:15:23.060 "compare_and_write": false, 00:15:23.060 "abort": true, 00:15:23.060 "seek_hole": false, 00:15:23.060 "seek_data": false, 00:15:23.060 "copy": true, 00:15:23.060 "nvme_iov_md": false 00:15:23.060 }, 00:15:23.060 "memory_domains": [ 00:15:23.060 { 00:15:23.060 "dma_device_id": "system", 00:15:23.060 "dma_device_type": 1 00:15:23.060 }, 00:15:23.060 { 00:15:23.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.060 "dma_device_type": 2 00:15:23.060 } 00:15:23.060 ], 00:15:23.060 "driver_specific": {} 00:15:23.060 } 00:15:23.060 ] 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.060 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.319 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.319 "name": "Existed_Raid", 00:15:23.319 "uuid": "93655cf4-9c40-4b01-bcc6-343263343f44", 00:15:23.319 "strip_size_kb": 64, 00:15:23.319 "state": "online", 00:15:23.319 "raid_level": "concat", 00:15:23.319 "superblock": false, 00:15:23.319 "num_base_bdevs": 2, 00:15:23.319 "num_base_bdevs_discovered": 2, 00:15:23.319 
"num_base_bdevs_operational": 2, 00:15:23.319 "base_bdevs_list": [ 00:15:23.319 { 00:15:23.319 "name": "BaseBdev1", 00:15:23.319 "uuid": "d189ffe0-ff86-41fa-941d-0c37933d5c68", 00:15:23.319 "is_configured": true, 00:15:23.319 "data_offset": 0, 00:15:23.319 "data_size": 65536 00:15:23.319 }, 00:15:23.319 { 00:15:23.319 "name": "BaseBdev2", 00:15:23.319 "uuid": "61f09d4e-da04-422b-9e1b-bfba656a0b57", 00:15:23.319 "is_configured": true, 00:15:23.319 "data_offset": 0, 00:15:23.319 "data_size": 65536 00:15:23.319 } 00:15:23.319 ] 00:15:23.319 }' 00:15:23.319 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.319 15:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.577 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:23.577 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:23.577 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:23.577 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:23.577 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:23.577 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:23.577 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:23.577 15:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:23.854 [2024-07-23 15:09:19.168289] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.854 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:23.854 "name": "Existed_Raid", 00:15:23.854 "aliases": [ 00:15:23.854 "93655cf4-9c40-4b01-bcc6-343263343f44" 00:15:23.854 ], 00:15:23.854 "product_name": "Raid Volume", 00:15:23.854 "block_size": 512, 00:15:23.854 "num_blocks": 131072, 00:15:23.854 "uuid": "93655cf4-9c40-4b01-bcc6-343263343f44", 00:15:23.854 "assigned_rate_limits": { 00:15:23.854 "rw_ios_per_sec": 0, 00:15:23.854 "rw_mbytes_per_sec": 0, 00:15:23.854 "r_mbytes_per_sec": 0, 00:15:23.854 "w_mbytes_per_sec": 0 00:15:23.854 }, 00:15:23.854 "claimed": false, 00:15:23.854 "zoned": false, 00:15:23.854 "supported_io_types": { 00:15:23.854 "read": true, 00:15:23.854 "write": true, 00:15:23.854 "unmap": true, 00:15:23.854 "flush": true, 00:15:23.854 "reset": true, 00:15:23.854 "nvme_admin": false, 00:15:23.854 "nvme_io": false, 00:15:23.854 "nvme_io_md": false, 00:15:23.854 "write_zeroes": true, 00:15:23.854 "zcopy": false, 00:15:23.854 "get_zone_info": false, 00:15:23.854 "zone_management": false, 00:15:23.854 "zone_append": false, 00:15:23.854 "compare": false, 00:15:23.854 "compare_and_write": false, 00:15:23.854 "abort": false, 00:15:23.854 "seek_hole": false, 00:15:23.854 "seek_data": false, 00:15:23.854 "copy": false, 00:15:23.854 "nvme_iov_md": false 00:15:23.854 }, 00:15:23.854 "memory_domains": [ 00:15:23.854 { 00:15:23.854 "dma_device_id": "system", 00:15:23.854 "dma_device_type": 1 00:15:23.854 }, 00:15:23.854 { 00:15:23.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.854 "dma_device_type": 2 00:15:23.854 }, 00:15:23.854 { 00:15:23.854 "dma_device_id": "system", 00:15:23.854 "dma_device_type": 1 00:15:23.854 }, 
00:15:23.854 { 00:15:23.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.854 "dma_device_type": 2 00:15:23.854 } 00:15:23.854 ], 00:15:23.854 "driver_specific": { 00:15:23.854 "raid": { 00:15:23.854 "uuid": "93655cf4-9c40-4b01-bcc6-343263343f44", 00:15:23.854 "strip_size_kb": 64, 00:15:23.854 "state": "online", 00:15:23.854 "raid_level": "concat", 00:15:23.854 "superblock": false, 00:15:23.854 "num_base_bdevs": 2, 00:15:23.854 "num_base_bdevs_discovered": 2, 00:15:23.854 "num_base_bdevs_operational": 2, 00:15:23.854 "base_bdevs_list": [ 00:15:23.854 { 00:15:23.854 "name": "BaseBdev1", 00:15:23.854 "uuid": "d189ffe0-ff86-41fa-941d-0c37933d5c68", 00:15:23.854 "is_configured": true, 00:15:23.854 "data_offset": 0, 00:15:23.854 "data_size": 65536 00:15:23.854 }, 00:15:23.854 { 00:15:23.854 "name": "BaseBdev2", 00:15:23.854 "uuid": "61f09d4e-da04-422b-9e1b-bfba656a0b57", 00:15:23.854 "is_configured": true, 00:15:23.854 "data_offset": 0, 00:15:23.854 "data_size": 65536 00:15:23.854 } 00:15:23.854 ] 00:15:23.854 } 00:15:23.854 } 00:15:23.854 }' 00:15:23.854 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.854 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:23.854 BaseBdev2' 00:15:23.855 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:23.855 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:23.855 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:24.129 "name": "BaseBdev1", 00:15:24.129 "aliases": [ 00:15:24.129 "d189ffe0-ff86-41fa-941d-0c37933d5c68" 00:15:24.129 ], 00:15:24.129 "product_name": "Malloc disk", 00:15:24.129 "block_size": 512, 00:15:24.129 "num_blocks": 65536, 00:15:24.129 "uuid": "d189ffe0-ff86-41fa-941d-0c37933d5c68", 00:15:24.129 "assigned_rate_limits": { 00:15:24.129 "rw_ios_per_sec": 0, 00:15:24.129 "rw_mbytes_per_sec": 0, 00:15:24.129 "r_mbytes_per_sec": 0, 00:15:24.129 "w_mbytes_per_sec": 0 00:15:24.129 }, 00:15:24.129 "claimed": true, 00:15:24.129 "claim_type": "exclusive_write", 00:15:24.129 "zoned": false, 00:15:24.129 "supported_io_types": { 00:15:24.129 "read": true, 00:15:24.129 "write": true, 00:15:24.129 "unmap": true, 00:15:24.129 "flush": true, 00:15:24.129 "reset": true, 00:15:24.129 "nvme_admin": false, 00:15:24.129 "nvme_io": false, 00:15:24.129 "nvme_io_md": false, 00:15:24.129 "write_zeroes": true, 00:15:24.129 "zcopy": true, 00:15:24.129 "get_zone_info": false, 00:15:24.129 "zone_management": false, 00:15:24.129 "zone_append": false, 00:15:24.129 "compare": false, 00:15:24.129 "compare_and_write": false, 00:15:24.129 "abort": true, 00:15:24.129 "seek_hole": false, 00:15:24.129 "seek_data": false, 00:15:24.129 "copy": true, 00:15:24.129 "nvme_iov_md": false 00:15:24.129 }, 00:15:24.129 "memory_domains": [ 00:15:24.129 { 00:15:24.129 "dma_device_id": "system", 00:15:24.129 "dma_device_type": 1 00:15:24.129 }, 00:15:24.129 { 00:15:24.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.129 "dma_device_type": 2 00:15:24.129 } 00:15:24.129 ], 00:15:24.129 "driver_specific": {} 00:15:24.129 }' 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:24.129 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:24.387 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:24.387 "name": "BaseBdev2", 00:15:24.387 "aliases": [ 00:15:24.387 "61f09d4e-da04-422b-9e1b-bfba656a0b57" 00:15:24.387 ], 00:15:24.387 "product_name": "Malloc disk", 00:15:24.387 "block_size": 512, 00:15:24.387 "num_blocks": 65536, 00:15:24.387 "uuid": "61f09d4e-da04-422b-9e1b-bfba656a0b57", 00:15:24.387 "assigned_rate_limits": { 00:15:24.387 "rw_ios_per_sec": 0, 00:15:24.387 "rw_mbytes_per_sec": 0, 00:15:24.387 "r_mbytes_per_sec": 0, 00:15:24.387 "w_mbytes_per_sec": 0 00:15:24.387 }, 00:15:24.387 "claimed": true, 00:15:24.387 "claim_type": "exclusive_write", 00:15:24.387 "zoned": false, 00:15:24.387 "supported_io_types": { 00:15:24.387 "read": true, 00:15:24.387 "write": true, 00:15:24.387 "unmap": true, 00:15:24.387 "flush": true, 00:15:24.387 "reset": true, 00:15:24.387 "nvme_admin": false, 00:15:24.387 "nvme_io": false, 00:15:24.387 "nvme_io_md": false, 00:15:24.387 "write_zeroes": true, 00:15:24.387 "zcopy": true, 00:15:24.387 "get_zone_info": false, 00:15:24.387 "zone_management": false, 00:15:24.387 "zone_append": false, 00:15:24.387 "compare": false, 00:15:24.387 "compare_and_write": false, 00:15:24.387 "abort": true, 00:15:24.387 "seek_hole": false, 00:15:24.387 "seek_data": false, 00:15:24.387 "copy": true, 00:15:24.387 "nvme_iov_md": false 00:15:24.387 }, 00:15:24.387 "memory_domains": [ 00:15:24.387 { 00:15:24.387 "dma_device_id": "system", 00:15:24.387 "dma_device_type": 1 00:15:24.387 }, 00:15:24.387 { 00:15:24.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.387 "dma_device_type": 2 00:15:24.387 } 00:15:24.387 ], 00:15:24.387 "driver_specific": {} 00:15:24.387 }' 00:15:24.387 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:24.387 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:24.646 15:09:19 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:24.646 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:24.646 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:24.646 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:24.646 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:24.646 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:24.646 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:24.646 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:24.646 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:24.646 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:24.646 15:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:24.904 [2024-07-23 15:09:20.136353] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:24.904 [2024-07-23 15:09:20.136583] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.904 [2024-07-23 15:09:20.136674] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.904 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.163 15:09:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.163 "name": "Existed_Raid", 00:15:25.163 "uuid": "93655cf4-9c40-4b01-bcc6-343263343f44", 00:15:25.163 "strip_size_kb": 64, 00:15:25.163 "state": "offline", 00:15:25.163 "raid_level": "concat", 00:15:25.163 "superblock": false, 00:15:25.163 "num_base_bdevs": 2, 00:15:25.163 "num_base_bdevs_discovered": 1, 00:15:25.163 "num_base_bdevs_operational": 1, 00:15:25.163 "base_bdevs_list": [ 00:15:25.163 { 00:15:25.163 "name": null, 00:15:25.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.163 "is_configured": false, 00:15:25.163 "data_offset": 0, 00:15:25.163 "data_size": 65536 00:15:25.163 }, 00:15:25.163 { 00:15:25.163 "name": "BaseBdev2", 00:15:25.163 "uuid": "61f09d4e-da04-422b-9e1b-bfba656a0b57", 00:15:25.163 "is_configured": true, 00:15:25.163 "data_offset": 0, 00:15:25.163 "data_size": 65536 00:15:25.163 } 00:15:25.163 ] 00:15:25.163 }' 00:15:25.163 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.163 15:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.422 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:25.422 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:25.422 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.422 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:25.680 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:25.680 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:25.680 15:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:25.939 [2024-07-23 15:09:21.161331] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.939 [2024-07-23 15:09:21.161417] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:15:25.939 15:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:25.939 15:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:25.939 15:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.939 15:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:26.198 15:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:26.198 15:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:26.198 15:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:26.198 15:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 88606 00:15:26.198 15:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 88606 ']' 00:15:26.198 15:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 88606 00:15:26.198 15:09:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@953 -- # uname 00:15:26.198 15:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:26.198 15:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88606 00:15:26.198 killing process with pid 88606 00:15:26.198 15:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:26.198 15:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:26.198 15:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88606' 00:15:26.198 15:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 88606 00:15:26.198 [2024-07-23 15:09:21.477062] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.198 15:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 88606 00:15:26.198 [2024-07-23 15:09:21.477142] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:26.457 00:15:26.457 real 0m7.933s 00:15:26.457 user 0m13.312s 00:15:26.457 sys 0m1.687s 00:15:26.457 ************************************ 00:15:26.457 END TEST raid_state_function_test 00:15:26.457 ************************************ 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.457 15:09:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:26.457 15:09:21 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:26.457 15:09:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:26.457 15:09:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:26.457 15:09:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:26.457 ************************************ 00:15:26.457 START TEST raid_state_function_test_sb 00:15:26.457 ************************************ 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:26.457 15:09:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:26.457 Process raid pid: 88930 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=88930 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 88930' 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 88930 /var/tmp/spdk-raid.sock 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 88930 ']' 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.457 15:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.457 [2024-07-23 15:09:21.838140] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
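The superblock flavour of the test that starts here (raid_state_function_test_sb, pid 88930) follows the same RPC choreography as the previous test; the operative difference is the -s flag on bdev_raid_create and its effect on the base bdev layout. Both command lines below are taken from the trace, and the layout numbers can be checked against the JSON dumps that follow.

  # Without superblock (raid_state_function_test): the whole base bdev is data.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # With superblock (raid_state_function_test_sb): metadata is reserved at the front of each base bdev.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

With -s, each 65536-block malloc base reports data_offset 2048 and data_size 63488 instead of 0 and 65536, and the assembled concat volume shrinks accordingly (blockcnt 126976 rather than 131072).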
00:15:26.457 [2024-07-23 15:09:21.838281] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.716 [2024-07-23 15:09:21.979234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.716 [2024-07-23 15:09:22.026898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.716 [2024-07-23 15:09:22.071504] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:27.284 15:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.284 15:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:27.284 15:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:27.543 [2024-07-23 15:09:22.853230] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.543 [2024-07-23 15:09:22.853306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.543 [2024-07-23 15:09:22.853319] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.543 [2024-07-23 15:09:22.853333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.543 15:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:27.543 15:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:27.543 15:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:27.543 15:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:27.543 15:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:27.543 15:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:27.543 15:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.543 15:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.543 15:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.543 15:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.543 15:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.543 15:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.802 15:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.802 "name": "Existed_Raid", 00:15:27.802 "uuid": "c75df116-8904-459d-b7f8-edab6978bdf1", 00:15:27.802 "strip_size_kb": 64, 00:15:27.802 "state": "configuring", 00:15:27.802 "raid_level": "concat", 00:15:27.802 "superblock": true, 00:15:27.802 "num_base_bdevs": 2, 00:15:27.802 "num_base_bdevs_discovered": 0, 00:15:27.802 
"num_base_bdevs_operational": 2, 00:15:27.802 "base_bdevs_list": [ 00:15:27.802 { 00:15:27.802 "name": "BaseBdev1", 00:15:27.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.802 "is_configured": false, 00:15:27.802 "data_offset": 0, 00:15:27.802 "data_size": 0 00:15:27.802 }, 00:15:27.802 { 00:15:27.802 "name": "BaseBdev2", 00:15:27.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.802 "is_configured": false, 00:15:27.802 "data_offset": 0, 00:15:27.802 "data_size": 0 00:15:27.802 } 00:15:27.802 ] 00:15:27.802 }' 00:15:27.802 15:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.802 15:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.062 15:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:28.320 [2024-07-23 15:09:23.561259] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:28.320 [2024-07-23 15:09:23.561510] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:15:28.320 15:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:28.320 [2024-07-23 15:09:23.733345] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:28.320 [2024-07-23 15:09:23.733585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:28.320 [2024-07-23 15:09:23.733733] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:28.320 [2024-07-23 15:09:23.733760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:28.579 15:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:28.579 [2024-07-23 15:09:23.914894] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.579 BaseBdev1 00:15:28.579 15:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:28.579 15:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:28.579 15:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:28.579 15:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:28.579 15:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:28.579 15:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:28.579 15:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:28.837 15:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:29.096 [ 00:15:29.096 { 00:15:29.096 "name": "BaseBdev1", 00:15:29.096 "aliases": [ 00:15:29.096 "f265646e-e2fb-42ef-b786-6f0bfe92d50e" 
00:15:29.096 ], 00:15:29.096 "product_name": "Malloc disk", 00:15:29.096 "block_size": 512, 00:15:29.096 "num_blocks": 65536, 00:15:29.096 "uuid": "f265646e-e2fb-42ef-b786-6f0bfe92d50e", 00:15:29.096 "assigned_rate_limits": { 00:15:29.096 "rw_ios_per_sec": 0, 00:15:29.096 "rw_mbytes_per_sec": 0, 00:15:29.096 "r_mbytes_per_sec": 0, 00:15:29.096 "w_mbytes_per_sec": 0 00:15:29.096 }, 00:15:29.096 "claimed": true, 00:15:29.096 "claim_type": "exclusive_write", 00:15:29.096 "zoned": false, 00:15:29.096 "supported_io_types": { 00:15:29.096 "read": true, 00:15:29.096 "write": true, 00:15:29.096 "unmap": true, 00:15:29.096 "flush": true, 00:15:29.096 "reset": true, 00:15:29.096 "nvme_admin": false, 00:15:29.096 "nvme_io": false, 00:15:29.096 "nvme_io_md": false, 00:15:29.096 "write_zeroes": true, 00:15:29.096 "zcopy": true, 00:15:29.096 "get_zone_info": false, 00:15:29.096 "zone_management": false, 00:15:29.096 "zone_append": false, 00:15:29.096 "compare": false, 00:15:29.096 "compare_and_write": false, 00:15:29.096 "abort": true, 00:15:29.096 "seek_hole": false, 00:15:29.096 "seek_data": false, 00:15:29.096 "copy": true, 00:15:29.096 "nvme_iov_md": false 00:15:29.096 }, 00:15:29.096 "memory_domains": [ 00:15:29.096 { 00:15:29.096 "dma_device_id": "system", 00:15:29.096 "dma_device_type": 1 00:15:29.096 }, 00:15:29.096 { 00:15:29.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.096 "dma_device_type": 2 00:15:29.096 } 00:15:29.096 ], 00:15:29.096 "driver_specific": {} 00:15:29.096 } 00:15:29.096 ] 00:15:29.096 15:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:29.096 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:29.096 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:29.096 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:29.096 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:29.096 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:29.096 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:29.096 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.096 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.096 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.096 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.096 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.096 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.355 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.355 "name": "Existed_Raid", 00:15:29.355 "uuid": "a18a8131-5038-4c69-868a-92d9da0b6f8f", 00:15:29.355 "strip_size_kb": 64, 00:15:29.355 "state": "configuring", 00:15:29.355 "raid_level": "concat", 00:15:29.355 "superblock": true, 00:15:29.355 "num_base_bdevs": 2, 00:15:29.355 
"num_base_bdevs_discovered": 1, 00:15:29.355 "num_base_bdevs_operational": 2, 00:15:29.355 "base_bdevs_list": [ 00:15:29.355 { 00:15:29.355 "name": "BaseBdev1", 00:15:29.355 "uuid": "f265646e-e2fb-42ef-b786-6f0bfe92d50e", 00:15:29.355 "is_configured": true, 00:15:29.355 "data_offset": 2048, 00:15:29.355 "data_size": 63488 00:15:29.355 }, 00:15:29.355 { 00:15:29.355 "name": "BaseBdev2", 00:15:29.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.355 "is_configured": false, 00:15:29.355 "data_offset": 0, 00:15:29.355 "data_size": 0 00:15:29.355 } 00:15:29.355 ] 00:15:29.355 }' 00:15:29.355 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.355 15:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.614 15:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:29.873 [2024-07-23 15:09:25.087257] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:29.873 [2024-07-23 15:09:25.087329] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:29.873 [2024-07-23 15:09:25.271377] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.873 [2024-07-23 15:09:25.273542] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.873 [2024-07-23 15:09:25.273599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.873 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.441 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:30.441 "name": "Existed_Raid", 00:15:30.441 "uuid": "709bd6be-3c22-4e42-b820-3e9f749dfc4c", 00:15:30.441 "strip_size_kb": 64, 00:15:30.441 "state": "configuring", 00:15:30.441 "raid_level": "concat", 00:15:30.441 "superblock": true, 00:15:30.441 "num_base_bdevs": 2, 00:15:30.441 "num_base_bdevs_discovered": 1, 00:15:30.441 "num_base_bdevs_operational": 2, 00:15:30.441 "base_bdevs_list": [ 00:15:30.441 { 00:15:30.441 "name": "BaseBdev1", 00:15:30.441 "uuid": "f265646e-e2fb-42ef-b786-6f0bfe92d50e", 00:15:30.441 "is_configured": true, 00:15:30.441 "data_offset": 2048, 00:15:30.441 "data_size": 63488 00:15:30.441 }, 00:15:30.441 { 00:15:30.441 "name": "BaseBdev2", 00:15:30.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.441 "is_configured": false, 00:15:30.441 "data_offset": 0, 00:15:30.441 "data_size": 0 00:15:30.441 } 00:15:30.441 ] 00:15:30.441 }' 00:15:30.441 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:30.441 15:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.441 15:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:30.700 [2024-07-23 15:09:26.006632] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.700 [2024-07-23 15:09:26.006879] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:15:30.700 [2024-07-23 15:09:26.006912] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:30.700 [2024-07-23 15:09:26.007060] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:15:30.700 [2024-07-23 15:09:26.007500] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:15:30.700 [2024-07-23 15:09:26.007546] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:15:30.700 [2024-07-23 15:09:26.007700] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.700 BaseBdev2 00:15:30.700 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:30.700 15:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:30.700 15:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:30.700 15:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:30.700 15:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:30.700 15:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:30.700 15:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:30.960 [ 00:15:30.960 { 00:15:30.960 "name": 
"BaseBdev2", 00:15:30.960 "aliases": [ 00:15:30.960 "fa6e493e-93de-42a4-8d63-3629cb24cb36" 00:15:30.960 ], 00:15:30.960 "product_name": "Malloc disk", 00:15:30.960 "block_size": 512, 00:15:30.960 "num_blocks": 65536, 00:15:30.960 "uuid": "fa6e493e-93de-42a4-8d63-3629cb24cb36", 00:15:30.960 "assigned_rate_limits": { 00:15:30.960 "rw_ios_per_sec": 0, 00:15:30.960 "rw_mbytes_per_sec": 0, 00:15:30.960 "r_mbytes_per_sec": 0, 00:15:30.960 "w_mbytes_per_sec": 0 00:15:30.960 }, 00:15:30.960 "claimed": true, 00:15:30.960 "claim_type": "exclusive_write", 00:15:30.960 "zoned": false, 00:15:30.960 "supported_io_types": { 00:15:30.960 "read": true, 00:15:30.960 "write": true, 00:15:30.960 "unmap": true, 00:15:30.960 "flush": true, 00:15:30.960 "reset": true, 00:15:30.960 "nvme_admin": false, 00:15:30.960 "nvme_io": false, 00:15:30.960 "nvme_io_md": false, 00:15:30.960 "write_zeroes": true, 00:15:30.960 "zcopy": true, 00:15:30.960 "get_zone_info": false, 00:15:30.960 "zone_management": false, 00:15:30.960 "zone_append": false, 00:15:30.960 "compare": false, 00:15:30.960 "compare_and_write": false, 00:15:30.960 "abort": true, 00:15:30.960 "seek_hole": false, 00:15:30.960 "seek_data": false, 00:15:30.960 "copy": true, 00:15:30.960 "nvme_iov_md": false 00:15:30.960 }, 00:15:30.960 "memory_domains": [ 00:15:30.960 { 00:15:30.960 "dma_device_id": "system", 00:15:30.960 "dma_device_type": 1 00:15:30.960 }, 00:15:30.960 { 00:15:30.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.960 "dma_device_type": 2 00:15:30.960 } 00:15:30.960 ], 00:15:30.960 "driver_specific": {} 00:15:30.960 } 00:15:30.960 ] 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.960 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.218 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.218 
"name": "Existed_Raid", 00:15:31.218 "uuid": "709bd6be-3c22-4e42-b820-3e9f749dfc4c", 00:15:31.218 "strip_size_kb": 64, 00:15:31.218 "state": "online", 00:15:31.218 "raid_level": "concat", 00:15:31.218 "superblock": true, 00:15:31.218 "num_base_bdevs": 2, 00:15:31.218 "num_base_bdevs_discovered": 2, 00:15:31.218 "num_base_bdevs_operational": 2, 00:15:31.218 "base_bdevs_list": [ 00:15:31.218 { 00:15:31.218 "name": "BaseBdev1", 00:15:31.218 "uuid": "f265646e-e2fb-42ef-b786-6f0bfe92d50e", 00:15:31.218 "is_configured": true, 00:15:31.218 "data_offset": 2048, 00:15:31.218 "data_size": 63488 00:15:31.218 }, 00:15:31.218 { 00:15:31.218 "name": "BaseBdev2", 00:15:31.218 "uuid": "fa6e493e-93de-42a4-8d63-3629cb24cb36", 00:15:31.218 "is_configured": true, 00:15:31.218 "data_offset": 2048, 00:15:31.218 "data_size": 63488 00:15:31.218 } 00:15:31.218 ] 00:15:31.218 }' 00:15:31.218 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.218 15:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.827 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:31.827 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:31.827 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:31.827 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:31.827 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:31.827 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:31.827 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:31.827 15:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:31.827 [2024-07-23 15:09:27.187271] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.827 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:31.827 "name": "Existed_Raid", 00:15:31.827 "aliases": [ 00:15:31.827 "709bd6be-3c22-4e42-b820-3e9f749dfc4c" 00:15:31.827 ], 00:15:31.827 "product_name": "Raid Volume", 00:15:31.827 "block_size": 512, 00:15:31.827 "num_blocks": 126976, 00:15:31.827 "uuid": "709bd6be-3c22-4e42-b820-3e9f749dfc4c", 00:15:31.827 "assigned_rate_limits": { 00:15:31.827 "rw_ios_per_sec": 0, 00:15:31.827 "rw_mbytes_per_sec": 0, 00:15:31.827 "r_mbytes_per_sec": 0, 00:15:31.827 "w_mbytes_per_sec": 0 00:15:31.827 }, 00:15:31.827 "claimed": false, 00:15:31.827 "zoned": false, 00:15:31.827 "supported_io_types": { 00:15:31.827 "read": true, 00:15:31.827 "write": true, 00:15:31.827 "unmap": true, 00:15:31.827 "flush": true, 00:15:31.827 "reset": true, 00:15:31.827 "nvme_admin": false, 00:15:31.827 "nvme_io": false, 00:15:31.827 "nvme_io_md": false, 00:15:31.827 "write_zeroes": true, 00:15:31.827 "zcopy": false, 00:15:31.827 "get_zone_info": false, 00:15:31.827 "zone_management": false, 00:15:31.827 "zone_append": false, 00:15:31.827 "compare": false, 00:15:31.827 "compare_and_write": false, 00:15:31.827 "abort": false, 00:15:31.827 "seek_hole": false, 00:15:31.827 "seek_data": false, 00:15:31.827 "copy": false, 00:15:31.827 "nvme_iov_md": false 00:15:31.827 }, 00:15:31.827 
"memory_domains": [ 00:15:31.827 { 00:15:31.827 "dma_device_id": "system", 00:15:31.827 "dma_device_type": 1 00:15:31.827 }, 00:15:31.827 { 00:15:31.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.827 "dma_device_type": 2 00:15:31.827 }, 00:15:31.827 { 00:15:31.827 "dma_device_id": "system", 00:15:31.827 "dma_device_type": 1 00:15:31.827 }, 00:15:31.827 { 00:15:31.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.827 "dma_device_type": 2 00:15:31.827 } 00:15:31.827 ], 00:15:31.827 "driver_specific": { 00:15:31.827 "raid": { 00:15:31.827 "uuid": "709bd6be-3c22-4e42-b820-3e9f749dfc4c", 00:15:31.827 "strip_size_kb": 64, 00:15:31.827 "state": "online", 00:15:31.827 "raid_level": "concat", 00:15:31.827 "superblock": true, 00:15:31.827 "num_base_bdevs": 2, 00:15:31.827 "num_base_bdevs_discovered": 2, 00:15:31.827 "num_base_bdevs_operational": 2, 00:15:31.827 "base_bdevs_list": [ 00:15:31.827 { 00:15:31.827 "name": "BaseBdev1", 00:15:31.827 "uuid": "f265646e-e2fb-42ef-b786-6f0bfe92d50e", 00:15:31.827 "is_configured": true, 00:15:31.827 "data_offset": 2048, 00:15:31.827 "data_size": 63488 00:15:31.827 }, 00:15:31.827 { 00:15:31.827 "name": "BaseBdev2", 00:15:31.827 "uuid": "fa6e493e-93de-42a4-8d63-3629cb24cb36", 00:15:31.827 "is_configured": true, 00:15:31.827 "data_offset": 2048, 00:15:31.827 "data_size": 63488 00:15:31.827 } 00:15:31.827 ] 00:15:31.827 } 00:15:31.827 } 00:15:31.827 }' 00:15:31.827 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.827 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:31.827 BaseBdev2' 00:15:31.828 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:31.828 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:31.828 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:32.087 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.087 "name": "BaseBdev1", 00:15:32.087 "aliases": [ 00:15:32.087 "f265646e-e2fb-42ef-b786-6f0bfe92d50e" 00:15:32.087 ], 00:15:32.087 "product_name": "Malloc disk", 00:15:32.087 "block_size": 512, 00:15:32.087 "num_blocks": 65536, 00:15:32.087 "uuid": "f265646e-e2fb-42ef-b786-6f0bfe92d50e", 00:15:32.087 "assigned_rate_limits": { 00:15:32.087 "rw_ios_per_sec": 0, 00:15:32.087 "rw_mbytes_per_sec": 0, 00:15:32.087 "r_mbytes_per_sec": 0, 00:15:32.087 "w_mbytes_per_sec": 0 00:15:32.087 }, 00:15:32.087 "claimed": true, 00:15:32.087 "claim_type": "exclusive_write", 00:15:32.087 "zoned": false, 00:15:32.087 "supported_io_types": { 00:15:32.087 "read": true, 00:15:32.087 "write": true, 00:15:32.087 "unmap": true, 00:15:32.087 "flush": true, 00:15:32.087 "reset": true, 00:15:32.087 "nvme_admin": false, 00:15:32.087 "nvme_io": false, 00:15:32.087 "nvme_io_md": false, 00:15:32.087 "write_zeroes": true, 00:15:32.087 "zcopy": true, 00:15:32.087 "get_zone_info": false, 00:15:32.087 "zone_management": false, 00:15:32.087 "zone_append": false, 00:15:32.087 "compare": false, 00:15:32.087 "compare_and_write": false, 00:15:32.087 "abort": true, 00:15:32.087 "seek_hole": false, 00:15:32.087 "seek_data": false, 00:15:32.087 "copy": true, 00:15:32.087 "nvme_iov_md": false 00:15:32.087 }, 00:15:32.087 
"memory_domains": [ 00:15:32.087 { 00:15:32.087 "dma_device_id": "system", 00:15:32.087 "dma_device_type": 1 00:15:32.087 }, 00:15:32.087 { 00:15:32.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.087 "dma_device_type": 2 00:15:32.087 } 00:15:32.087 ], 00:15:32.087 "driver_specific": {} 00:15:32.087 }' 00:15:32.087 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.087 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.087 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:32.087 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.087 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.087 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:32.087 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.346 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.346 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.346 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.346 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.346 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.346 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:32.346 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:32.346 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:32.346 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.346 "name": "BaseBdev2", 00:15:32.346 "aliases": [ 00:15:32.346 "fa6e493e-93de-42a4-8d63-3629cb24cb36" 00:15:32.346 ], 00:15:32.346 "product_name": "Malloc disk", 00:15:32.346 "block_size": 512, 00:15:32.346 "num_blocks": 65536, 00:15:32.346 "uuid": "fa6e493e-93de-42a4-8d63-3629cb24cb36", 00:15:32.346 "assigned_rate_limits": { 00:15:32.346 "rw_ios_per_sec": 0, 00:15:32.346 "rw_mbytes_per_sec": 0, 00:15:32.346 "r_mbytes_per_sec": 0, 00:15:32.346 "w_mbytes_per_sec": 0 00:15:32.346 }, 00:15:32.346 "claimed": true, 00:15:32.346 "claim_type": "exclusive_write", 00:15:32.346 "zoned": false, 00:15:32.346 "supported_io_types": { 00:15:32.346 "read": true, 00:15:32.346 "write": true, 00:15:32.346 "unmap": true, 00:15:32.346 "flush": true, 00:15:32.346 "reset": true, 00:15:32.346 "nvme_admin": false, 00:15:32.346 "nvme_io": false, 00:15:32.346 "nvme_io_md": false, 00:15:32.346 "write_zeroes": true, 00:15:32.346 "zcopy": true, 00:15:32.346 "get_zone_info": false, 00:15:32.346 "zone_management": false, 00:15:32.346 "zone_append": false, 00:15:32.346 "compare": false, 00:15:32.346 "compare_and_write": false, 00:15:32.346 "abort": true, 00:15:32.346 "seek_hole": false, 00:15:32.346 "seek_data": false, 00:15:32.346 "copy": true, 00:15:32.346 "nvme_iov_md": false 00:15:32.346 }, 00:15:32.346 "memory_domains": [ 00:15:32.346 { 00:15:32.346 "dma_device_id": "system", 00:15:32.346 "dma_device_type": 1 00:15:32.346 }, 00:15:32.346 { 00:15:32.346 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.346 "dma_device_type": 2 00:15:32.346 } 00:15:32.346 ], 00:15:32.346 "driver_specific": {} 00:15:32.346 }' 00:15:32.346 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.346 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.346 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:32.346 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.606 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.606 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:32.606 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.606 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.606 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.606 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.606 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.606 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.606 15:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:32.606 [2024-07-23 15:09:27.995296] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.606 [2024-07-23 15:09:27.995343] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:32.606 [2024-07-23 15:09:27.995405] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.606 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.173 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.173 "name": "Existed_Raid", 00:15:33.173 "uuid": "709bd6be-3c22-4e42-b820-3e9f749dfc4c", 00:15:33.173 "strip_size_kb": 64, 00:15:33.173 "state": "offline", 00:15:33.173 "raid_level": "concat", 00:15:33.173 "superblock": true, 00:15:33.173 "num_base_bdevs": 2, 00:15:33.173 "num_base_bdevs_discovered": 1, 00:15:33.173 "num_base_bdevs_operational": 1, 00:15:33.173 "base_bdevs_list": [ 00:15:33.173 { 00:15:33.173 "name": null, 00:15:33.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.173 "is_configured": false, 00:15:33.173 "data_offset": 2048, 00:15:33.173 "data_size": 63488 00:15:33.173 }, 00:15:33.173 { 00:15:33.173 "name": "BaseBdev2", 00:15:33.173 "uuid": "fa6e493e-93de-42a4-8d63-3629cb24cb36", 00:15:33.173 "is_configured": true, 00:15:33.173 "data_offset": 2048, 00:15:33.173 "data_size": 63488 00:15:33.173 } 00:15:33.173 ] 00:15:33.173 }' 00:15:33.173 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.173 15:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.431 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:33.431 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:33.431 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.431 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:33.688 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:33.688 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:33.688 15:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:33.688 [2024-07-23 15:09:29.052155] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:33.688 [2024-07-23 15:09:29.052226] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:15:33.688 15:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:33.688 15:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:33.688 15:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.688 15:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:33.947 15:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:33.947 15:09:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:33.947 15:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:33.947 15:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 88930 00:15:33.947 15:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 88930 ']' 00:15:33.947 15:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 88930 00:15:33.947 15:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:33.947 15:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:33.947 15:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88930 00:15:33.947 killing process with pid 88930 00:15:33.947 15:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:33.947 15:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:33.947 15:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88930' 00:15:33.947 15:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 88930 00:15:33.947 [2024-07-23 15:09:29.295373] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:33.947 15:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 88930 00:15:33.947 [2024-07-23 15:09:29.295457] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:34.206 15:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:34.206 00:15:34.206 real 0m7.756s 00:15:34.206 user 0m12.937s 00:15:34.206 sys 0m1.721s 00:15:34.206 ************************************ 00:15:34.206 END TEST raid_state_function_test_sb 00:15:34.206 ************************************ 00:15:34.206 15:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:34.206 15:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.206 15:09:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:34.206 15:09:29 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:34.206 15:09:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:34.206 15:09:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.206 15:09:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:34.206 ************************************ 00:15:34.206 START TEST raid_superblock_test 00:15:34.206 ************************************ 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=89253 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 89253 /var/tmp/spdk-raid.sock 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 89253 ']' 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:34.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:34.206 15:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.465 [2024-07-23 15:09:29.671700] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:15:34.465 [2024-07-23 15:09:29.671930] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89253 ] 00:15:34.465 [2024-07-23 15:09:29.823768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.465 [2024-07-23 15:09:29.871374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.724 [2024-07-23 15:09:29.915873] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.292 15:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:35.292 15:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:35.292 15:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:35.292 15:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:35.292 15:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:35.292 15:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:35.292 15:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:35.292 15:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:35.292 15:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:35.292 15:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:35.292 15:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:35.550 malloc1 00:15:35.550 15:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:35.809 [2024-07-23 15:09:31.003425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.809 [2024-07-23 15:09:31.003519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.809 [2024-07-23 15:09:31.003554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:15:35.809 [2024-07-23 15:09:31.003572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.809 [2024-07-23 15:09:31.006423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.809 [2024-07-23 15:09:31.006470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.809 pt1 00:15:35.809 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:35.809 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:35.809 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:35.809 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:35.809 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:35.809 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:15:35.809 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:35.809 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:35.809 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:35.809 malloc2 00:15:35.809 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.067 [2024-07-23 15:09:31.432912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.067 [2024-07-23 15:09:31.432998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.067 [2024-07-23 15:09:31.433023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:15:36.067 [2024-07-23 15:09:31.433041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.067 [2024-07-23 15:09:31.435529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.067 [2024-07-23 15:09:31.435573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.067 pt2 00:15:36.067 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:36.067 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:36.067 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:36.326 [2024-07-23 15:09:31.613034] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:36.326 [2024-07-23 15:09:31.615370] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.326 [2024-07-23 15:09:31.615689] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006c80 00:15:36.326 [2024-07-23 15:09:31.615825] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:36.326 [2024-07-23 15:09:31.615985] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:15:36.326 [2024-07-23 15:09:31.616398] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006c80 00:15:36.326 [2024-07-23 15:09:31.616452] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000006c80 00:15:36.326 [2024-07-23 15:09:31.616692] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.326 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:36.326 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:36.326 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:36.326 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:36.326 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:36.326 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:15:36.326 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.326 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.326 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.326 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.326 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.326 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.585 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.585 "name": "raid_bdev1", 00:15:36.585 "uuid": "d2f429af-aea4-4fd3-8cd2-74211c2a33c4", 00:15:36.585 "strip_size_kb": 64, 00:15:36.585 "state": "online", 00:15:36.585 "raid_level": "concat", 00:15:36.585 "superblock": true, 00:15:36.585 "num_base_bdevs": 2, 00:15:36.585 "num_base_bdevs_discovered": 2, 00:15:36.585 "num_base_bdevs_operational": 2, 00:15:36.585 "base_bdevs_list": [ 00:15:36.585 { 00:15:36.585 "name": "pt1", 00:15:36.585 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.585 "is_configured": true, 00:15:36.585 "data_offset": 2048, 00:15:36.585 "data_size": 63488 00:15:36.585 }, 00:15:36.585 { 00:15:36.585 "name": "pt2", 00:15:36.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.585 "is_configured": true, 00:15:36.585 "data_offset": 2048, 00:15:36.585 "data_size": 63488 00:15:36.585 } 00:15:36.585 ] 00:15:36.585 }' 00:15:36.585 15:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.585 15:09:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.844 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:36.844 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:36.844 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:36.844 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:36.844 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:36.844 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:36.844 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:36.844 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:37.102 [2024-07-23 15:09:32.389560] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.102 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:37.102 "name": "raid_bdev1", 00:15:37.102 "aliases": [ 00:15:37.103 "d2f429af-aea4-4fd3-8cd2-74211c2a33c4" 00:15:37.103 ], 00:15:37.103 "product_name": "Raid Volume", 00:15:37.103 "block_size": 512, 00:15:37.103 "num_blocks": 126976, 00:15:37.103 "uuid": "d2f429af-aea4-4fd3-8cd2-74211c2a33c4", 00:15:37.103 "assigned_rate_limits": { 00:15:37.103 "rw_ios_per_sec": 0, 00:15:37.103 "rw_mbytes_per_sec": 0, 00:15:37.103 "r_mbytes_per_sec": 0, 00:15:37.103 "w_mbytes_per_sec": 0 00:15:37.103 }, 
00:15:37.103 "claimed": false, 00:15:37.103 "zoned": false, 00:15:37.103 "supported_io_types": { 00:15:37.103 "read": true, 00:15:37.103 "write": true, 00:15:37.103 "unmap": true, 00:15:37.103 "flush": true, 00:15:37.103 "reset": true, 00:15:37.103 "nvme_admin": false, 00:15:37.103 "nvme_io": false, 00:15:37.103 "nvme_io_md": false, 00:15:37.103 "write_zeroes": true, 00:15:37.103 "zcopy": false, 00:15:37.103 "get_zone_info": false, 00:15:37.103 "zone_management": false, 00:15:37.103 "zone_append": false, 00:15:37.103 "compare": false, 00:15:37.103 "compare_and_write": false, 00:15:37.103 "abort": false, 00:15:37.103 "seek_hole": false, 00:15:37.103 "seek_data": false, 00:15:37.103 "copy": false, 00:15:37.103 "nvme_iov_md": false 00:15:37.103 }, 00:15:37.103 "memory_domains": [ 00:15:37.103 { 00:15:37.103 "dma_device_id": "system", 00:15:37.103 "dma_device_type": 1 00:15:37.103 }, 00:15:37.103 { 00:15:37.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.103 "dma_device_type": 2 00:15:37.103 }, 00:15:37.103 { 00:15:37.103 "dma_device_id": "system", 00:15:37.103 "dma_device_type": 1 00:15:37.103 }, 00:15:37.103 { 00:15:37.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.103 "dma_device_type": 2 00:15:37.103 } 00:15:37.103 ], 00:15:37.103 "driver_specific": { 00:15:37.103 "raid": { 00:15:37.103 "uuid": "d2f429af-aea4-4fd3-8cd2-74211c2a33c4", 00:15:37.103 "strip_size_kb": 64, 00:15:37.103 "state": "online", 00:15:37.103 "raid_level": "concat", 00:15:37.103 "superblock": true, 00:15:37.103 "num_base_bdevs": 2, 00:15:37.103 "num_base_bdevs_discovered": 2, 00:15:37.103 "num_base_bdevs_operational": 2, 00:15:37.103 "base_bdevs_list": [ 00:15:37.103 { 00:15:37.103 "name": "pt1", 00:15:37.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.103 "is_configured": true, 00:15:37.103 "data_offset": 2048, 00:15:37.103 "data_size": 63488 00:15:37.103 }, 00:15:37.103 { 00:15:37.103 "name": "pt2", 00:15:37.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.103 "is_configured": true, 00:15:37.103 "data_offset": 2048, 00:15:37.103 "data_size": 63488 00:15:37.103 } 00:15:37.103 ] 00:15:37.103 } 00:15:37.103 } 00:15:37.103 }' 00:15:37.103 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.103 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:37.103 pt2' 00:15:37.103 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:37.103 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:37.103 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:37.362 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:37.362 "name": "pt1", 00:15:37.362 "aliases": [ 00:15:37.362 "00000000-0000-0000-0000-000000000001" 00:15:37.362 ], 00:15:37.362 "product_name": "passthru", 00:15:37.362 "block_size": 512, 00:15:37.362 "num_blocks": 65536, 00:15:37.362 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.362 "assigned_rate_limits": { 00:15:37.362 "rw_ios_per_sec": 0, 00:15:37.362 "rw_mbytes_per_sec": 0, 00:15:37.362 "r_mbytes_per_sec": 0, 00:15:37.362 "w_mbytes_per_sec": 0 00:15:37.362 }, 00:15:37.362 "claimed": true, 00:15:37.362 "claim_type": "exclusive_write", 00:15:37.362 "zoned": false, 00:15:37.362 
"supported_io_types": { 00:15:37.362 "read": true, 00:15:37.362 "write": true, 00:15:37.362 "unmap": true, 00:15:37.362 "flush": true, 00:15:37.362 "reset": true, 00:15:37.362 "nvme_admin": false, 00:15:37.362 "nvme_io": false, 00:15:37.362 "nvme_io_md": false, 00:15:37.362 "write_zeroes": true, 00:15:37.362 "zcopy": true, 00:15:37.362 "get_zone_info": false, 00:15:37.362 "zone_management": false, 00:15:37.362 "zone_append": false, 00:15:37.362 "compare": false, 00:15:37.362 "compare_and_write": false, 00:15:37.362 "abort": true, 00:15:37.362 "seek_hole": false, 00:15:37.362 "seek_data": false, 00:15:37.362 "copy": true, 00:15:37.362 "nvme_iov_md": false 00:15:37.362 }, 00:15:37.362 "memory_domains": [ 00:15:37.362 { 00:15:37.362 "dma_device_id": "system", 00:15:37.362 "dma_device_type": 1 00:15:37.362 }, 00:15:37.362 { 00:15:37.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.362 "dma_device_type": 2 00:15:37.362 } 00:15:37.362 ], 00:15:37.362 "driver_specific": { 00:15:37.362 "passthru": { 00:15:37.362 "name": "pt1", 00:15:37.362 "base_bdev_name": "malloc1" 00:15:37.362 } 00:15:37.362 } 00:15:37.362 }' 00:15:37.362 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.362 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.362 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:37.362 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.362 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.362 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:37.362 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.362 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.362 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:37.362 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.362 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.621 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:37.621 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:37.621 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:37.621 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:37.621 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:37.621 "name": "pt2", 00:15:37.621 "aliases": [ 00:15:37.621 "00000000-0000-0000-0000-000000000002" 00:15:37.621 ], 00:15:37.621 "product_name": "passthru", 00:15:37.621 "block_size": 512, 00:15:37.621 "num_blocks": 65536, 00:15:37.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.621 "assigned_rate_limits": { 00:15:37.621 "rw_ios_per_sec": 0, 00:15:37.621 "rw_mbytes_per_sec": 0, 00:15:37.621 "r_mbytes_per_sec": 0, 00:15:37.621 "w_mbytes_per_sec": 0 00:15:37.621 }, 00:15:37.621 "claimed": true, 00:15:37.621 "claim_type": "exclusive_write", 00:15:37.621 "zoned": false, 00:15:37.621 "supported_io_types": { 00:15:37.621 "read": true, 00:15:37.621 "write": true, 00:15:37.621 "unmap": true, 00:15:37.621 "flush": true, 00:15:37.621 
"reset": true, 00:15:37.621 "nvme_admin": false, 00:15:37.621 "nvme_io": false, 00:15:37.621 "nvme_io_md": false, 00:15:37.621 "write_zeroes": true, 00:15:37.621 "zcopy": true, 00:15:37.621 "get_zone_info": false, 00:15:37.621 "zone_management": false, 00:15:37.621 "zone_append": false, 00:15:37.621 "compare": false, 00:15:37.621 "compare_and_write": false, 00:15:37.621 "abort": true, 00:15:37.621 "seek_hole": false, 00:15:37.621 "seek_data": false, 00:15:37.621 "copy": true, 00:15:37.621 "nvme_iov_md": false 00:15:37.621 }, 00:15:37.621 "memory_domains": [ 00:15:37.621 { 00:15:37.621 "dma_device_id": "system", 00:15:37.621 "dma_device_type": 1 00:15:37.621 }, 00:15:37.621 { 00:15:37.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.621 "dma_device_type": 2 00:15:37.621 } 00:15:37.621 ], 00:15:37.621 "driver_specific": { 00:15:37.621 "passthru": { 00:15:37.621 "name": "pt2", 00:15:37.621 "base_bdev_name": "malloc2" 00:15:37.621 } 00:15:37.621 } 00:15:37.621 }' 00:15:37.621 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.621 15:09:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.621 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:37.621 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.621 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.621 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:37.621 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.621 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.880 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:37.880 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.880 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.880 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:37.880 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:37.880 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:38.143 [2024-07-23 15:09:33.325694] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.143 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d2f429af-aea4-4fd3-8cd2-74211c2a33c4 00:15:38.143 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z d2f429af-aea4-4fd3-8cd2-74211c2a33c4 ']' 00:15:38.143 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:38.143 [2024-07-23 15:09:33.509468] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:38.143 [2024-07-23 15:09:33.509513] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.143 [2024-07-23 15:09:33.509624] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.143 [2024-07-23 15:09:33.509685] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:15:38.143 [2024-07-23 15:09:33.509703] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006c80 name raid_bdev1, state offline 00:15:38.143 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.143 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:38.403 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:38.403 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:38.403 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:38.403 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:38.661 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:38.661 15:09:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:38.919 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:38.919 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:39.181 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:39.181 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:39.181 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:39.181 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:39.181 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:39.181 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:39.181 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:39.181 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:39.181 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:39.181 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:39.181 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:39.181 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:39.181 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:39.181 [2024-07-23 15:09:34.553734] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:39.181 [2024-07-23 15:09:34.555888] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:39.181 [2024-07-23 15:09:34.555962] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:39.181 [2024-07-23 15:09:34.556029] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:39.181 [2024-07-23 15:09:34.556052] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.181 [2024-07-23 15:09:34.556062] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name raid_bdev1, state configuring 00:15:39.181 request: 00:15:39.181 { 00:15:39.181 "name": "raid_bdev1", 00:15:39.181 "raid_level": "concat", 00:15:39.181 "base_bdevs": [ 00:15:39.181 "malloc1", 00:15:39.181 "malloc2" 00:15:39.181 ], 00:15:39.182 "strip_size_kb": 64, 00:15:39.182 "superblock": false, 00:15:39.182 "method": "bdev_raid_create", 00:15:39.182 "req_id": 1 00:15:39.182 } 00:15:39.182 Got JSON-RPC error response 00:15:39.182 response: 00:15:39.182 { 00:15:39.182 "code": -17, 00:15:39.182 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:39.182 } 00:15:39.182 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:39.182 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:39.182 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:39.182 15:09:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:39.182 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.182 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:39.441 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:39.441 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:39.441 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:39.700 [2024-07-23 15:09:34.965765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:39.700 [2024-07-23 15:09:34.966028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.700 [2024-07-23 15:09:34.966094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:15:39.700 [2024-07-23 15:09:34.966202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.700 [2024-07-23 15:09:34.968876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.700 [2024-07-23 15:09:34.969019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:39.700 [2024-07-23 15:09:34.969243] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:39.700 [2024-07-23 15:09:34.969405] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:39.700 pt1 00:15:39.700 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # 
verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:39.700 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:39.700 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:39.700 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:39.700 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:39.700 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:39.700 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:39.700 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:39.700 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:39.700 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:39.700 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.700 15:09:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.959 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.959 "name": "raid_bdev1", 00:15:39.959 "uuid": "d2f429af-aea4-4fd3-8cd2-74211c2a33c4", 00:15:39.959 "strip_size_kb": 64, 00:15:39.959 "state": "configuring", 00:15:39.959 "raid_level": "concat", 00:15:39.959 "superblock": true, 00:15:39.959 "num_base_bdevs": 2, 00:15:39.959 "num_base_bdevs_discovered": 1, 00:15:39.959 "num_base_bdevs_operational": 2, 00:15:39.959 "base_bdevs_list": [ 00:15:39.959 { 00:15:39.959 "name": "pt1", 00:15:39.959 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:39.959 "is_configured": true, 00:15:39.959 "data_offset": 2048, 00:15:39.959 "data_size": 63488 00:15:39.959 }, 00:15:39.959 { 00:15:39.959 "name": null, 00:15:39.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.959 "is_configured": false, 00:15:39.959 "data_offset": 2048, 00:15:39.959 "data_size": 63488 00:15:39.959 } 00:15:39.959 ] 00:15:39.959 }' 00:15:39.959 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.959 15:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.218 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:15:40.218 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:40.218 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:40.218 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:40.477 [2024-07-23 15:09:35.729934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:40.477 [2024-07-23 15:09:35.730005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.477 [2024-07-23 15:09:35.730033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:15:40.477 [2024-07-23 15:09:35.730046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.477 [2024-07-23 
15:09:35.730460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.477 [2024-07-23 15:09:35.730480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:40.477 [2024-07-23 15:09:35.730552] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:40.477 [2024-07-23 15:09:35.730574] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:40.477 [2024-07-23 15:09:35.730690] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:15:40.477 [2024-07-23 15:09:35.730700] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:40.477 [2024-07-23 15:09:35.730778] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:15:40.477 [2024-07-23 15:09:35.731109] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:15:40.477 [2024-07-23 15:09:35.731126] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:15:40.477 [2024-07-23 15:09:35.731225] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.477 pt2 00:15:40.477 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:40.477 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:40.477 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:40.477 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:40.477 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:40.477 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:40.477 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:40.477 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:40.477 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:40.477 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:40.477 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:40.477 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:40.477 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.477 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.736 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:40.736 "name": "raid_bdev1", 00:15:40.736 "uuid": "d2f429af-aea4-4fd3-8cd2-74211c2a33c4", 00:15:40.736 "strip_size_kb": 64, 00:15:40.736 "state": "online", 00:15:40.736 "raid_level": "concat", 00:15:40.736 "superblock": true, 00:15:40.736 "num_base_bdevs": 2, 00:15:40.736 "num_base_bdevs_discovered": 2, 00:15:40.736 "num_base_bdevs_operational": 2, 00:15:40.736 "base_bdevs_list": [ 00:15:40.736 { 00:15:40.736 "name": "pt1", 00:15:40.736 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:40.736 "is_configured": true, 00:15:40.736 "data_offset": 2048, 00:15:40.736 
"data_size": 63488 00:15:40.736 }, 00:15:40.736 { 00:15:40.736 "name": "pt2", 00:15:40.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.736 "is_configured": true, 00:15:40.736 "data_offset": 2048, 00:15:40.736 "data_size": 63488 00:15:40.736 } 00:15:40.736 ] 00:15:40.736 }' 00:15:40.736 15:09:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:40.736 15:09:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.995 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:40.995 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:40.995 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:40.995 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:40.995 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:40.995 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:40.995 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:40.995 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:41.254 [2024-07-23 15:09:36.426340] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.254 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:41.254 "name": "raid_bdev1", 00:15:41.254 "aliases": [ 00:15:41.254 "d2f429af-aea4-4fd3-8cd2-74211c2a33c4" 00:15:41.254 ], 00:15:41.254 "product_name": "Raid Volume", 00:15:41.254 "block_size": 512, 00:15:41.254 "num_blocks": 126976, 00:15:41.254 "uuid": "d2f429af-aea4-4fd3-8cd2-74211c2a33c4", 00:15:41.254 "assigned_rate_limits": { 00:15:41.254 "rw_ios_per_sec": 0, 00:15:41.254 "rw_mbytes_per_sec": 0, 00:15:41.254 "r_mbytes_per_sec": 0, 00:15:41.254 "w_mbytes_per_sec": 0 00:15:41.254 }, 00:15:41.254 "claimed": false, 00:15:41.254 "zoned": false, 00:15:41.254 "supported_io_types": { 00:15:41.254 "read": true, 00:15:41.254 "write": true, 00:15:41.254 "unmap": true, 00:15:41.254 "flush": true, 00:15:41.254 "reset": true, 00:15:41.254 "nvme_admin": false, 00:15:41.254 "nvme_io": false, 00:15:41.254 "nvme_io_md": false, 00:15:41.254 "write_zeroes": true, 00:15:41.254 "zcopy": false, 00:15:41.254 "get_zone_info": false, 00:15:41.254 "zone_management": false, 00:15:41.254 "zone_append": false, 00:15:41.254 "compare": false, 00:15:41.254 "compare_and_write": false, 00:15:41.254 "abort": false, 00:15:41.254 "seek_hole": false, 00:15:41.254 "seek_data": false, 00:15:41.254 "copy": false, 00:15:41.254 "nvme_iov_md": false 00:15:41.254 }, 00:15:41.254 "memory_domains": [ 00:15:41.254 { 00:15:41.254 "dma_device_id": "system", 00:15:41.254 "dma_device_type": 1 00:15:41.254 }, 00:15:41.254 { 00:15:41.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.254 "dma_device_type": 2 00:15:41.254 }, 00:15:41.254 { 00:15:41.254 "dma_device_id": "system", 00:15:41.254 "dma_device_type": 1 00:15:41.254 }, 00:15:41.254 { 00:15:41.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.254 "dma_device_type": 2 00:15:41.254 } 00:15:41.254 ], 00:15:41.254 "driver_specific": { 00:15:41.254 "raid": { 00:15:41.254 "uuid": "d2f429af-aea4-4fd3-8cd2-74211c2a33c4", 00:15:41.254 "strip_size_kb": 64, 00:15:41.254 "state": 
"online", 00:15:41.254 "raid_level": "concat", 00:15:41.254 "superblock": true, 00:15:41.254 "num_base_bdevs": 2, 00:15:41.254 "num_base_bdevs_discovered": 2, 00:15:41.254 "num_base_bdevs_operational": 2, 00:15:41.254 "base_bdevs_list": [ 00:15:41.254 { 00:15:41.254 "name": "pt1", 00:15:41.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:41.254 "is_configured": true, 00:15:41.254 "data_offset": 2048, 00:15:41.254 "data_size": 63488 00:15:41.254 }, 00:15:41.254 { 00:15:41.254 "name": "pt2", 00:15:41.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.254 "is_configured": true, 00:15:41.254 "data_offset": 2048, 00:15:41.254 "data_size": 63488 00:15:41.254 } 00:15:41.254 ] 00:15:41.254 } 00:15:41.254 } 00:15:41.254 }' 00:15:41.254 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:41.254 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:41.254 pt2' 00:15:41.254 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:41.254 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:41.254 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:41.254 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:41.254 "name": "pt1", 00:15:41.254 "aliases": [ 00:15:41.255 "00000000-0000-0000-0000-000000000001" 00:15:41.255 ], 00:15:41.255 "product_name": "passthru", 00:15:41.255 "block_size": 512, 00:15:41.255 "num_blocks": 65536, 00:15:41.255 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:41.255 "assigned_rate_limits": { 00:15:41.255 "rw_ios_per_sec": 0, 00:15:41.255 "rw_mbytes_per_sec": 0, 00:15:41.255 "r_mbytes_per_sec": 0, 00:15:41.255 "w_mbytes_per_sec": 0 00:15:41.255 }, 00:15:41.255 "claimed": true, 00:15:41.255 "claim_type": "exclusive_write", 00:15:41.255 "zoned": false, 00:15:41.255 "supported_io_types": { 00:15:41.255 "read": true, 00:15:41.255 "write": true, 00:15:41.255 "unmap": true, 00:15:41.255 "flush": true, 00:15:41.255 "reset": true, 00:15:41.255 "nvme_admin": false, 00:15:41.255 "nvme_io": false, 00:15:41.255 "nvme_io_md": false, 00:15:41.255 "write_zeroes": true, 00:15:41.255 "zcopy": true, 00:15:41.255 "get_zone_info": false, 00:15:41.255 "zone_management": false, 00:15:41.255 "zone_append": false, 00:15:41.255 "compare": false, 00:15:41.255 "compare_and_write": false, 00:15:41.255 "abort": true, 00:15:41.255 "seek_hole": false, 00:15:41.255 "seek_data": false, 00:15:41.255 "copy": true, 00:15:41.255 "nvme_iov_md": false 00:15:41.255 }, 00:15:41.255 "memory_domains": [ 00:15:41.255 { 00:15:41.255 "dma_device_id": "system", 00:15:41.255 "dma_device_type": 1 00:15:41.255 }, 00:15:41.255 { 00:15:41.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.255 "dma_device_type": 2 00:15:41.255 } 00:15:41.255 ], 00:15:41.255 "driver_specific": { 00:15:41.255 "passthru": { 00:15:41.255 "name": "pt1", 00:15:41.255 "base_bdev_name": "malloc1" 00:15:41.255 } 00:15:41.255 } 00:15:41.255 }' 00:15:41.255 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.255 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.255 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:15:41.255 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.255 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.255 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.255 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.514 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.514 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.514 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.514 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.514 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.514 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:41.514 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:41.514 15:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:41.773 "name": "pt2", 00:15:41.773 "aliases": [ 00:15:41.773 "00000000-0000-0000-0000-000000000002" 00:15:41.773 ], 00:15:41.773 "product_name": "passthru", 00:15:41.773 "block_size": 512, 00:15:41.773 "num_blocks": 65536, 00:15:41.773 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.773 "assigned_rate_limits": { 00:15:41.773 "rw_ios_per_sec": 0, 00:15:41.773 "rw_mbytes_per_sec": 0, 00:15:41.773 "r_mbytes_per_sec": 0, 00:15:41.773 "w_mbytes_per_sec": 0 00:15:41.773 }, 00:15:41.773 "claimed": true, 00:15:41.773 "claim_type": "exclusive_write", 00:15:41.773 "zoned": false, 00:15:41.773 "supported_io_types": { 00:15:41.773 "read": true, 00:15:41.773 "write": true, 00:15:41.773 "unmap": true, 00:15:41.773 "flush": true, 00:15:41.773 "reset": true, 00:15:41.773 "nvme_admin": false, 00:15:41.773 "nvme_io": false, 00:15:41.773 "nvme_io_md": false, 00:15:41.773 "write_zeroes": true, 00:15:41.773 "zcopy": true, 00:15:41.773 "get_zone_info": false, 00:15:41.773 "zone_management": false, 00:15:41.773 "zone_append": false, 00:15:41.773 "compare": false, 00:15:41.773 "compare_and_write": false, 00:15:41.773 "abort": true, 00:15:41.773 "seek_hole": false, 00:15:41.773 "seek_data": false, 00:15:41.773 "copy": true, 00:15:41.773 "nvme_iov_md": false 00:15:41.773 }, 00:15:41.773 "memory_domains": [ 00:15:41.773 { 00:15:41.773 "dma_device_id": "system", 00:15:41.773 "dma_device_type": 1 00:15:41.773 }, 00:15:41.773 { 00:15:41.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.773 "dma_device_type": 2 00:15:41.773 } 00:15:41.773 ], 00:15:41.773 "driver_specific": { 00:15:41.773 "passthru": { 00:15:41.773 "name": "pt2", 00:15:41.773 "base_bdev_name": "malloc2" 00:15:41.773 } 00:15:41.773 } 00:15:41.773 }' 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:41.773 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:42.033 [2024-07-23 15:09:37.334538] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' d2f429af-aea4-4fd3-8cd2-74211c2a33c4 '!=' d2f429af-aea4-4fd3-8cd2-74211c2a33c4 ']' 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 89253 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 89253 ']' 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 89253 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89253 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:42.033 killing process with pid 89253 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89253' 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 89253 00:15:42.033 [2024-07-23 15:09:37.389534] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.033 15:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 89253 00:15:42.033 [2024-07-23 15:09:37.389630] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.033 [2024-07-23 15:09:37.389695] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.033 [2024-07-23 15:09:37.389706] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:15:42.033 [2024-07-23 15:09:37.413650] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.291 15:09:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@564 -- # return 0 00:15:42.291 00:15:42.291 real 0m8.049s 00:15:42.291 user 0m13.528s 00:15:42.291 sys 0m1.729s 00:15:42.291 15:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:42.291 15:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.291 ************************************ 00:15:42.291 END TEST raid_superblock_test 00:15:42.291 ************************************ 00:15:42.291 15:09:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:42.291 15:09:37 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:15:42.291 15:09:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:42.291 15:09:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.291 15:09:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:42.291 ************************************ 00:15:42.291 START TEST raid_read_error_test 00:15:42.291 ************************************ 00:15:42.291 15:09:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:15:42.291 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:42.291 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:42.291 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:42.291 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:42.291 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:42.291 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:42.291 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 
-- # bdevperf_log=/raidtest/tmp.60ONhQbw5j 00:15:42.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=89570 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 89570 /var/tmp/spdk-raid.sock 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 89570 ']' 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.550 15:09:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.550 [2024-07-23 15:09:37.800831] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:15:42.551 [2024-07-23 15:09:37.801049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89570 ] 00:15:42.551 [2024-07-23 15:09:37.954304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.809 [2024-07-23 15:09:38.000545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.809 [2024-07-23 15:09:38.045626] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.376 15:09:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.376 15:09:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:43.376 15:09:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:43.376 15:09:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:43.634 BaseBdev1_malloc 00:15:43.634 15:09:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:43.893 true 00:15:43.893 15:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:44.151 [2024-07-23 15:09:39.349078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:44.151 [2024-07-23 15:09:39.349168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.151 [2024-07-23 15:09:39.349202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:15:44.151 [2024-07-23 15:09:39.349215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:15:44.151 [2024-07-23 15:09:39.351857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.151 [2024-07-23 15:09:39.351898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:44.151 BaseBdev1 00:15:44.151 15:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:44.151 15:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:44.151 BaseBdev2_malloc 00:15:44.151 15:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:44.409 true 00:15:44.409 15:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:44.667 [2024-07-23 15:09:39.890647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:44.667 [2024-07-23 15:09:39.890901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.667 [2024-07-23 15:09:39.890972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:15:44.667 [2024-07-23 15:09:39.891067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.667 [2024-07-23 15:09:39.893763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.667 [2024-07-23 15:09:39.893916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:44.667 BaseBdev2 00:15:44.667 15:09:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:44.667 [2024-07-23 15:09:40.070917] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.667 [2024-07-23 15:09:40.074653] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.667 [2024-07-23 15:09:40.075227] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:15:44.667 [2024-07-23 15:09:40.075352] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:44.667 [2024-07-23 15:09:40.075528] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:15:44.668 [2024-07-23 15:09:40.076020] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:15:44.668 [2024-07-23 15:09:40.076043] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007280 00:15:44.668 [2024-07-23 15:09:40.076194] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.668 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:44.668 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:44.668 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:44.668 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:44.668 
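For readability, the bdev stack assembled in the trace above (malloc -> error -> passthru -> RAID) boils down to the RPC sequence below. This is a condensed sketch using the same commands the test script issues, assuming the same RPC socket; the loop and the rpc shorthand variable are illustrative and not part of the original script, and EE_<name> is the vbdev name the error layer derives from its base bdev.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for b in BaseBdev1 BaseBdev2; do
  $rpc bdev_malloc_create 32 512 -b ${b}_malloc         # 32 MB backing bdev, 512-byte blocks
  $rpc bdev_error_create ${b}_malloc                    # exposes EE_${b}_malloc for fault injection
  $rpc bdev_passthru_create -b EE_${b}_malloc -p ${b}   # passthru bdev consumed by the RAID volume
done
$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s   # 64k strip size, with superblock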
15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:44.668 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:44.668 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.668 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.668 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.668 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.668 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.668 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.926 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:44.926 "name": "raid_bdev1", 00:15:44.926 "uuid": "1dd8ec31-06c1-4756-95ae-62a875e26dbe", 00:15:44.926 "strip_size_kb": 64, 00:15:44.926 "state": "online", 00:15:44.926 "raid_level": "concat", 00:15:44.926 "superblock": true, 00:15:44.926 "num_base_bdevs": 2, 00:15:44.926 "num_base_bdevs_discovered": 2, 00:15:44.926 "num_base_bdevs_operational": 2, 00:15:44.926 "base_bdevs_list": [ 00:15:44.926 { 00:15:44.926 "name": "BaseBdev1", 00:15:44.926 "uuid": "2907a10d-cc7b-57bc-ac5d-21b09a1e0d8b", 00:15:44.926 "is_configured": true, 00:15:44.926 "data_offset": 2048, 00:15:44.926 "data_size": 63488 00:15:44.926 }, 00:15:44.926 { 00:15:44.926 "name": "BaseBdev2", 00:15:44.926 "uuid": "445e724f-31bd-563a-af10-35399f99de8e", 00:15:44.926 "is_configured": true, 00:15:44.926 "data_offset": 2048, 00:15:44.926 "data_size": 63488 00:15:44.926 } 00:15:44.926 ] 00:15:44.926 }' 00:15:44.926 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:44.926 15:09:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.185 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:45.185 15:09:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:45.444 [2024-07-23 15:09:40.667744] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:15:46.381 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:46.639 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:46.639 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:46.639 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:46.639 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:46.639 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:46.639 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:46.639 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:46.639 15:09:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:46.639 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:46.639 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.639 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.639 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.639 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.639 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.639 15:09:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.911 15:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.911 "name": "raid_bdev1", 00:15:46.911 "uuid": "1dd8ec31-06c1-4756-95ae-62a875e26dbe", 00:15:46.911 "strip_size_kb": 64, 00:15:46.911 "state": "online", 00:15:46.911 "raid_level": "concat", 00:15:46.911 "superblock": true, 00:15:46.911 "num_base_bdevs": 2, 00:15:46.911 "num_base_bdevs_discovered": 2, 00:15:46.911 "num_base_bdevs_operational": 2, 00:15:46.911 "base_bdevs_list": [ 00:15:46.911 { 00:15:46.911 "name": "BaseBdev1", 00:15:46.911 "uuid": "2907a10d-cc7b-57bc-ac5d-21b09a1e0d8b", 00:15:46.911 "is_configured": true, 00:15:46.911 "data_offset": 2048, 00:15:46.911 "data_size": 63488 00:15:46.911 }, 00:15:46.911 { 00:15:46.911 "name": "BaseBdev2", 00:15:46.911 "uuid": "445e724f-31bd-563a-af10-35399f99de8e", 00:15:46.911 "is_configured": true, 00:15:46.911 "data_offset": 2048, 00:15:46.911 "data_size": 63488 00:15:46.911 } 00:15:46.911 ] 00:15:46.911 }' 00:15:46.911 15:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.911 15:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.180 15:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:47.438 [2024-07-23 15:09:42.658108] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.438 [2024-07-23 15:09:42.658361] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.438 [2024-07-23 15:09:42.660800] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.438 [2024-07-23 15:09:42.660842] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.438 [2024-07-23 15:09:42.660879] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.438 [2024-07-23 15:09:42.660890] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name raid_bdev1, state offline 00:15:47.438 0 00:15:47.438 15:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 89570 00:15:47.438 15:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 89570 ']' 00:15:47.438 15:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 89570 00:15:47.438 15:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:47.438 15:09:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:47.438 15:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89570 00:15:47.438 killing process with pid 89570 00:15:47.438 15:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:47.438 15:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:47.438 15:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89570' 00:15:47.438 15:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 89570 00:15:47.438 [2024-07-23 15:09:42.711487] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.438 15:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 89570 00:15:47.438 [2024-07-23 15:09:42.726550] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.698 15:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.60ONhQbw5j 00:15:47.698 15:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:47.698 15:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:47.698 15:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:15:47.698 15:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:47.698 15:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:47.698 15:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:47.698 ************************************ 00:15:47.698 END TEST raid_read_error_test 00:15:47.698 ************************************ 00:15:47.698 15:09:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:15:47.698 00:15:47.698 real 0m5.264s 00:15:47.698 user 0m7.763s 00:15:47.698 sys 0m0.956s 00:15:47.698 15:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:47.698 15:09:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.698 15:09:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:47.698 15:09:43 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:15:47.698 15:09:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:47.698 15:09:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:47.698 15:09:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:47.698 ************************************ 00:15:47.698 START TEST raid_write_error_test 00:15:47.698 ************************************ 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:47.698 15:09:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.0hb7vy9v5g 00:15:47.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=89730 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 89730 /var/tmp/spdk-raid.sock 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 89730 ']' 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.698 15:09:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.957 [2024-07-23 15:09:43.132396] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
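The write-error run drives I/O through bdevperf rather than through the RPC client directly. The invocation recorded in the trace is, in essence, the one below; the command and paths are copied from the log, while the flag annotations are editorial glosses of the usual bdevperf options rather than output from this run.
# 60 s of 50/50 random read/write at 128 KiB, queue depth 1, against raid_bdev1 only;
# -z keeps bdevperf idle until perform_tests is invoked over RPC, -L enables bdev_raid debug logging.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
    -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
# ...then, once a write failure has been injected on EE_BaseBdev1_malloc:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests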
00:15:47.957 [2024-07-23 15:09:43.132877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89730 ] 00:15:47.957 [2024-07-23 15:09:43.286376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.957 [2024-07-23 15:09:43.331519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.957 [2024-07-23 15:09:43.376055] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.889 15:09:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.889 15:09:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:48.889 15:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:48.889 15:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:48.889 BaseBdev1_malloc 00:15:48.889 15:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:49.147 true 00:15:49.147 15:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:49.405 [2024-07-23 15:09:44.699497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:49.405 [2024-07-23 15:09:44.699583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.405 [2024-07-23 15:09:44.699630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:15:49.405 [2024-07-23 15:09:44.699644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.405 [2024-07-23 15:09:44.702391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.405 [2024-07-23 15:09:44.702539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:49.405 BaseBdev1 00:15:49.405 15:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:49.405 15:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:49.663 BaseBdev2_malloc 00:15:49.663 15:09:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:49.920 true 00:15:49.920 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:50.178 [2024-07-23 15:09:45.397022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:50.178 [2024-07-23 15:09:45.397107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.178 [2024-07-23 15:09:45.397140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:15:50.178 [2024-07-23 
15:09:45.397153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.178 [2024-07-23 15:09:45.399851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.178 [2024-07-23 15:09:45.399893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:50.178 BaseBdev2 00:15:50.178 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:50.178 [2024-07-23 15:09:45.573113] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.178 [2024-07-23 15:09:45.575377] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.178 [2024-07-23 15:09:45.575588] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:15:50.178 [2024-07-23 15:09:45.575608] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:50.178 [2024-07-23 15:09:45.575730] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:15:50.178 [2024-07-23 15:09:45.576227] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:15:50.178 [2024-07-23 15:09:45.576345] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007280 00:15:50.178 [2024-07-23 15:09:45.576614] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.178 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:50.178 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:50.178 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:50.178 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:50.178 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:50.178 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:50.178 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.178 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.178 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.178 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.178 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.178 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.437 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.437 "name": "raid_bdev1", 00:15:50.437 "uuid": "63f612d7-d2cd-4f89-8181-cf481f45a71b", 00:15:50.437 "strip_size_kb": 64, 00:15:50.437 "state": "online", 00:15:50.437 "raid_level": "concat", 00:15:50.437 "superblock": true, 00:15:50.437 "num_base_bdevs": 2, 00:15:50.437 "num_base_bdevs_discovered": 2, 00:15:50.437 "num_base_bdevs_operational": 2, 00:15:50.437 "base_bdevs_list": [ 00:15:50.437 { 
00:15:50.437 "name": "BaseBdev1", 00:15:50.437 "uuid": "bae4c0b4-9233-5e0f-af28-3e49d3d1e9c0", 00:15:50.437 "is_configured": true, 00:15:50.437 "data_offset": 2048, 00:15:50.437 "data_size": 63488 00:15:50.437 }, 00:15:50.437 { 00:15:50.437 "name": "BaseBdev2", 00:15:50.437 "uuid": "32d0fdf9-0b91-5bde-b76d-097da9a1571a", 00:15:50.437 "is_configured": true, 00:15:50.437 "data_offset": 2048, 00:15:50.437 "data_size": 63488 00:15:50.437 } 00:15:50.437 ] 00:15:50.437 }' 00:15:50.437 15:09:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.437 15:09:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.694 15:09:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:50.694 15:09:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:50.694 [2024-07-23 15:09:46.121595] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:15:51.629 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:52.197 "name": "raid_bdev1", 00:15:52.197 "uuid": "63f612d7-d2cd-4f89-8181-cf481f45a71b", 00:15:52.197 "strip_size_kb": 64, 00:15:52.197 "state": "online", 00:15:52.197 "raid_level": "concat", 00:15:52.197 "superblock": true, 00:15:52.197 "num_base_bdevs": 2, 00:15:52.197 "num_base_bdevs_discovered": 2, 00:15:52.197 "num_base_bdevs_operational": 2, 00:15:52.197 "base_bdevs_list": [ 00:15:52.197 { 
00:15:52.197 "name": "BaseBdev1", 00:15:52.197 "uuid": "bae4c0b4-9233-5e0f-af28-3e49d3d1e9c0", 00:15:52.197 "is_configured": true, 00:15:52.197 "data_offset": 2048, 00:15:52.197 "data_size": 63488 00:15:52.197 }, 00:15:52.197 { 00:15:52.197 "name": "BaseBdev2", 00:15:52.197 "uuid": "32d0fdf9-0b91-5bde-b76d-097da9a1571a", 00:15:52.197 "is_configured": true, 00:15:52.197 "data_offset": 2048, 00:15:52.197 "data_size": 63488 00:15:52.197 } 00:15:52.197 ] 00:15:52.197 }' 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:52.197 15:09:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.457 15:09:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:52.716 [2024-07-23 15:09:48.055596] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:52.716 [2024-07-23 15:09:48.055860] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.716 0 00:15:52.716 [2024-07-23 15:09:48.058326] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.716 [2024-07-23 15:09:48.058369] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.716 [2024-07-23 15:09:48.058404] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.716 [2024-07-23 15:09:48.058415] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name raid_bdev1, state offline 00:15:52.716 15:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 89730 00:15:52.716 15:09:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 89730 ']' 00:15:52.716 15:09:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 89730 00:15:52.716 15:09:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:52.716 15:09:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.716 15:09:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89730 00:15:52.716 killing process with pid 89730 00:15:52.716 15:09:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:52.716 15:09:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:52.716 15:09:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89730' 00:15:52.716 15:09:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 89730 00:15:52.716 [2024-07-23 15:09:48.108263] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.716 15:09:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 89730 00:15:52.716 [2024-07-23 15:09:48.123603] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.975 15:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.0hb7vy9v5g 00:15:52.975 15:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:52.975 15:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:52.975 15:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 
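For reference, the write-error flow the trace above just finished can be reduced to a short RPC sequence against the test target. This is a minimal sketch, assuming an SPDK app is already listening on /var/tmp/spdk-raid.sock and that the error-injectable base bdevs (BaseBdev1 behind an error bdev named EE_BaseBdev1_malloc, plus BaseBdev2) have been set up as in the test; paths are relative to the SPDK repo root and the sequence is illustrative rather than the harness's exact code.

  # build the 64 KiB-strip concat raid (with superblock) on the two base bdevs
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
  # confirm it came up online with both base bdevs configured
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
  # queue a write failure on the error bdev under BaseBdev1, then drive I/O at the raid
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
  # the test then checks that the raid_bdev1 fail-per-second column in the bdevperf
  # output is non-zero (0.52 in the run above) before deleting the raid
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1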
00:15:52.975 15:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:52.975 ************************************ 00:15:52.975 END TEST raid_write_error_test 00:15:52.975 ************************************ 00:15:52.975 15:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:52.975 15:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:52.975 15:09:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:15:52.975 00:15:52.975 real 0m5.320s 00:15:52.975 user 0m7.904s 00:15:52.975 sys 0m0.929s 00:15:52.975 15:09:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:52.975 15:09:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.233 15:09:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:53.233 15:09:48 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:53.233 15:09:48 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:53.233 15:09:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:53.233 15:09:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.233 15:09:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.233 ************************************ 00:15:53.233 START TEST raid_state_function_test 00:15:53.233 ************************************ 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:53.233 15:09:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:53.233 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:53.234 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:53.234 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:53.234 Process raid pid: 89888 00:15:53.234 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=89888 00:15:53.234 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 89888' 00:15:53.234 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 89888 /var/tmp/spdk-raid.sock 00:15:53.234 15:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 89888 ']' 00:15:53.234 15:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:53.234 15:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:53.234 15:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.234 15:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:53.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:53.234 15:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.234 15:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.234 [2024-07-23 15:09:48.509708] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
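The waitforlisten step above is a helper from test/common/autotest_common.sh; outside the harness, the same bring-up can be approximated by starting bdev_svc and polling its RPC socket until it answers. A minimal sketch under that assumption; the polling loop is an illustrative stand-in for the helper, not its actual implementation.

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # wait for the UNIX-domain RPC socket to answer before issuing any bdev_raid_* RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done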
00:15:53.234 [2024-07-23 15:09:48.509918] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.234 [2024-07-23 15:09:48.661221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.492 [2024-07-23 15:09:48.708272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.492 [2024-07-23 15:09:48.752606] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.060 15:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.060 15:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:15:54.060 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:54.318 [2024-07-23 15:09:49.662181] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.318 [2024-07-23 15:09:49.662252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.318 [2024-07-23 15:09:49.662264] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.318 [2024-07-23 15:09:49.662278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.318 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:54.318 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:54.318 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:54.318 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:54.318 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:54.318 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:54.318 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:54.318 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:54.318 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:54.318 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:54.318 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.318 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.596 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:54.596 "name": "Existed_Raid", 00:15:54.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.596 "strip_size_kb": 0, 00:15:54.596 "state": "configuring", 00:15:54.596 "raid_level": "raid1", 00:15:54.596 "superblock": false, 00:15:54.596 "num_base_bdevs": 2, 00:15:54.596 "num_base_bdevs_discovered": 0, 00:15:54.596 "num_base_bdevs_operational": 2, 00:15:54.596 "base_bdevs_list": [ 
00:15:54.596 { 00:15:54.596 "name": "BaseBdev1", 00:15:54.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.596 "is_configured": false, 00:15:54.596 "data_offset": 0, 00:15:54.596 "data_size": 0 00:15:54.596 }, 00:15:54.596 { 00:15:54.596 "name": "BaseBdev2", 00:15:54.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.596 "is_configured": false, 00:15:54.596 "data_offset": 0, 00:15:54.596 "data_size": 0 00:15:54.596 } 00:15:54.596 ] 00:15:54.596 }' 00:15:54.596 15:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:54.596 15:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.855 15:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:55.113 [2024-07-23 15:09:50.366230] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.113 [2024-07-23 15:09:50.366448] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:15:55.113 15:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:55.371 [2024-07-23 15:09:50.578297] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.371 [2024-07-23 15:09:50.578518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:55.371 [2024-07-23 15:09:50.578540] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.371 [2024-07-23 15:09:50.578555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.371 15:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:55.371 [2024-07-23 15:09:50.768151] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.371 BaseBdev1 00:15:55.371 15:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:55.371 15:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:55.371 15:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:55.371 15:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:55.371 15:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:55.371 15:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:55.371 15:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:55.629 15:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:55.888 [ 00:15:55.888 { 00:15:55.888 "name": "BaseBdev1", 00:15:55.888 "aliases": [ 00:15:55.888 "86d1a619-0078-407a-9f34-133ac35efc63" 00:15:55.888 ], 00:15:55.888 "product_name": "Malloc disk", 00:15:55.888 "block_size": 512, 00:15:55.888 "num_blocks": 
65536, 00:15:55.888 "uuid": "86d1a619-0078-407a-9f34-133ac35efc63", 00:15:55.888 "assigned_rate_limits": { 00:15:55.888 "rw_ios_per_sec": 0, 00:15:55.888 "rw_mbytes_per_sec": 0, 00:15:55.888 "r_mbytes_per_sec": 0, 00:15:55.888 "w_mbytes_per_sec": 0 00:15:55.888 }, 00:15:55.888 "claimed": true, 00:15:55.888 "claim_type": "exclusive_write", 00:15:55.888 "zoned": false, 00:15:55.888 "supported_io_types": { 00:15:55.888 "read": true, 00:15:55.888 "write": true, 00:15:55.888 "unmap": true, 00:15:55.888 "flush": true, 00:15:55.888 "reset": true, 00:15:55.888 "nvme_admin": false, 00:15:55.888 "nvme_io": false, 00:15:55.888 "nvme_io_md": false, 00:15:55.888 "write_zeroes": true, 00:15:55.888 "zcopy": true, 00:15:55.888 "get_zone_info": false, 00:15:55.888 "zone_management": false, 00:15:55.888 "zone_append": false, 00:15:55.888 "compare": false, 00:15:55.888 "compare_and_write": false, 00:15:55.888 "abort": true, 00:15:55.888 "seek_hole": false, 00:15:55.888 "seek_data": false, 00:15:55.888 "copy": true, 00:15:55.888 "nvme_iov_md": false 00:15:55.888 }, 00:15:55.888 "memory_domains": [ 00:15:55.888 { 00:15:55.888 "dma_device_id": "system", 00:15:55.888 "dma_device_type": 1 00:15:55.888 }, 00:15:55.888 { 00:15:55.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.888 "dma_device_type": 2 00:15:55.888 } 00:15:55.888 ], 00:15:55.888 "driver_specific": {} 00:15:55.888 } 00:15:55.888 ] 00:15:55.888 15:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:55.888 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:55.888 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:55.888 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:55.888 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:55.888 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:55.888 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:55.888 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:55.888 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:55.888 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:55.888 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:55.888 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.888 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.147 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:56.147 "name": "Existed_Raid", 00:15:56.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.147 "strip_size_kb": 0, 00:15:56.147 "state": "configuring", 00:15:56.147 "raid_level": "raid1", 00:15:56.147 "superblock": false, 00:15:56.147 "num_base_bdevs": 2, 00:15:56.147 "num_base_bdevs_discovered": 1, 00:15:56.147 "num_base_bdevs_operational": 2, 00:15:56.147 "base_bdevs_list": [ 00:15:56.147 { 00:15:56.147 "name": "BaseBdev1", 00:15:56.147 "uuid": 
"86d1a619-0078-407a-9f34-133ac35efc63", 00:15:56.147 "is_configured": true, 00:15:56.147 "data_offset": 0, 00:15:56.147 "data_size": 65536 00:15:56.147 }, 00:15:56.147 { 00:15:56.147 "name": "BaseBdev2", 00:15:56.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.147 "is_configured": false, 00:15:56.147 "data_offset": 0, 00:15:56.147 "data_size": 0 00:15:56.147 } 00:15:56.147 ] 00:15:56.147 }' 00:15:56.147 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:56.147 15:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.405 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:56.664 [2024-07-23 15:09:51.872501] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:56.664 [2024-07-23 15:09:51.872760] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:15:56.664 15:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:56.664 [2024-07-23 15:09:52.052609] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.664 [2024-07-23 15:09:52.055102] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:56.664 [2024-07-23 15:09:52.055159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:56.664 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:56.664 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:56.664 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:56.664 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:56.664 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:56.664 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:56.664 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:56.664 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:56.664 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:56.664 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:56.664 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:56.664 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:56.664 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.664 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.922 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:56.922 "name": "Existed_Raid", 00:15:56.922 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:56.922 "strip_size_kb": 0, 00:15:56.922 "state": "configuring", 00:15:56.922 "raid_level": "raid1", 00:15:56.922 "superblock": false, 00:15:56.922 "num_base_bdevs": 2, 00:15:56.922 "num_base_bdevs_discovered": 1, 00:15:56.922 "num_base_bdevs_operational": 2, 00:15:56.922 "base_bdevs_list": [ 00:15:56.922 { 00:15:56.922 "name": "BaseBdev1", 00:15:56.922 "uuid": "86d1a619-0078-407a-9f34-133ac35efc63", 00:15:56.922 "is_configured": true, 00:15:56.922 "data_offset": 0, 00:15:56.922 "data_size": 65536 00:15:56.922 }, 00:15:56.922 { 00:15:56.922 "name": "BaseBdev2", 00:15:56.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.922 "is_configured": false, 00:15:56.922 "data_offset": 0, 00:15:56.922 "data_size": 0 00:15:56.922 } 00:15:56.922 ] 00:15:56.922 }' 00:15:56.922 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:56.922 15:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.181 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:57.439 [2024-07-23 15:09:52.841782] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.439 [2024-07-23 15:09:52.842143] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:15:57.439 [2024-07-23 15:09:52.842221] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:57.439 [2024-07-23 15:09:52.842547] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:15:57.439 [2024-07-23 15:09:52.843178] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:15:57.439 [2024-07-23 15:09:52.843352] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:15:57.439 [2024-07-23 15:09:52.843778] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.439 BaseBdev2 00:15:57.439 15:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:57.439 15:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:57.439 15:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:57.439 15:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:57.439 15:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:57.439 15:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:57.439 15:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:57.698 15:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:57.956 [ 00:15:57.956 { 00:15:57.956 "name": "BaseBdev2", 00:15:57.956 "aliases": [ 00:15:57.956 "07634489-f8ea-42a0-9acd-1317ef0c9b21" 00:15:57.956 ], 00:15:57.956 "product_name": "Malloc disk", 00:15:57.956 "block_size": 512, 00:15:57.956 "num_blocks": 65536, 00:15:57.956 "uuid": "07634489-f8ea-42a0-9acd-1317ef0c9b21", 00:15:57.956 
"assigned_rate_limits": { 00:15:57.956 "rw_ios_per_sec": 0, 00:15:57.957 "rw_mbytes_per_sec": 0, 00:15:57.957 "r_mbytes_per_sec": 0, 00:15:57.957 "w_mbytes_per_sec": 0 00:15:57.957 }, 00:15:57.957 "claimed": true, 00:15:57.957 "claim_type": "exclusive_write", 00:15:57.957 "zoned": false, 00:15:57.957 "supported_io_types": { 00:15:57.957 "read": true, 00:15:57.957 "write": true, 00:15:57.957 "unmap": true, 00:15:57.957 "flush": true, 00:15:57.957 "reset": true, 00:15:57.957 "nvme_admin": false, 00:15:57.957 "nvme_io": false, 00:15:57.957 "nvme_io_md": false, 00:15:57.957 "write_zeroes": true, 00:15:57.957 "zcopy": true, 00:15:57.957 "get_zone_info": false, 00:15:57.957 "zone_management": false, 00:15:57.957 "zone_append": false, 00:15:57.957 "compare": false, 00:15:57.957 "compare_and_write": false, 00:15:57.957 "abort": true, 00:15:57.957 "seek_hole": false, 00:15:57.957 "seek_data": false, 00:15:57.957 "copy": true, 00:15:57.957 "nvme_iov_md": false 00:15:57.957 }, 00:15:57.957 "memory_domains": [ 00:15:57.957 { 00:15:57.957 "dma_device_id": "system", 00:15:57.957 "dma_device_type": 1 00:15:57.957 }, 00:15:57.957 { 00:15:57.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.957 "dma_device_type": 2 00:15:57.957 } 00:15:57.957 ], 00:15:57.957 "driver_specific": {} 00:15:57.957 } 00:15:57.957 ] 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.957 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.215 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:58.215 "name": "Existed_Raid", 00:15:58.215 "uuid": "84eaebdd-2ef7-46ae-b92b-3c0d5b32f224", 00:15:58.215 "strip_size_kb": 0, 00:15:58.215 "state": "online", 00:15:58.215 "raid_level": "raid1", 00:15:58.215 "superblock": false, 00:15:58.215 "num_base_bdevs": 2, 00:15:58.215 "num_base_bdevs_discovered": 2, 00:15:58.215 "num_base_bdevs_operational": 
2, 00:15:58.215 "base_bdevs_list": [ 00:15:58.215 { 00:15:58.215 "name": "BaseBdev1", 00:15:58.215 "uuid": "86d1a619-0078-407a-9f34-133ac35efc63", 00:15:58.215 "is_configured": true, 00:15:58.215 "data_offset": 0, 00:15:58.215 "data_size": 65536 00:15:58.215 }, 00:15:58.215 { 00:15:58.215 "name": "BaseBdev2", 00:15:58.215 "uuid": "07634489-f8ea-42a0-9acd-1317ef0c9b21", 00:15:58.215 "is_configured": true, 00:15:58.215 "data_offset": 0, 00:15:58.215 "data_size": 65536 00:15:58.215 } 00:15:58.215 ] 00:15:58.215 }' 00:15:58.215 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:58.215 15:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.473 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:58.473 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:58.473 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:58.473 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:58.473 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:58.473 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:58.473 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:58.473 15:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:58.731 [2024-07-23 15:09:54.050434] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.731 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:58.731 "name": "Existed_Raid", 00:15:58.731 "aliases": [ 00:15:58.731 "84eaebdd-2ef7-46ae-b92b-3c0d5b32f224" 00:15:58.731 ], 00:15:58.731 "product_name": "Raid Volume", 00:15:58.731 "block_size": 512, 00:15:58.731 "num_blocks": 65536, 00:15:58.731 "uuid": "84eaebdd-2ef7-46ae-b92b-3c0d5b32f224", 00:15:58.731 "assigned_rate_limits": { 00:15:58.731 "rw_ios_per_sec": 0, 00:15:58.731 "rw_mbytes_per_sec": 0, 00:15:58.731 "r_mbytes_per_sec": 0, 00:15:58.731 "w_mbytes_per_sec": 0 00:15:58.731 }, 00:15:58.731 "claimed": false, 00:15:58.731 "zoned": false, 00:15:58.731 "supported_io_types": { 00:15:58.731 "read": true, 00:15:58.731 "write": true, 00:15:58.731 "unmap": false, 00:15:58.731 "flush": false, 00:15:58.731 "reset": true, 00:15:58.731 "nvme_admin": false, 00:15:58.731 "nvme_io": false, 00:15:58.731 "nvme_io_md": false, 00:15:58.731 "write_zeroes": true, 00:15:58.731 "zcopy": false, 00:15:58.731 "get_zone_info": false, 00:15:58.731 "zone_management": false, 00:15:58.731 "zone_append": false, 00:15:58.731 "compare": false, 00:15:58.731 "compare_and_write": false, 00:15:58.731 "abort": false, 00:15:58.731 "seek_hole": false, 00:15:58.731 "seek_data": false, 00:15:58.731 "copy": false, 00:15:58.731 "nvme_iov_md": false 00:15:58.731 }, 00:15:58.731 "memory_domains": [ 00:15:58.731 { 00:15:58.731 "dma_device_id": "system", 00:15:58.731 "dma_device_type": 1 00:15:58.731 }, 00:15:58.731 { 00:15:58.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.731 "dma_device_type": 2 00:15:58.731 }, 00:15:58.731 { 00:15:58.731 "dma_device_id": "system", 00:15:58.731 "dma_device_type": 1 00:15:58.731 }, 00:15:58.731 { 00:15:58.731 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.731 "dma_device_type": 2 00:15:58.731 } 00:15:58.731 ], 00:15:58.731 "driver_specific": { 00:15:58.731 "raid": { 00:15:58.731 "uuid": "84eaebdd-2ef7-46ae-b92b-3c0d5b32f224", 00:15:58.731 "strip_size_kb": 0, 00:15:58.731 "state": "online", 00:15:58.731 "raid_level": "raid1", 00:15:58.731 "superblock": false, 00:15:58.731 "num_base_bdevs": 2, 00:15:58.731 "num_base_bdevs_discovered": 2, 00:15:58.731 "num_base_bdevs_operational": 2, 00:15:58.731 "base_bdevs_list": [ 00:15:58.731 { 00:15:58.731 "name": "BaseBdev1", 00:15:58.731 "uuid": "86d1a619-0078-407a-9f34-133ac35efc63", 00:15:58.731 "is_configured": true, 00:15:58.731 "data_offset": 0, 00:15:58.731 "data_size": 65536 00:15:58.731 }, 00:15:58.731 { 00:15:58.731 "name": "BaseBdev2", 00:15:58.731 "uuid": "07634489-f8ea-42a0-9acd-1317ef0c9b21", 00:15:58.731 "is_configured": true, 00:15:58.731 "data_offset": 0, 00:15:58.731 "data_size": 65536 00:15:58.731 } 00:15:58.731 ] 00:15:58.731 } 00:15:58.731 } 00:15:58.731 }' 00:15:58.731 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.731 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:58.731 BaseBdev2' 00:15:58.731 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:58.731 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:58.731 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:58.990 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:58.990 "name": "BaseBdev1", 00:15:58.990 "aliases": [ 00:15:58.990 "86d1a619-0078-407a-9f34-133ac35efc63" 00:15:58.990 ], 00:15:58.990 "product_name": "Malloc disk", 00:15:58.990 "block_size": 512, 00:15:58.990 "num_blocks": 65536, 00:15:58.990 "uuid": "86d1a619-0078-407a-9f34-133ac35efc63", 00:15:58.990 "assigned_rate_limits": { 00:15:58.990 "rw_ios_per_sec": 0, 00:15:58.990 "rw_mbytes_per_sec": 0, 00:15:58.990 "r_mbytes_per_sec": 0, 00:15:58.990 "w_mbytes_per_sec": 0 00:15:58.990 }, 00:15:58.990 "claimed": true, 00:15:58.990 "claim_type": "exclusive_write", 00:15:58.990 "zoned": false, 00:15:58.990 "supported_io_types": { 00:15:58.990 "read": true, 00:15:58.990 "write": true, 00:15:58.990 "unmap": true, 00:15:58.990 "flush": true, 00:15:58.990 "reset": true, 00:15:58.990 "nvme_admin": false, 00:15:58.990 "nvme_io": false, 00:15:58.990 "nvme_io_md": false, 00:15:58.990 "write_zeroes": true, 00:15:58.990 "zcopy": true, 00:15:58.990 "get_zone_info": false, 00:15:58.990 "zone_management": false, 00:15:58.990 "zone_append": false, 00:15:58.990 "compare": false, 00:15:58.990 "compare_and_write": false, 00:15:58.990 "abort": true, 00:15:58.990 "seek_hole": false, 00:15:58.990 "seek_data": false, 00:15:58.990 "copy": true, 00:15:58.990 "nvme_iov_md": false 00:15:58.990 }, 00:15:58.990 "memory_domains": [ 00:15:58.990 { 00:15:58.990 "dma_device_id": "system", 00:15:58.990 "dma_device_type": 1 00:15:58.990 }, 00:15:58.990 { 00:15:58.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.990 "dma_device_type": 2 00:15:58.990 } 00:15:58.990 ], 00:15:58.990 "driver_specific": {} 00:15:58.990 }' 00:15:58.990 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:15:58.990 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.990 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:58.990 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.990 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.990 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:58.990 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.990 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.990 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:59.249 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:59.249 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:59.249 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:59.249 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:59.249 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:59.249 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:59.507 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:59.507 "name": "BaseBdev2", 00:15:59.507 "aliases": [ 00:15:59.507 "07634489-f8ea-42a0-9acd-1317ef0c9b21" 00:15:59.507 ], 00:15:59.507 "product_name": "Malloc disk", 00:15:59.507 "block_size": 512, 00:15:59.507 "num_blocks": 65536, 00:15:59.507 "uuid": "07634489-f8ea-42a0-9acd-1317ef0c9b21", 00:15:59.507 "assigned_rate_limits": { 00:15:59.507 "rw_ios_per_sec": 0, 00:15:59.507 "rw_mbytes_per_sec": 0, 00:15:59.507 "r_mbytes_per_sec": 0, 00:15:59.507 "w_mbytes_per_sec": 0 00:15:59.507 }, 00:15:59.507 "claimed": true, 00:15:59.507 "claim_type": "exclusive_write", 00:15:59.507 "zoned": false, 00:15:59.507 "supported_io_types": { 00:15:59.507 "read": true, 00:15:59.507 "write": true, 00:15:59.508 "unmap": true, 00:15:59.508 "flush": true, 00:15:59.508 "reset": true, 00:15:59.508 "nvme_admin": false, 00:15:59.508 "nvme_io": false, 00:15:59.508 "nvme_io_md": false, 00:15:59.508 "write_zeroes": true, 00:15:59.508 "zcopy": true, 00:15:59.508 "get_zone_info": false, 00:15:59.508 "zone_management": false, 00:15:59.508 "zone_append": false, 00:15:59.508 "compare": false, 00:15:59.508 "compare_and_write": false, 00:15:59.508 "abort": true, 00:15:59.508 "seek_hole": false, 00:15:59.508 "seek_data": false, 00:15:59.508 "copy": true, 00:15:59.508 "nvme_iov_md": false 00:15:59.508 }, 00:15:59.508 "memory_domains": [ 00:15:59.508 { 00:15:59.508 "dma_device_id": "system", 00:15:59.508 "dma_device_type": 1 00:15:59.508 }, 00:15:59.508 { 00:15:59.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.508 "dma_device_type": 2 00:15:59.508 } 00:15:59.508 ], 00:15:59.508 "driver_specific": {} 00:15:59.508 }' 00:15:59.508 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:59.508 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:59.508 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
[[ 512 == 512 ]] 00:15:59.508 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:59.508 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:59.508 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:59.508 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:59.508 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:59.508 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:59.508 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:59.508 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:59.508 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:59.508 15:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:59.766 [2024-07-23 15:09:55.058514] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.766 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.024 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.024 "name": "Existed_Raid", 00:16:00.024 "uuid": "84eaebdd-2ef7-46ae-b92b-3c0d5b32f224", 00:16:00.024 "strip_size_kb": 0, 00:16:00.024 "state": "online", 00:16:00.024 "raid_level": "raid1", 00:16:00.024 "superblock": false, 
00:16:00.024 "num_base_bdevs": 2, 00:16:00.024 "num_base_bdevs_discovered": 1, 00:16:00.024 "num_base_bdevs_operational": 1, 00:16:00.024 "base_bdevs_list": [ 00:16:00.024 { 00:16:00.024 "name": null, 00:16:00.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.024 "is_configured": false, 00:16:00.024 "data_offset": 0, 00:16:00.024 "data_size": 65536 00:16:00.024 }, 00:16:00.024 { 00:16:00.024 "name": "BaseBdev2", 00:16:00.024 "uuid": "07634489-f8ea-42a0-9acd-1317ef0c9b21", 00:16:00.024 "is_configured": true, 00:16:00.024 "data_offset": 0, 00:16:00.024 "data_size": 65536 00:16:00.024 } 00:16:00.024 ] 00:16:00.024 }' 00:16:00.024 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.024 15:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.283 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:00.283 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:00.283 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.283 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:00.540 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:00.540 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.540 15:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:00.798 [2024-07-23 15:09:56.083132] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:00.798 [2024-07-23 15:09:56.083249] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.798 [2024-07-23 15:09:56.095881] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.798 [2024-07-23 15:09:56.095936] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.798 [2024-07-23 15:09:56.095952] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:16:00.798 15:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:00.798 15:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:00.798 15:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.798 15:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:01.057 15:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:01.057 15:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:01.057 15:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:01.057 15:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 89888 00:16:01.057 15:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 89888 ']' 00:16:01.057 15:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # 
kill -0 89888 00:16:01.057 15:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:01.057 15:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.057 15:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89888 00:16:01.057 killing process with pid 89888 00:16:01.057 15:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:01.057 15:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:01.057 15:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89888' 00:16:01.057 15:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 89888 00:16:01.057 [2024-07-23 15:09:56.334212] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.057 15:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 89888 00:16:01.057 [2024-07-23 15:09:56.334304] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:01.316 00:16:01.316 real 0m8.141s 00:16:01.316 user 0m13.682s 00:16:01.316 sys 0m1.729s 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:01.316 ************************************ 00:16:01.316 END TEST raid_state_function_test 00:16:01.316 ************************************ 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.316 15:09:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:01.316 15:09:56 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:16:01.316 15:09:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:01.316 15:09:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.316 15:09:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.316 ************************************ 00:16:01.316 START TEST raid_state_function_test_sb 00:16:01.316 ************************************ 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:01.316 Process raid pid: 90212 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=90212 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 90212' 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 90212 /var/tmp/spdk-raid.sock 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 90212 ']' 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:01.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:01.316 15:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.317 15:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:01.317 15:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:01.317 15:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.317 15:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.317 [2024-07-23 15:09:56.718747] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
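The verify_raid_bdev_state traces that recur through these tests (bdev/bdev_raid.sh@116-128) boil down to fetching the raid's JSON and comparing a handful of fields. A rough stand-alone equivalent is sketched below, assuming the same RPC socket; the expected values mirror the arguments seen at the call sites (they change from call to call), and the checks are illustrative rather than the helper's exact code.

  # expected values vary per call site; these mirror the helper's locals in the trace
  raid_bdev_name=Existed_Raid
  expected_state=configuring
  raid_level=raid1
  strip_size=0
  num_base_bdevs_operational=2
  raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
      jq -r ".[] | select(.name == \"$raid_bdev_name\")")
  [[ $(jq -r .state <<<"$raid_bdev_info") == "$expected_state" ]]
  [[ $(jq -r .raid_level <<<"$raid_bdev_info") == "$raid_level" ]]
  [[ $(jq -r .strip_size_kb <<<"$raid_bdev_info") == "$strip_size" ]]
  [[ $(jq -r .num_base_bdevs_operational <<<"$raid_bdev_info") == "$num_base_bdevs_operational" ]]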
00:16:01.317 [2024-07-23 15:09:56.718937] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.575 [2024-07-23 15:09:56.870830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.575 [2024-07-23 15:09:56.915829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.575 [2024-07-23 15:09:56.960090] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:02.537 [2024-07-23 15:09:57.817569] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:02.537 [2024-07-23 15:09:57.817640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:02.537 [2024-07-23 15:09:57.817652] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.537 [2024-07-23 15:09:57.817684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.537 15:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.795 15:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.795 "name": "Existed_Raid", 00:16:02.795 "uuid": "237349bc-db59-44e0-acfa-b6e5a46cd983", 00:16:02.795 "strip_size_kb": 0, 00:16:02.795 "state": "configuring", 00:16:02.795 "raid_level": "raid1", 00:16:02.795 "superblock": true, 00:16:02.795 "num_base_bdevs": 2, 00:16:02.795 "num_base_bdevs_discovered": 0, 00:16:02.795 
"num_base_bdevs_operational": 2, 00:16:02.796 "base_bdevs_list": [ 00:16:02.796 { 00:16:02.796 "name": "BaseBdev1", 00:16:02.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.796 "is_configured": false, 00:16:02.796 "data_offset": 0, 00:16:02.796 "data_size": 0 00:16:02.796 }, 00:16:02.796 { 00:16:02.796 "name": "BaseBdev2", 00:16:02.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.796 "is_configured": false, 00:16:02.796 "data_offset": 0, 00:16:02.796 "data_size": 0 00:16:02.796 } 00:16:02.796 ] 00:16:02.796 }' 00:16:02.796 15:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.796 15:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.053 15:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:03.310 [2024-07-23 15:09:58.509597] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:03.310 [2024-07-23 15:09:58.509651] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:16:03.310 15:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:03.568 [2024-07-23 15:09:58.773683] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:03.568 [2024-07-23 15:09:58.773751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:03.568 [2024-07-23 15:09:58.773762] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.568 [2024-07-23 15:09:58.773777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.568 15:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:03.568 [2024-07-23 15:09:58.959358] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.568 BaseBdev1 00:16:03.568 15:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:03.568 15:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:03.568 15:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:03.568 15:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:03.568 15:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:03.568 15:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:03.568 15:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:03.826 15:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:04.085 [ 00:16:04.085 { 00:16:04.085 "name": "BaseBdev1", 00:16:04.085 "aliases": [ 00:16:04.085 "ab86ae37-b709-46ff-a5c0-79d34c479b91" 
00:16:04.085 ], 00:16:04.085 "product_name": "Malloc disk", 00:16:04.085 "block_size": 512, 00:16:04.085 "num_blocks": 65536, 00:16:04.085 "uuid": "ab86ae37-b709-46ff-a5c0-79d34c479b91", 00:16:04.085 "assigned_rate_limits": { 00:16:04.085 "rw_ios_per_sec": 0, 00:16:04.085 "rw_mbytes_per_sec": 0, 00:16:04.085 "r_mbytes_per_sec": 0, 00:16:04.085 "w_mbytes_per_sec": 0 00:16:04.085 }, 00:16:04.085 "claimed": true, 00:16:04.085 "claim_type": "exclusive_write", 00:16:04.085 "zoned": false, 00:16:04.085 "supported_io_types": { 00:16:04.085 "read": true, 00:16:04.085 "write": true, 00:16:04.085 "unmap": true, 00:16:04.085 "flush": true, 00:16:04.085 "reset": true, 00:16:04.085 "nvme_admin": false, 00:16:04.085 "nvme_io": false, 00:16:04.085 "nvme_io_md": false, 00:16:04.085 "write_zeroes": true, 00:16:04.085 "zcopy": true, 00:16:04.085 "get_zone_info": false, 00:16:04.085 "zone_management": false, 00:16:04.085 "zone_append": false, 00:16:04.085 "compare": false, 00:16:04.085 "compare_and_write": false, 00:16:04.085 "abort": true, 00:16:04.085 "seek_hole": false, 00:16:04.085 "seek_data": false, 00:16:04.085 "copy": true, 00:16:04.085 "nvme_iov_md": false 00:16:04.085 }, 00:16:04.085 "memory_domains": [ 00:16:04.085 { 00:16:04.085 "dma_device_id": "system", 00:16:04.085 "dma_device_type": 1 00:16:04.085 }, 00:16:04.085 { 00:16:04.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.085 "dma_device_type": 2 00:16:04.085 } 00:16:04.085 ], 00:16:04.085 "driver_specific": {} 00:16:04.085 } 00:16:04.085 ] 00:16:04.085 15:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:04.085 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:04.085 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:04.085 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:04.085 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:04.085 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:04.085 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:04.085 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.085 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.085 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.085 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.085 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.085 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.344 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.344 "name": "Existed_Raid", 00:16:04.344 "uuid": "a975d4f6-a1b6-4a84-87c4-464b7804cde0", 00:16:04.344 "strip_size_kb": 0, 00:16:04.344 "state": "configuring", 00:16:04.344 "raid_level": "raid1", 00:16:04.344 "superblock": true, 00:16:04.344 "num_base_bdevs": 2, 00:16:04.344 "num_base_bdevs_discovered": 
1, 00:16:04.344 "num_base_bdevs_operational": 2, 00:16:04.344 "base_bdevs_list": [ 00:16:04.344 { 00:16:04.344 "name": "BaseBdev1", 00:16:04.344 "uuid": "ab86ae37-b709-46ff-a5c0-79d34c479b91", 00:16:04.344 "is_configured": true, 00:16:04.344 "data_offset": 2048, 00:16:04.344 "data_size": 63488 00:16:04.344 }, 00:16:04.344 { 00:16:04.344 "name": "BaseBdev2", 00:16:04.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.344 "is_configured": false, 00:16:04.344 "data_offset": 0, 00:16:04.344 "data_size": 0 00:16:04.344 } 00:16:04.344 ] 00:16:04.344 }' 00:16:04.344 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.344 15:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.602 15:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:04.859 [2024-07-23 15:10:00.163722] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.859 [2024-07-23 15:10:00.164015] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:16:04.859 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:05.117 [2024-07-23 15:10:00.343906] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.117 [2024-07-23 15:10:00.346244] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.117 [2024-07-23 15:10:00.346295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.117 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:05.117 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:05.117 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:05.117 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:05.117 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:05.117 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:05.117 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:05.117 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:05.117 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.117 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:05.118 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:05.118 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.118 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.118 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:16:05.376 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:05.376 "name": "Existed_Raid", 00:16:05.376 "uuid": "3d0efef8-cae0-456d-96d5-3aeb146e332c", 00:16:05.376 "strip_size_kb": 0, 00:16:05.376 "state": "configuring", 00:16:05.376 "raid_level": "raid1", 00:16:05.376 "superblock": true, 00:16:05.376 "num_base_bdevs": 2, 00:16:05.376 "num_base_bdevs_discovered": 1, 00:16:05.376 "num_base_bdevs_operational": 2, 00:16:05.376 "base_bdevs_list": [ 00:16:05.376 { 00:16:05.376 "name": "BaseBdev1", 00:16:05.376 "uuid": "ab86ae37-b709-46ff-a5c0-79d34c479b91", 00:16:05.376 "is_configured": true, 00:16:05.376 "data_offset": 2048, 00:16:05.376 "data_size": 63488 00:16:05.376 }, 00:16:05.376 { 00:16:05.376 "name": "BaseBdev2", 00:16:05.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.376 "is_configured": false, 00:16:05.376 "data_offset": 0, 00:16:05.376 "data_size": 0 00:16:05.376 } 00:16:05.376 ] 00:16:05.376 }' 00:16:05.376 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:05.376 15:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.635 15:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:05.893 [2024-07-23 15:10:01.137746] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:05.893 [2024-07-23 15:10:01.138340] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:16:05.893 [2024-07-23 15:10:01.138518] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:05.893 [2024-07-23 15:10:01.138730] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:16:05.893 BaseBdev2 00:16:05.893 [2024-07-23 15:10:01.139352] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:16:05.893 [2024-07-23 15:10:01.139511] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:16:05.893 [2024-07-23 15:10:01.139716] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.893 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:05.893 15:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:05.893 15:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:05.893 15:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:05.893 15:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:05.893 15:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:05.893 15:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:06.151 [ 00:16:06.151 { 00:16:06.151 "name": "BaseBdev2", 00:16:06.151 "aliases": [ 00:16:06.151 
"d4b82ca6-ad6c-4925-b5e1-4ea19830429c" 00:16:06.151 ], 00:16:06.151 "product_name": "Malloc disk", 00:16:06.151 "block_size": 512, 00:16:06.151 "num_blocks": 65536, 00:16:06.151 "uuid": "d4b82ca6-ad6c-4925-b5e1-4ea19830429c", 00:16:06.151 "assigned_rate_limits": { 00:16:06.151 "rw_ios_per_sec": 0, 00:16:06.151 "rw_mbytes_per_sec": 0, 00:16:06.151 "r_mbytes_per_sec": 0, 00:16:06.151 "w_mbytes_per_sec": 0 00:16:06.151 }, 00:16:06.151 "claimed": true, 00:16:06.151 "claim_type": "exclusive_write", 00:16:06.151 "zoned": false, 00:16:06.151 "supported_io_types": { 00:16:06.151 "read": true, 00:16:06.151 "write": true, 00:16:06.151 "unmap": true, 00:16:06.151 "flush": true, 00:16:06.151 "reset": true, 00:16:06.151 "nvme_admin": false, 00:16:06.151 "nvme_io": false, 00:16:06.151 "nvme_io_md": false, 00:16:06.151 "write_zeroes": true, 00:16:06.151 "zcopy": true, 00:16:06.151 "get_zone_info": false, 00:16:06.151 "zone_management": false, 00:16:06.151 "zone_append": false, 00:16:06.151 "compare": false, 00:16:06.151 "compare_and_write": false, 00:16:06.151 "abort": true, 00:16:06.151 "seek_hole": false, 00:16:06.151 "seek_data": false, 00:16:06.151 "copy": true, 00:16:06.151 "nvme_iov_md": false 00:16:06.151 }, 00:16:06.151 "memory_domains": [ 00:16:06.151 { 00:16:06.151 "dma_device_id": "system", 00:16:06.151 "dma_device_type": 1 00:16:06.151 }, 00:16:06.151 { 00:16:06.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.151 "dma_device_type": 2 00:16:06.151 } 00:16:06.151 ], 00:16:06.151 "driver_specific": {} 00:16:06.151 } 00:16:06.151 ] 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.151 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.410 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.410 "name": "Existed_Raid", 00:16:06.410 "uuid": 
"3d0efef8-cae0-456d-96d5-3aeb146e332c", 00:16:06.410 "strip_size_kb": 0, 00:16:06.410 "state": "online", 00:16:06.410 "raid_level": "raid1", 00:16:06.410 "superblock": true, 00:16:06.410 "num_base_bdevs": 2, 00:16:06.410 "num_base_bdevs_discovered": 2, 00:16:06.410 "num_base_bdevs_operational": 2, 00:16:06.410 "base_bdevs_list": [ 00:16:06.410 { 00:16:06.410 "name": "BaseBdev1", 00:16:06.410 "uuid": "ab86ae37-b709-46ff-a5c0-79d34c479b91", 00:16:06.410 "is_configured": true, 00:16:06.410 "data_offset": 2048, 00:16:06.410 "data_size": 63488 00:16:06.410 }, 00:16:06.410 { 00:16:06.410 "name": "BaseBdev2", 00:16:06.410 "uuid": "d4b82ca6-ad6c-4925-b5e1-4ea19830429c", 00:16:06.410 "is_configured": true, 00:16:06.410 "data_offset": 2048, 00:16:06.410 "data_size": 63488 00:16:06.410 } 00:16:06.410 ] 00:16:06.410 }' 00:16:06.410 15:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.410 15:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.668 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:06.668 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:06.668 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:06.668 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:06.668 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:06.668 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:06.668 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:06.668 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:06.927 [2024-07-23 15:10:02.302377] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.927 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:06.927 "name": "Existed_Raid", 00:16:06.927 "aliases": [ 00:16:06.927 "3d0efef8-cae0-456d-96d5-3aeb146e332c" 00:16:06.927 ], 00:16:06.927 "product_name": "Raid Volume", 00:16:06.927 "block_size": 512, 00:16:06.927 "num_blocks": 63488, 00:16:06.927 "uuid": "3d0efef8-cae0-456d-96d5-3aeb146e332c", 00:16:06.927 "assigned_rate_limits": { 00:16:06.927 "rw_ios_per_sec": 0, 00:16:06.927 "rw_mbytes_per_sec": 0, 00:16:06.927 "r_mbytes_per_sec": 0, 00:16:06.927 "w_mbytes_per_sec": 0 00:16:06.927 }, 00:16:06.927 "claimed": false, 00:16:06.927 "zoned": false, 00:16:06.927 "supported_io_types": { 00:16:06.927 "read": true, 00:16:06.927 "write": true, 00:16:06.927 "unmap": false, 00:16:06.927 "flush": false, 00:16:06.927 "reset": true, 00:16:06.927 "nvme_admin": false, 00:16:06.927 "nvme_io": false, 00:16:06.927 "nvme_io_md": false, 00:16:06.927 "write_zeroes": true, 00:16:06.927 "zcopy": false, 00:16:06.927 "get_zone_info": false, 00:16:06.927 "zone_management": false, 00:16:06.927 "zone_append": false, 00:16:06.927 "compare": false, 00:16:06.927 "compare_and_write": false, 00:16:06.927 "abort": false, 00:16:06.927 "seek_hole": false, 00:16:06.927 "seek_data": false, 00:16:06.927 "copy": false, 00:16:06.927 "nvme_iov_md": false 00:16:06.927 }, 00:16:06.927 "memory_domains": [ 00:16:06.927 { 00:16:06.927 
"dma_device_id": "system", 00:16:06.927 "dma_device_type": 1 00:16:06.927 }, 00:16:06.927 { 00:16:06.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.927 "dma_device_type": 2 00:16:06.927 }, 00:16:06.927 { 00:16:06.927 "dma_device_id": "system", 00:16:06.927 "dma_device_type": 1 00:16:06.927 }, 00:16:06.927 { 00:16:06.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.927 "dma_device_type": 2 00:16:06.927 } 00:16:06.927 ], 00:16:06.927 "driver_specific": { 00:16:06.927 "raid": { 00:16:06.927 "uuid": "3d0efef8-cae0-456d-96d5-3aeb146e332c", 00:16:06.927 "strip_size_kb": 0, 00:16:06.927 "state": "online", 00:16:06.927 "raid_level": "raid1", 00:16:06.927 "superblock": true, 00:16:06.927 "num_base_bdevs": 2, 00:16:06.927 "num_base_bdevs_discovered": 2, 00:16:06.927 "num_base_bdevs_operational": 2, 00:16:06.927 "base_bdevs_list": [ 00:16:06.927 { 00:16:06.927 "name": "BaseBdev1", 00:16:06.927 "uuid": "ab86ae37-b709-46ff-a5c0-79d34c479b91", 00:16:06.927 "is_configured": true, 00:16:06.927 "data_offset": 2048, 00:16:06.927 "data_size": 63488 00:16:06.927 }, 00:16:06.927 { 00:16:06.927 "name": "BaseBdev2", 00:16:06.927 "uuid": "d4b82ca6-ad6c-4925-b5e1-4ea19830429c", 00:16:06.927 "is_configured": true, 00:16:06.927 "data_offset": 2048, 00:16:06.927 "data_size": 63488 00:16:06.927 } 00:16:06.927 ] 00:16:06.927 } 00:16:06.927 } 00:16:06.927 }' 00:16:06.927 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:06.927 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:06.927 BaseBdev2' 00:16:06.927 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:06.927 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:06.927 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:07.185 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:07.185 "name": "BaseBdev1", 00:16:07.185 "aliases": [ 00:16:07.185 "ab86ae37-b709-46ff-a5c0-79d34c479b91" 00:16:07.185 ], 00:16:07.185 "product_name": "Malloc disk", 00:16:07.185 "block_size": 512, 00:16:07.185 "num_blocks": 65536, 00:16:07.185 "uuid": "ab86ae37-b709-46ff-a5c0-79d34c479b91", 00:16:07.185 "assigned_rate_limits": { 00:16:07.185 "rw_ios_per_sec": 0, 00:16:07.185 "rw_mbytes_per_sec": 0, 00:16:07.185 "r_mbytes_per_sec": 0, 00:16:07.185 "w_mbytes_per_sec": 0 00:16:07.185 }, 00:16:07.185 "claimed": true, 00:16:07.185 "claim_type": "exclusive_write", 00:16:07.185 "zoned": false, 00:16:07.185 "supported_io_types": { 00:16:07.185 "read": true, 00:16:07.185 "write": true, 00:16:07.186 "unmap": true, 00:16:07.186 "flush": true, 00:16:07.186 "reset": true, 00:16:07.186 "nvme_admin": false, 00:16:07.186 "nvme_io": false, 00:16:07.186 "nvme_io_md": false, 00:16:07.186 "write_zeroes": true, 00:16:07.186 "zcopy": true, 00:16:07.186 "get_zone_info": false, 00:16:07.186 "zone_management": false, 00:16:07.186 "zone_append": false, 00:16:07.186 "compare": false, 00:16:07.186 "compare_and_write": false, 00:16:07.186 "abort": true, 00:16:07.186 "seek_hole": false, 00:16:07.186 "seek_data": false, 00:16:07.186 "copy": true, 00:16:07.186 "nvme_iov_md": false 00:16:07.186 }, 00:16:07.186 "memory_domains": [ 00:16:07.186 { 00:16:07.186 
"dma_device_id": "system", 00:16:07.186 "dma_device_type": 1 00:16:07.186 }, 00:16:07.186 { 00:16:07.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.186 "dma_device_type": 2 00:16:07.186 } 00:16:07.186 ], 00:16:07.186 "driver_specific": {} 00:16:07.186 }' 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:07.186 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:07.445 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:07.445 "name": "BaseBdev2", 00:16:07.445 "aliases": [ 00:16:07.445 "d4b82ca6-ad6c-4925-b5e1-4ea19830429c" 00:16:07.445 ], 00:16:07.445 "product_name": "Malloc disk", 00:16:07.445 "block_size": 512, 00:16:07.445 "num_blocks": 65536, 00:16:07.445 "uuid": "d4b82ca6-ad6c-4925-b5e1-4ea19830429c", 00:16:07.445 "assigned_rate_limits": { 00:16:07.445 "rw_ios_per_sec": 0, 00:16:07.445 "rw_mbytes_per_sec": 0, 00:16:07.445 "r_mbytes_per_sec": 0, 00:16:07.445 "w_mbytes_per_sec": 0 00:16:07.445 }, 00:16:07.445 "claimed": true, 00:16:07.445 "claim_type": "exclusive_write", 00:16:07.445 "zoned": false, 00:16:07.445 "supported_io_types": { 00:16:07.445 "read": true, 00:16:07.445 "write": true, 00:16:07.445 "unmap": true, 00:16:07.445 "flush": true, 00:16:07.445 "reset": true, 00:16:07.445 "nvme_admin": false, 00:16:07.445 "nvme_io": false, 00:16:07.445 "nvme_io_md": false, 00:16:07.445 "write_zeroes": true, 00:16:07.445 "zcopy": true, 00:16:07.445 "get_zone_info": false, 00:16:07.445 "zone_management": false, 00:16:07.445 "zone_append": false, 00:16:07.445 "compare": false, 00:16:07.445 "compare_and_write": false, 00:16:07.445 "abort": true, 00:16:07.445 "seek_hole": false, 00:16:07.445 "seek_data": false, 00:16:07.445 "copy": true, 00:16:07.445 "nvme_iov_md": false 00:16:07.445 }, 00:16:07.445 "memory_domains": [ 00:16:07.445 { 00:16:07.445 "dma_device_id": "system", 00:16:07.445 "dma_device_type": 1 00:16:07.445 }, 00:16:07.445 { 00:16:07.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:16:07.445 "dma_device_type": 2 00:16:07.445 } 00:16:07.445 ], 00:16:07.445 "driver_specific": {} 00:16:07.445 }' 00:16:07.445 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.445 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.445 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:07.445 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.445 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.445 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:07.445 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.445 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.445 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.445 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.704 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.704 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.704 15:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:07.704 [2024-07-23 15:10:03.046372] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.704 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.704 15:10:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.963 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.963 "name": "Existed_Raid", 00:16:07.963 "uuid": "3d0efef8-cae0-456d-96d5-3aeb146e332c", 00:16:07.963 "strip_size_kb": 0, 00:16:07.963 "state": "online", 00:16:07.963 "raid_level": "raid1", 00:16:07.963 "superblock": true, 00:16:07.963 "num_base_bdevs": 2, 00:16:07.963 "num_base_bdevs_discovered": 1, 00:16:07.963 "num_base_bdevs_operational": 1, 00:16:07.963 "base_bdevs_list": [ 00:16:07.963 { 00:16:07.963 "name": null, 00:16:07.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.963 "is_configured": false, 00:16:07.963 "data_offset": 2048, 00:16:07.963 "data_size": 63488 00:16:07.963 }, 00:16:07.963 { 00:16:07.963 "name": "BaseBdev2", 00:16:07.963 "uuid": "d4b82ca6-ad6c-4925-b5e1-4ea19830429c", 00:16:07.963 "is_configured": true, 00:16:07.963 "data_offset": 2048, 00:16:07.963 "data_size": 63488 00:16:07.963 } 00:16:07.963 ] 00:16:07.963 }' 00:16:07.963 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.963 15:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.529 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:08.529 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:08.529 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.529 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:08.529 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:08.529 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.529 15:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:08.787 [2024-07-23 15:10:04.015095] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:08.787 [2024-07-23 15:10:04.015211] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:08.787 [2024-07-23 15:10:04.027866] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.787 [2024-07-23 15:10:04.027919] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.787 [2024-07-23 15:10:04.027935] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:16:08.787 15:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:08.787 15:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:08.787 15:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.787 15:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:09.046 15:10:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:09.046 15:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:09.046 15:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:09.046 15:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 90212 00:16:09.046 15:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 90212 ']' 00:16:09.046 15:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 90212 00:16:09.046 15:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:09.046 15:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:09.046 15:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90212 00:16:09.046 killing process with pid 90212 00:16:09.046 15:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:09.046 15:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:09.046 15:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90212' 00:16:09.046 15:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 90212 00:16:09.046 [2024-07-23 15:10:04.325982] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:09.046 15:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 90212 00:16:09.046 [2024-07-23 15:10:04.326062] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:09.305 15:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:09.305 00:16:09.305 real 0m7.923s 00:16:09.305 user 0m13.255s 00:16:09.305 sys 0m1.713s 00:16:09.305 15:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:09.305 ************************************ 00:16:09.305 END TEST raid_state_function_test_sb 00:16:09.305 ************************************ 00:16:09.305 15:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.305 15:10:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:09.305 15:10:04 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:09.305 15:10:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:09.305 15:10:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.305 15:10:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:09.305 ************************************ 00:16:09.305 START TEST raid_superblock_test 00:16:09.305 ************************************ 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=90539 00:16:09.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 90539 /var/tmp/spdk-raid.sock 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 90539 ']' 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.305 15:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.305 [2024-07-23 15:10:04.708741] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
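This marks the start of raid_superblock_test: a fresh bdev_svc instance (pid 90539) is brought up on the same RPC socket, and the test builds its base bdevs as 32 MiB malloc disks wrapped in passthru bdevs (pt1/pt2) before assembling raid_bdev1 with the -s superblock flag. The RPC sequence traced below can be replayed by hand roughly as follows (socket path, sizes, and UUIDs as printed in the trace; a sketch of the calls, not the bdev_raid.sh script itself):

  RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # base bdevs: 32 MiB malloc disks (512-byte blocks) hidden behind passthru bdevs
  $RPC bdev_malloc_create 32 512 -b malloc1
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_malloc_create 32 512 -b malloc2
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # raid1 volume with an on-disk superblock (-s); its state should come up "online"
  $RPC bdev_raid_create -s -r raid1 -b 'pt1 pt2' -n raid_bdev1
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The final jq filter is the same one verify_raid_bdev_state uses in the trace to pull the raid_bdev1 entry out of bdev_raid_get_bdevs output.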
00:16:09.305 [2024-07-23 15:10:04.708954] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90539 ] 00:16:09.565 [2024-07-23 15:10:04.863589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.565 [2024-07-23 15:10:04.908692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.565 [2024-07-23 15:10:04.953239] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.524 15:10:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.524 15:10:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:10.524 15:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:10.524 15:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:10.524 15:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:10.524 15:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:10.524 15:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:10.524 15:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:10.524 15:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:10.524 15:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:10.524 15:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:10.524 malloc1 00:16:10.524 15:10:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:10.783 [2024-07-23 15:10:06.020410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:10.783 [2024-07-23 15:10:06.020694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.783 [2024-07-23 15:10:06.020775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:16:10.783 [2024-07-23 15:10:06.020884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.783 [2024-07-23 15:10:06.023441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.783 [2024-07-23 15:10:06.023596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:10.783 pt1 00:16:10.783 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:10.783 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:10.783 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:10.783 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:10.784 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:10.784 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:16:10.784 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:10.784 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:10.784 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:11.042 malloc2 00:16:11.042 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:11.042 [2024-07-23 15:10:06.470416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:11.042 [2024-07-23 15:10:06.470667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.042 [2024-07-23 15:10:06.470875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:16:11.042 [2024-07-23 15:10:06.470996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.301 [2024-07-23 15:10:06.473679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.301 [2024-07-23 15:10:06.473848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:11.301 pt2 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:11.301 [2024-07-23 15:10:06.638590] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:11.301 [2024-07-23 15:10:06.640817] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:11.301 [2024-07-23 15:10:06.641003] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006c80 00:16:11.301 [2024-07-23 15:10:06.641026] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:11.301 [2024-07-23 15:10:06.641134] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:16:11.301 [2024-07-23 15:10:06.641477] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006c80 00:16:11.301 [2024-07-23 15:10:06.641490] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000006c80 00:16:11.301 [2024-07-23 15:10:06.641628] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.301 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.560 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.560 "name": "raid_bdev1", 00:16:11.560 "uuid": "06089aed-8ccb-4973-b1be-1d199cccb939", 00:16:11.560 "strip_size_kb": 0, 00:16:11.560 "state": "online", 00:16:11.560 "raid_level": "raid1", 00:16:11.560 "superblock": true, 00:16:11.560 "num_base_bdevs": 2, 00:16:11.560 "num_base_bdevs_discovered": 2, 00:16:11.560 "num_base_bdevs_operational": 2, 00:16:11.560 "base_bdevs_list": [ 00:16:11.560 { 00:16:11.560 "name": "pt1", 00:16:11.560 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:11.560 "is_configured": true, 00:16:11.560 "data_offset": 2048, 00:16:11.560 "data_size": 63488 00:16:11.560 }, 00:16:11.560 { 00:16:11.560 "name": "pt2", 00:16:11.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:11.560 "is_configured": true, 00:16:11.560 "data_offset": 2048, 00:16:11.560 "data_size": 63488 00:16:11.560 } 00:16:11.560 ] 00:16:11.560 }' 00:16:11.560 15:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.560 15:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.819 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:11.819 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:11.819 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:11.819 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:11.819 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:11.819 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:11.819 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:11.819 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:11.819 [2024-07-23 15:10:07.246956] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.079 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:12.079 "name": "raid_bdev1", 00:16:12.079 "aliases": [ 00:16:12.079 "06089aed-8ccb-4973-b1be-1d199cccb939" 00:16:12.079 ], 00:16:12.079 "product_name": "Raid Volume", 00:16:12.079 "block_size": 512, 00:16:12.079 "num_blocks": 63488, 00:16:12.079 "uuid": "06089aed-8ccb-4973-b1be-1d199cccb939", 00:16:12.079 "assigned_rate_limits": { 00:16:12.079 "rw_ios_per_sec": 0, 00:16:12.079 "rw_mbytes_per_sec": 0, 00:16:12.079 "r_mbytes_per_sec": 0, 00:16:12.079 "w_mbytes_per_sec": 0 00:16:12.079 }, 
00:16:12.079 "claimed": false, 00:16:12.079 "zoned": false, 00:16:12.079 "supported_io_types": { 00:16:12.079 "read": true, 00:16:12.079 "write": true, 00:16:12.079 "unmap": false, 00:16:12.079 "flush": false, 00:16:12.079 "reset": true, 00:16:12.079 "nvme_admin": false, 00:16:12.079 "nvme_io": false, 00:16:12.079 "nvme_io_md": false, 00:16:12.079 "write_zeroes": true, 00:16:12.079 "zcopy": false, 00:16:12.079 "get_zone_info": false, 00:16:12.079 "zone_management": false, 00:16:12.079 "zone_append": false, 00:16:12.079 "compare": false, 00:16:12.079 "compare_and_write": false, 00:16:12.079 "abort": false, 00:16:12.079 "seek_hole": false, 00:16:12.079 "seek_data": false, 00:16:12.079 "copy": false, 00:16:12.079 "nvme_iov_md": false 00:16:12.079 }, 00:16:12.079 "memory_domains": [ 00:16:12.079 { 00:16:12.079 "dma_device_id": "system", 00:16:12.079 "dma_device_type": 1 00:16:12.079 }, 00:16:12.079 { 00:16:12.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.079 "dma_device_type": 2 00:16:12.079 }, 00:16:12.079 { 00:16:12.079 "dma_device_id": "system", 00:16:12.079 "dma_device_type": 1 00:16:12.079 }, 00:16:12.079 { 00:16:12.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.079 "dma_device_type": 2 00:16:12.079 } 00:16:12.079 ], 00:16:12.079 "driver_specific": { 00:16:12.079 "raid": { 00:16:12.079 "uuid": "06089aed-8ccb-4973-b1be-1d199cccb939", 00:16:12.079 "strip_size_kb": 0, 00:16:12.079 "state": "online", 00:16:12.079 "raid_level": "raid1", 00:16:12.079 "superblock": true, 00:16:12.079 "num_base_bdevs": 2, 00:16:12.079 "num_base_bdevs_discovered": 2, 00:16:12.079 "num_base_bdevs_operational": 2, 00:16:12.079 "base_bdevs_list": [ 00:16:12.079 { 00:16:12.079 "name": "pt1", 00:16:12.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:12.079 "is_configured": true, 00:16:12.079 "data_offset": 2048, 00:16:12.079 "data_size": 63488 00:16:12.079 }, 00:16:12.079 { 00:16:12.079 "name": "pt2", 00:16:12.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:12.079 "is_configured": true, 00:16:12.079 "data_offset": 2048, 00:16:12.079 "data_size": 63488 00:16:12.079 } 00:16:12.079 ] 00:16:12.079 } 00:16:12.079 } 00:16:12.079 }' 00:16:12.079 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:12.079 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:12.079 pt2' 00:16:12.079 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:12.079 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:12.079 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.339 "name": "pt1", 00:16:12.339 "aliases": [ 00:16:12.339 "00000000-0000-0000-0000-000000000001" 00:16:12.339 ], 00:16:12.339 "product_name": "passthru", 00:16:12.339 "block_size": 512, 00:16:12.339 "num_blocks": 65536, 00:16:12.339 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:12.339 "assigned_rate_limits": { 00:16:12.339 "rw_ios_per_sec": 0, 00:16:12.339 "rw_mbytes_per_sec": 0, 00:16:12.339 "r_mbytes_per_sec": 0, 00:16:12.339 "w_mbytes_per_sec": 0 00:16:12.339 }, 00:16:12.339 "claimed": true, 00:16:12.339 "claim_type": "exclusive_write", 00:16:12.339 "zoned": false, 00:16:12.339 
"supported_io_types": { 00:16:12.339 "read": true, 00:16:12.339 "write": true, 00:16:12.339 "unmap": true, 00:16:12.339 "flush": true, 00:16:12.339 "reset": true, 00:16:12.339 "nvme_admin": false, 00:16:12.339 "nvme_io": false, 00:16:12.339 "nvme_io_md": false, 00:16:12.339 "write_zeroes": true, 00:16:12.339 "zcopy": true, 00:16:12.339 "get_zone_info": false, 00:16:12.339 "zone_management": false, 00:16:12.339 "zone_append": false, 00:16:12.339 "compare": false, 00:16:12.339 "compare_and_write": false, 00:16:12.339 "abort": true, 00:16:12.339 "seek_hole": false, 00:16:12.339 "seek_data": false, 00:16:12.339 "copy": true, 00:16:12.339 "nvme_iov_md": false 00:16:12.339 }, 00:16:12.339 "memory_domains": [ 00:16:12.339 { 00:16:12.339 "dma_device_id": "system", 00:16:12.339 "dma_device_type": 1 00:16:12.339 }, 00:16:12.339 { 00:16:12.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.339 "dma_device_type": 2 00:16:12.339 } 00:16:12.339 ], 00:16:12.339 "driver_specific": { 00:16:12.339 "passthru": { 00:16:12.339 "name": "pt1", 00:16:12.339 "base_bdev_name": "malloc1" 00:16:12.339 } 00:16:12.339 } 00:16:12.339 }' 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:12.339 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:12.598 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.598 "name": "pt2", 00:16:12.598 "aliases": [ 00:16:12.598 "00000000-0000-0000-0000-000000000002" 00:16:12.598 ], 00:16:12.598 "product_name": "passthru", 00:16:12.598 "block_size": 512, 00:16:12.598 "num_blocks": 65536, 00:16:12.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:12.598 "assigned_rate_limits": { 00:16:12.598 "rw_ios_per_sec": 0, 00:16:12.598 "rw_mbytes_per_sec": 0, 00:16:12.598 "r_mbytes_per_sec": 0, 00:16:12.598 "w_mbytes_per_sec": 0 00:16:12.598 }, 00:16:12.598 "claimed": true, 00:16:12.598 "claim_type": "exclusive_write", 00:16:12.598 "zoned": false, 00:16:12.598 "supported_io_types": { 00:16:12.598 "read": true, 00:16:12.598 "write": true, 00:16:12.598 "unmap": true, 00:16:12.598 "flush": true, 00:16:12.598 
"reset": true, 00:16:12.598 "nvme_admin": false, 00:16:12.598 "nvme_io": false, 00:16:12.598 "nvme_io_md": false, 00:16:12.598 "write_zeroes": true, 00:16:12.598 "zcopy": true, 00:16:12.598 "get_zone_info": false, 00:16:12.598 "zone_management": false, 00:16:12.598 "zone_append": false, 00:16:12.598 "compare": false, 00:16:12.598 "compare_and_write": false, 00:16:12.598 "abort": true, 00:16:12.598 "seek_hole": false, 00:16:12.598 "seek_data": false, 00:16:12.598 "copy": true, 00:16:12.598 "nvme_iov_md": false 00:16:12.598 }, 00:16:12.598 "memory_domains": [ 00:16:12.598 { 00:16:12.598 "dma_device_id": "system", 00:16:12.598 "dma_device_type": 1 00:16:12.598 }, 00:16:12.598 { 00:16:12.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.599 "dma_device_type": 2 00:16:12.599 } 00:16:12.599 ], 00:16:12.599 "driver_specific": { 00:16:12.599 "passthru": { 00:16:12.599 "name": "pt2", 00:16:12.599 "base_bdev_name": "malloc2" 00:16:12.599 } 00:16:12.599 } 00:16:12.599 }' 00:16:12.599 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.599 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.599 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.599 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.599 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.599 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.599 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.599 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.599 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.599 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.599 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.599 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.599 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:12.599 15:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:12.858 [2024-07-23 15:10:08.143106] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.858 15:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=06089aed-8ccb-4973-b1be-1d199cccb939 00:16:12.858 15:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 06089aed-8ccb-4973-b1be-1d199cccb939 ']' 00:16:12.858 15:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:13.117 [2024-07-23 15:10:08.386865] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:13.117 [2024-07-23 15:10:08.386923] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.117 [2024-07-23 15:10:08.387020] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.117 [2024-07-23 15:10:08.387090] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:13.117 [2024-07-23 15:10:08.387109] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006c80 name raid_bdev1, state offline 00:16:13.117 15:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.117 15:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:13.376 15:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:13.376 15:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:13.376 15:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:13.376 15:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:13.635 15:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:13.635 15:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:13.635 15:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:13.635 15:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:13.894 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:13.894 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:13.894 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:13.894 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:13.894 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:13.894 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.894 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:13.894 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.894 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:13.894 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.894 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:13.894 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:13.894 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:14.153 [2024-07-23 15:10:09.343123] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:14.153 [2024-07-23 15:10:09.345608] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:14.153 [2024-07-23 15:10:09.345829] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:14.153 [2024-07-23 15:10:09.346013] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:14.153 [2024-07-23 15:10:09.346142] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.153 [2024-07-23 15:10:09.346176] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name raid_bdev1, state configuring 00:16:14.153 request: 00:16:14.153 { 00:16:14.153 "name": "raid_bdev1", 00:16:14.153 "raid_level": "raid1", 00:16:14.153 "base_bdevs": [ 00:16:14.153 "malloc1", 00:16:14.153 "malloc2" 00:16:14.153 ], 00:16:14.153 "superblock": false, 00:16:14.153 "method": "bdev_raid_create", 00:16:14.153 "req_id": 1 00:16:14.153 } 00:16:14.153 Got JSON-RPC error response 00:16:14.153 response: 00:16:14.153 { 00:16:14.153 "code": -17, 00:16:14.153 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:14.153 } 00:16:14.153 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:14.153 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:14.153 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:14.153 15:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:14.153 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:14.154 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:14.412 [2024-07-23 15:10:09.759178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:14.412 [2024-07-23 15:10:09.759388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.412 [2024-07-23 15:10:09.759425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:16:14.412 [2024-07-23 15:10:09.759438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.412 [2024-07-23 15:10:09.762010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.412 [2024-07-23 15:10:09.762050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:14.412 [2024-07-23 15:10:09.762134] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:14.412 [2024-07-23 15:10:09.762192] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:14.412 pt1 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 2 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.412 15:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.670 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.670 "name": "raid_bdev1", 00:16:14.670 "uuid": "06089aed-8ccb-4973-b1be-1d199cccb939", 00:16:14.670 "strip_size_kb": 0, 00:16:14.670 "state": "configuring", 00:16:14.670 "raid_level": "raid1", 00:16:14.670 "superblock": true, 00:16:14.670 "num_base_bdevs": 2, 00:16:14.670 "num_base_bdevs_discovered": 1, 00:16:14.670 "num_base_bdevs_operational": 2, 00:16:14.670 "base_bdevs_list": [ 00:16:14.670 { 00:16:14.671 "name": "pt1", 00:16:14.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:14.671 "is_configured": true, 00:16:14.671 "data_offset": 2048, 00:16:14.671 "data_size": 63488 00:16:14.671 }, 00:16:14.671 { 00:16:14.671 "name": null, 00:16:14.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:14.671 "is_configured": false, 00:16:14.671 "data_offset": 2048, 00:16:14.671 "data_size": 63488 00:16:14.671 } 00:16:14.671 ] 00:16:14.671 }' 00:16:14.671 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.671 15:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.929 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:16:14.929 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:14.929 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:14.929 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.187 [2024-07-23 15:10:10.519337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.187 [2024-07-23 15:10:10.519416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.187 [2024-07-23 15:10:10.519446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:16:15.187 [2024-07-23 15:10:10.519459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.187 [2024-07-23 15:10:10.520104] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.187 [2024-07-23 15:10:10.520221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.187 [2024-07-23 15:10:10.520405] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:15.187 [2024-07-23 15:10:10.520544] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:15.187 [2024-07-23 15:10:10.520692] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:16:15.187 [2024-07-23 15:10:10.520703] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:15.187 [2024-07-23 15:10:10.520812] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:16:15.187 [2024-07-23 15:10:10.521101] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:16:15.187 [2024-07-23 15:10:10.521116] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:16:15.187 [2024-07-23 15:10:10.521210] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.187 pt2 00:16:15.187 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:15.187 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:15.187 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:15.187 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:15.187 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:15.187 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:15.187 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:15.187 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:15.187 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:15.187 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:15.187 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:15.187 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:15.187 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.187 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.446 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:15.446 "name": "raid_bdev1", 00:16:15.446 "uuid": "06089aed-8ccb-4973-b1be-1d199cccb939", 00:16:15.446 "strip_size_kb": 0, 00:16:15.446 "state": "online", 00:16:15.446 "raid_level": "raid1", 00:16:15.446 "superblock": true, 00:16:15.446 "num_base_bdevs": 2, 00:16:15.446 "num_base_bdevs_discovered": 2, 00:16:15.446 "num_base_bdevs_operational": 2, 00:16:15.446 "base_bdevs_list": [ 00:16:15.446 { 00:16:15.446 "name": "pt1", 00:16:15.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:15.446 "is_configured": true, 00:16:15.446 "data_offset": 2048, 00:16:15.446 "data_size": 63488 00:16:15.446 }, 00:16:15.446 { 
00:16:15.446 "name": "pt2", 00:16:15.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.446 "is_configured": true, 00:16:15.446 "data_offset": 2048, 00:16:15.446 "data_size": 63488 00:16:15.446 } 00:16:15.446 ] 00:16:15.446 }' 00:16:15.446 15:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:15.446 15:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.014 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:16.014 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:16.014 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:16.014 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:16.014 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:16.014 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:16.014 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:16.014 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:16.014 [2024-07-23 15:10:11.295737] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.014 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:16.014 "name": "raid_bdev1", 00:16:16.014 "aliases": [ 00:16:16.014 "06089aed-8ccb-4973-b1be-1d199cccb939" 00:16:16.014 ], 00:16:16.014 "product_name": "Raid Volume", 00:16:16.014 "block_size": 512, 00:16:16.014 "num_blocks": 63488, 00:16:16.014 "uuid": "06089aed-8ccb-4973-b1be-1d199cccb939", 00:16:16.014 "assigned_rate_limits": { 00:16:16.014 "rw_ios_per_sec": 0, 00:16:16.014 "rw_mbytes_per_sec": 0, 00:16:16.014 "r_mbytes_per_sec": 0, 00:16:16.014 "w_mbytes_per_sec": 0 00:16:16.014 }, 00:16:16.014 "claimed": false, 00:16:16.014 "zoned": false, 00:16:16.014 "supported_io_types": { 00:16:16.014 "read": true, 00:16:16.014 "write": true, 00:16:16.014 "unmap": false, 00:16:16.014 "flush": false, 00:16:16.014 "reset": true, 00:16:16.014 "nvme_admin": false, 00:16:16.014 "nvme_io": false, 00:16:16.014 "nvme_io_md": false, 00:16:16.014 "write_zeroes": true, 00:16:16.014 "zcopy": false, 00:16:16.014 "get_zone_info": false, 00:16:16.014 "zone_management": false, 00:16:16.014 "zone_append": false, 00:16:16.014 "compare": false, 00:16:16.014 "compare_and_write": false, 00:16:16.014 "abort": false, 00:16:16.014 "seek_hole": false, 00:16:16.014 "seek_data": false, 00:16:16.014 "copy": false, 00:16:16.014 "nvme_iov_md": false 00:16:16.014 }, 00:16:16.014 "memory_domains": [ 00:16:16.014 { 00:16:16.014 "dma_device_id": "system", 00:16:16.014 "dma_device_type": 1 00:16:16.014 }, 00:16:16.014 { 00:16:16.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.014 "dma_device_type": 2 00:16:16.014 }, 00:16:16.014 { 00:16:16.014 "dma_device_id": "system", 00:16:16.014 "dma_device_type": 1 00:16:16.014 }, 00:16:16.014 { 00:16:16.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.014 "dma_device_type": 2 00:16:16.014 } 00:16:16.014 ], 00:16:16.014 "driver_specific": { 00:16:16.014 "raid": { 00:16:16.014 "uuid": "06089aed-8ccb-4973-b1be-1d199cccb939", 00:16:16.014 "strip_size_kb": 0, 00:16:16.014 "state": "online", 00:16:16.014 "raid_level": "raid1", 
00:16:16.014 "superblock": true, 00:16:16.014 "num_base_bdevs": 2, 00:16:16.014 "num_base_bdevs_discovered": 2, 00:16:16.014 "num_base_bdevs_operational": 2, 00:16:16.014 "base_bdevs_list": [ 00:16:16.014 { 00:16:16.014 "name": "pt1", 00:16:16.014 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.014 "is_configured": true, 00:16:16.014 "data_offset": 2048, 00:16:16.014 "data_size": 63488 00:16:16.014 }, 00:16:16.014 { 00:16:16.014 "name": "pt2", 00:16:16.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.014 "is_configured": true, 00:16:16.014 "data_offset": 2048, 00:16:16.014 "data_size": 63488 00:16:16.014 } 00:16:16.014 ] 00:16:16.014 } 00:16:16.014 } 00:16:16.014 }' 00:16:16.014 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:16.014 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:16.014 pt2' 00:16:16.014 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:16.014 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:16.014 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:16.273 "name": "pt1", 00:16:16.273 "aliases": [ 00:16:16.273 "00000000-0000-0000-0000-000000000001" 00:16:16.273 ], 00:16:16.273 "product_name": "passthru", 00:16:16.273 "block_size": 512, 00:16:16.273 "num_blocks": 65536, 00:16:16.273 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.273 "assigned_rate_limits": { 00:16:16.273 "rw_ios_per_sec": 0, 00:16:16.273 "rw_mbytes_per_sec": 0, 00:16:16.273 "r_mbytes_per_sec": 0, 00:16:16.273 "w_mbytes_per_sec": 0 00:16:16.273 }, 00:16:16.273 "claimed": true, 00:16:16.273 "claim_type": "exclusive_write", 00:16:16.273 "zoned": false, 00:16:16.273 "supported_io_types": { 00:16:16.273 "read": true, 00:16:16.273 "write": true, 00:16:16.273 "unmap": true, 00:16:16.273 "flush": true, 00:16:16.273 "reset": true, 00:16:16.273 "nvme_admin": false, 00:16:16.273 "nvme_io": false, 00:16:16.273 "nvme_io_md": false, 00:16:16.273 "write_zeroes": true, 00:16:16.273 "zcopy": true, 00:16:16.273 "get_zone_info": false, 00:16:16.273 "zone_management": false, 00:16:16.273 "zone_append": false, 00:16:16.273 "compare": false, 00:16:16.273 "compare_and_write": false, 00:16:16.273 "abort": true, 00:16:16.273 "seek_hole": false, 00:16:16.273 "seek_data": false, 00:16:16.273 "copy": true, 00:16:16.273 "nvme_iov_md": false 00:16:16.273 }, 00:16:16.273 "memory_domains": [ 00:16:16.273 { 00:16:16.273 "dma_device_id": "system", 00:16:16.273 "dma_device_type": 1 00:16:16.273 }, 00:16:16.273 { 00:16:16.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.273 "dma_device_type": 2 00:16:16.273 } 00:16:16.273 ], 00:16:16.273 "driver_specific": { 00:16:16.273 "passthru": { 00:16:16.273 "name": "pt1", 00:16:16.273 "base_bdev_name": "malloc1" 00:16:16.273 } 00:16:16.273 } 00:16:16.273 }' 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:16.273 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:16.532 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:16.532 "name": "pt2", 00:16:16.532 "aliases": [ 00:16:16.532 "00000000-0000-0000-0000-000000000002" 00:16:16.532 ], 00:16:16.532 "product_name": "passthru", 00:16:16.532 "block_size": 512, 00:16:16.532 "num_blocks": 65536, 00:16:16.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.532 "assigned_rate_limits": { 00:16:16.532 "rw_ios_per_sec": 0, 00:16:16.532 "rw_mbytes_per_sec": 0, 00:16:16.532 "r_mbytes_per_sec": 0, 00:16:16.532 "w_mbytes_per_sec": 0 00:16:16.532 }, 00:16:16.532 "claimed": true, 00:16:16.532 "claim_type": "exclusive_write", 00:16:16.532 "zoned": false, 00:16:16.532 "supported_io_types": { 00:16:16.532 "read": true, 00:16:16.532 "write": true, 00:16:16.532 "unmap": true, 00:16:16.532 "flush": true, 00:16:16.532 "reset": true, 00:16:16.532 "nvme_admin": false, 00:16:16.532 "nvme_io": false, 00:16:16.532 "nvme_io_md": false, 00:16:16.532 "write_zeroes": true, 00:16:16.532 "zcopy": true, 00:16:16.532 "get_zone_info": false, 00:16:16.532 "zone_management": false, 00:16:16.532 "zone_append": false, 00:16:16.532 "compare": false, 00:16:16.532 "compare_and_write": false, 00:16:16.532 "abort": true, 00:16:16.532 "seek_hole": false, 00:16:16.532 "seek_data": false, 00:16:16.532 "copy": true, 00:16:16.532 "nvme_iov_md": false 00:16:16.532 }, 00:16:16.532 "memory_domains": [ 00:16:16.532 { 00:16:16.532 "dma_device_id": "system", 00:16:16.532 "dma_device_type": 1 00:16:16.532 }, 00:16:16.532 { 00:16:16.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.532 "dma_device_type": 2 00:16:16.532 } 00:16:16.532 ], 00:16:16.532 "driver_specific": { 00:16:16.532 "passthru": { 00:16:16.532 "name": "pt2", 00:16:16.532 "base_bdev_name": "malloc2" 00:16:16.532 } 00:16:16.532 } 00:16:16.532 }' 00:16:16.532 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.532 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.532 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:16.532 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:16.533 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:16.533 
15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:16.533 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:16.533 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:16.533 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:16.533 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.533 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.533 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:16.533 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:16.533 15:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:16.791 [2024-07-23 15:10:12.191961] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.791 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 06089aed-8ccb-4973-b1be-1d199cccb939 '!=' 06089aed-8ccb-4973-b1be-1d199cccb939 ']' 00:16:16.791 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:16:16.791 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:16.791 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:16.791 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:17.050 [2024-07-23 15:10:12.375798] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:17.050 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:17.050 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:17.050 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:17.050 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:17.050 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:17.050 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:17.050 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:17.050 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:17.050 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:17.050 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:17.050 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.050 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.317 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.317 "name": "raid_bdev1", 00:16:17.317 "uuid": "06089aed-8ccb-4973-b1be-1d199cccb939", 00:16:17.317 "strip_size_kb": 0, 00:16:17.317 "state": "online", 00:16:17.317 "raid_level": "raid1", 00:16:17.317 
"superblock": true, 00:16:17.317 "num_base_bdevs": 2, 00:16:17.317 "num_base_bdevs_discovered": 1, 00:16:17.317 "num_base_bdevs_operational": 1, 00:16:17.317 "base_bdevs_list": [ 00:16:17.317 { 00:16:17.317 "name": null, 00:16:17.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.317 "is_configured": false, 00:16:17.317 "data_offset": 2048, 00:16:17.317 "data_size": 63488 00:16:17.317 }, 00:16:17.317 { 00:16:17.317 "name": "pt2", 00:16:17.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.317 "is_configured": true, 00:16:17.317 "data_offset": 2048, 00:16:17.317 "data_size": 63488 00:16:17.317 } 00:16:17.317 ] 00:16:17.317 }' 00:16:17.317 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.317 15:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.605 15:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:17.864 [2024-07-23 15:10:13.099945] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.864 [2024-07-23 15:10:13.099988] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.864 [2024-07-23 15:10:13.100068] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.864 [2024-07-23 15:10:13.100131] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.864 [2024-07-23 15:10:13.100143] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:16:17.864 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:16:17.864 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.121 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:16:18.121 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:16:18.121 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:16:18.121 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:18.121 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:18.121 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:18.121 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:18.121 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:16:18.121 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:18.121 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:16:18.121 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:18.380 [2024-07-23 15:10:13.704079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.380 [2024-07-23 15:10:13.704163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.380 [2024-07-23 15:10:13.704192] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:16:18.380 [2024-07-23 15:10:13.704205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.380 [2024-07-23 15:10:13.706727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.380 [2024-07-23 15:10:13.706773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.380 [2024-07-23 15:10:13.706872] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:18.380 [2024-07-23 15:10:13.706908] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.380 [2024-07-23 15:10:13.707016] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:16:18.380 [2024-07-23 15:10:13.707026] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:18.380 [2024-07-23 15:10:13.707107] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:16:18.380 [2024-07-23 15:10:13.707388] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:16:18.380 [2024-07-23 15:10:13.707412] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008a80 00:16:18.380 [2024-07-23 15:10:13.707516] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.380 pt2 00:16:18.380 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:18.380 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:18.380 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:18.380 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:18.380 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:18.380 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:18.380 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.380 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.380 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.380 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.380 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.380 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.638 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.638 "name": "raid_bdev1", 00:16:18.638 "uuid": "06089aed-8ccb-4973-b1be-1d199cccb939", 00:16:18.638 "strip_size_kb": 0, 00:16:18.638 "state": "online", 00:16:18.638 "raid_level": "raid1", 00:16:18.638 "superblock": true, 00:16:18.638 "num_base_bdevs": 2, 00:16:18.638 "num_base_bdevs_discovered": 1, 00:16:18.638 "num_base_bdevs_operational": 1, 00:16:18.638 "base_bdevs_list": [ 00:16:18.638 { 00:16:18.638 "name": null, 00:16:18.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.638 "is_configured": false, 00:16:18.638 "data_offset": 
2048, 00:16:18.638 "data_size": 63488 00:16:18.638 }, 00:16:18.638 { 00:16:18.638 "name": "pt2", 00:16:18.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.638 "is_configured": true, 00:16:18.638 "data_offset": 2048, 00:16:18.638 "data_size": 63488 00:16:18.638 } 00:16:18.638 ] 00:16:18.638 }' 00:16:18.638 15:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.638 15:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.897 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:19.156 [2024-07-23 15:10:14.416244] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.156 [2024-07-23 15:10:14.416289] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.156 [2024-07-23 15:10:14.416370] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.156 [2024-07-23 15:10:14.416424] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.156 [2024-07-23 15:10:14.416439] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state offline 00:16:19.156 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.156 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:16:19.415 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:16:19.415 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:16:19.415 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:16:19.415 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:19.674 [2024-07-23 15:10:14.932319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:19.674 [2024-07-23 15:10:14.932402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.674 [2024-07-23 15:10:14.932423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:16:19.674 [2024-07-23 15:10:14.932438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.674 [2024-07-23 15:10:14.934902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.674 [2024-07-23 15:10:14.934953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:19.674 [2024-07-23 15:10:14.935030] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:19.674 [2024-07-23 15:10:14.935077] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:19.674 [2024-07-23 15:10:14.935193] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:19.674 [2024-07-23 15:10:14.935209] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.674 [2024-07-23 15:10:14.935226] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, 
state configuring 00:16:19.674 [2024-07-23 15:10:14.935271] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:19.674 [2024-07-23 15:10:14.935348] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:16:19.674 [2024-07-23 15:10:14.935361] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:19.674 [2024-07-23 15:10:14.935438] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:16:19.674 [2024-07-23 15:10:14.935716] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:16:19.674 [2024-07-23 15:10:14.935738] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:16:19.674 [2024-07-23 15:10:14.935857] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.674 pt1 00:16:19.674 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:16:19.674 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:19.674 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:19.674 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:19.674 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:19.674 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:19.674 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:19.674 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.674 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.674 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.674 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.674 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.674 15:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.933 15:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:19.933 "name": "raid_bdev1", 00:16:19.933 "uuid": "06089aed-8ccb-4973-b1be-1d199cccb939", 00:16:19.933 "strip_size_kb": 0, 00:16:19.933 "state": "online", 00:16:19.933 "raid_level": "raid1", 00:16:19.933 "superblock": true, 00:16:19.933 "num_base_bdevs": 2, 00:16:19.933 "num_base_bdevs_discovered": 1, 00:16:19.933 "num_base_bdevs_operational": 1, 00:16:19.933 "base_bdevs_list": [ 00:16:19.933 { 00:16:19.933 "name": null, 00:16:19.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.933 "is_configured": false, 00:16:19.933 "data_offset": 2048, 00:16:19.933 "data_size": 63488 00:16:19.933 }, 00:16:19.933 { 00:16:19.933 "name": "pt2", 00:16:19.933 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.933 "is_configured": true, 00:16:19.933 "data_offset": 2048, 00:16:19.933 "data_size": 63488 00:16:19.933 } 00:16:19.933 ] 00:16:19.933 }' 00:16:19.933 15:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:19.933 15:10:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.192 15:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:20.192 15:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:20.451 15:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:16:20.451 15:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:20.451 15:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:16:20.710 [2024-07-23 15:10:15.912728] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.710 15:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 06089aed-8ccb-4973-b1be-1d199cccb939 '!=' 06089aed-8ccb-4973-b1be-1d199cccb939 ']' 00:16:20.710 15:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 90539 00:16:20.710 15:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 90539 ']' 00:16:20.710 15:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 90539 00:16:20.710 15:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:20.710 15:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:20.710 15:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90539 00:16:20.710 15:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:20.710 15:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:20.710 15:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90539' 00:16:20.710 killing process with pid 90539 00:16:20.710 15:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 90539 00:16:20.710 [2024-07-23 15:10:15.967430] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:20.710 15:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 90539 00:16:20.710 [2024-07-23 15:10:15.967532] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.710 [2024-07-23 15:10:15.967602] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.710 [2024-07-23 15:10:15.967615] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:16:20.710 [2024-07-23 15:10:15.992389] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:20.970 15:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:20.970 00:16:20.970 real 0m11.598s 00:16:20.970 user 0m19.890s 00:16:20.970 sys 0m2.510s 00:16:20.970 15:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:20.970 ************************************ 00:16:20.970 END TEST raid_superblock_test 00:16:20.970 ************************************ 00:16:20.970 15:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.970 15:10:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:20.970 15:10:16 
bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:16:20.970 15:10:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:20.970 15:10:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.970 15:10:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.970 ************************************ 00:16:20.970 START TEST raid_read_error_test 00:16:20.970 ************************************ 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.sss9LvIfhx 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=90998 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 90998 /var/tmp/spdk-raid.sock 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 90998 ']' 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.970 15:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.970 [2024-07-23 15:10:16.378779] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:16:20.971 [2024-07-23 15:10:16.379072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90998 ] 00:16:21.230 [2024-07-23 15:10:16.534437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.230 [2024-07-23 15:10:16.588326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.230 [2024-07-23 15:10:16.642268] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.798 15:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.798 15:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:21.798 15:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:21.798 15:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:22.057 BaseBdev1_malloc 00:16:22.057 15:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:22.317 true 00:16:22.317 15:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:22.576 [2024-07-23 15:10:17.780911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:22.576 [2024-07-23 15:10:17.780991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.576 [2024-07-23 15:10:17.781037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:16:22.576 [2024-07-23 15:10:17.781056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.576 [2024-07-23 15:10:17.783980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.576 [2024-07-23 15:10:17.784033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:22.576 BaseBdev1 00:16:22.576 15:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:22.576 15:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:22.576 BaseBdev2_malloc 00:16:22.576 15:10:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:22.835 true 00:16:22.835 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:23.094 [2024-07-23 15:10:18.318591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:23.094 [2024-07-23 15:10:18.318676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.094 [2024-07-23 15:10:18.318722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:16:23.094 [2024-07-23 15:10:18.318739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.094 [2024-07-23 15:10:18.321486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.094 [2024-07-23 15:10:18.321533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:23.094 BaseBdev2 00:16:23.094 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:23.094 [2024-07-23 15:10:18.522674] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.353 [2024-07-23 15:10:18.525410] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:23.353 [2024-07-23 15:10:18.525853] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:16:23.353 [2024-07-23 15:10:18.525977] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:23.353 [2024-07-23 15:10:18.526287] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:16:23.353 [2024-07-23 15:10:18.526875] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:16:23.353 [2024-07-23 15:10:18.527008] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007280 00:16:23.353 [2024-07-23 15:10:18.527407] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.353 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:23.353 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:23.353 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:23.353 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:23.353 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:23.353 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:23.353 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.354 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.354 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.354 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.354 15:10:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.354 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.354 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.354 "name": "raid_bdev1", 00:16:23.354 "uuid": "78457a8f-ece9-4fb8-acc7-8e8e3f960920", 00:16:23.354 "strip_size_kb": 0, 00:16:23.354 "state": "online", 00:16:23.354 "raid_level": "raid1", 00:16:23.354 "superblock": true, 00:16:23.354 "num_base_bdevs": 2, 00:16:23.354 "num_base_bdevs_discovered": 2, 00:16:23.354 "num_base_bdevs_operational": 2, 00:16:23.354 "base_bdevs_list": [ 00:16:23.354 { 00:16:23.354 "name": "BaseBdev1", 00:16:23.354 "uuid": "e82aff7d-7b9f-58bc-9b03-bea8a7019a58", 00:16:23.354 "is_configured": true, 00:16:23.354 "data_offset": 2048, 00:16:23.354 "data_size": 63488 00:16:23.354 }, 00:16:23.354 { 00:16:23.354 "name": "BaseBdev2", 00:16:23.354 "uuid": "08e82fba-1dbe-5e0d-a858-3f599078885a", 00:16:23.354 "is_configured": true, 00:16:23.354 "data_offset": 2048, 00:16:23.354 "data_size": 63488 00:16:23.354 } 00:16:23.354 ] 00:16:23.354 }' 00:16:23.354 15:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.354 15:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.612 15:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:23.612 15:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:23.871 [2024-07-23 15:10:19.099959] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.810 15:10:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.810 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.379 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.379 "name": "raid_bdev1", 00:16:25.379 "uuid": "78457a8f-ece9-4fb8-acc7-8e8e3f960920", 00:16:25.379 "strip_size_kb": 0, 00:16:25.379 "state": "online", 00:16:25.379 "raid_level": "raid1", 00:16:25.379 "superblock": true, 00:16:25.379 "num_base_bdevs": 2, 00:16:25.379 "num_base_bdevs_discovered": 2, 00:16:25.379 "num_base_bdevs_operational": 2, 00:16:25.379 "base_bdevs_list": [ 00:16:25.379 { 00:16:25.379 "name": "BaseBdev1", 00:16:25.379 "uuid": "e82aff7d-7b9f-58bc-9b03-bea8a7019a58", 00:16:25.379 "is_configured": true, 00:16:25.379 "data_offset": 2048, 00:16:25.379 "data_size": 63488 00:16:25.379 }, 00:16:25.379 { 00:16:25.379 "name": "BaseBdev2", 00:16:25.379 "uuid": "08e82fba-1dbe-5e0d-a858-3f599078885a", 00:16:25.379 "is_configured": true, 00:16:25.379 "data_offset": 2048, 00:16:25.379 "data_size": 63488 00:16:25.379 } 00:16:25.379 ] 00:16:25.379 }' 00:16:25.379 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.379 15:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.379 15:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:25.639 [2024-07-23 15:10:21.022486] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.639 [2024-07-23 15:10:21.022692] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.639 [2024-07-23 15:10:21.025277] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.639 [2024-07-23 15:10:21.025427] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.639 [2024-07-23 15:10:21.025551] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.639 [2024-07-23 15:10:21.025698] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name raid_bdev1, state offline 00:16:25.639 0 00:16:25.639 15:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 90998 00:16:25.639 15:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 90998 ']' 00:16:25.639 15:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 90998 00:16:25.639 15:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:16:25.639 15:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:25.639 15:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90998 00:16:25.897 15:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:25.897 15:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:25.897 15:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90998' 00:16:25.897 killing 
process with pid 90998 00:16:25.897 15:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 90998 00:16:25.897 [2024-07-23 15:10:21.083283] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.897 15:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 90998 00:16:25.897 [2024-07-23 15:10:21.099316] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:26.156 15:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.sss9LvIfhx 00:16:26.156 15:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:26.156 15:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:26.156 15:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:26.156 15:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:26.156 15:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:26.156 15:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:26.157 15:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:26.157 00:16:26.157 real 0m5.053s 00:16:26.157 user 0m7.403s 00:16:26.157 sys 0m0.896s 00:16:26.157 15:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:26.157 15:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.157 ************************************ 00:16:26.157 END TEST raid_read_error_test 00:16:26.157 ************************************ 00:16:26.157 15:10:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:26.157 15:10:21 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:16:26.157 15:10:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:26.157 15:10:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.157 15:10:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:26.157 ************************************ 00:16:26.157 START TEST raid_write_error_test 00:16:26.157 ************************************ 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 
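Condensed, the raid_read_error_test flow traced above reduces to a handful of RPCs. A minimal sketch, assuming the bdevperf app has already been started with -z and is listening on /var/tmp/spdk-raid.sock, with paths and arguments taken from the trace above:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for b in BaseBdev1 BaseBdev2; do
    $RPC bdev_malloc_create 32 512 -b "${b}_malloc"        # 32 MiB backing bdev, 512-byte blocks
    $RPC bdev_error_create "${b}_malloc"                   # error-injectable wrapper named EE_${b}_malloc
    $RPC bdev_passthru_create -b "EE_${b}_malloc" -p "$b"  # passthru exposes the base bdev under its final name
  done
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s   # -s: create with superblock
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests

Because raid1 is the level under test, the injected read errors on BaseBdev1 are not expected to surface: both base bdevs stay discovered and operational in bdev_raid_get_bdevs, and the failure rate grepped out of the bdevperf log above comes back as 0.00.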
00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.L2u1VvpVqk 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=91158 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 91158 /var/tmp/spdk-raid.sock 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 91158 ']' 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.157 15:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.157 [2024-07-23 15:10:21.497335] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
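The file created with mktemp above (bdevperf_log, here /raidtest/tmp.L2u1VvpVqk) is what the closing assertion of each of these error tests parses. Stripped of the xtrace plumbing, the check amounts to:

  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
  [[ "$fail_per_s" = "0.00" ]]   # raid1 provides redundancy, so no I/O against raid_bdev1 should fail

The two variants differ only in the membership they expect afterwards: the read case above keeps both base bdevs operational, while the write failure injected below is expected to remove BaseBdev1 from the array (num_base_bdevs_operational drops to 1) while raid_bdev1 itself stays online and the failure rate still reads 0.00.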
00:16:26.157 [2024-07-23 15:10:21.497570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91158 ] 00:16:26.416 [2024-07-23 15:10:21.650035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.416 [2024-07-23 15:10:21.694228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.416 [2024-07-23 15:10:21.740106] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.983 15:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.983 15:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:26.983 15:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:26.983 15:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:27.241 BaseBdev1_malloc 00:16:27.241 15:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:27.241 true 00:16:27.500 15:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:27.500 [2024-07-23 15:10:22.824056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:27.500 [2024-07-23 15:10:22.824136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.500 [2024-07-23 15:10:22.824169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:16:27.500 [2024-07-23 15:10:22.824182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.500 [2024-07-23 15:10:22.826768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.500 [2024-07-23 15:10:22.826821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:27.500 BaseBdev1 00:16:27.500 15:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:27.500 15:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:27.758 BaseBdev2_malloc 00:16:27.758 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:28.017 true 00:16:28.017 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:28.017 [2024-07-23 15:10:23.417765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:28.017 [2024-07-23 15:10:23.417851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.017 [2024-07-23 15:10:23.417884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:16:28.017 [2024-07-23 
15:10:23.417897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.017 [2024-07-23 15:10:23.420585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.017 [2024-07-23 15:10:23.420628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:28.017 BaseBdev2 00:16:28.017 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:28.276 [2024-07-23 15:10:23.597838] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.276 [2024-07-23 15:10:23.600294] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:28.276 [2024-07-23 15:10:23.600512] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007280 00:16:28.276 [2024-07-23 15:10:23.600527] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:28.276 [2024-07-23 15:10:23.600655] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:16:28.276 [2024-07-23 15:10:23.601037] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007280 00:16:28.276 [2024-07-23 15:10:23.601063] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007280 00:16:28.276 [2024-07-23 15:10:23.601207] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.276 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:28.276 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:28.276 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:28.276 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:28.276 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:28.276 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:28.276 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:28.276 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:28.276 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:28.276 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:28.276 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.276 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.535 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.535 "name": "raid_bdev1", 00:16:28.535 "uuid": "c6fe1c1e-a5e0-4dae-b7a6-058c9dd45862", 00:16:28.535 "strip_size_kb": 0, 00:16:28.535 "state": "online", 00:16:28.535 "raid_level": "raid1", 00:16:28.535 "superblock": true, 00:16:28.535 "num_base_bdevs": 2, 00:16:28.535 "num_base_bdevs_discovered": 2, 00:16:28.535 "num_base_bdevs_operational": 2, 00:16:28.535 "base_bdevs_list": [ 00:16:28.535 { 00:16:28.535 "name": 
"BaseBdev1", 00:16:28.535 "uuid": "56278f54-5c4d-55d2-a166-88dd90fdef3b", 00:16:28.535 "is_configured": true, 00:16:28.535 "data_offset": 2048, 00:16:28.535 "data_size": 63488 00:16:28.535 }, 00:16:28.535 { 00:16:28.535 "name": "BaseBdev2", 00:16:28.535 "uuid": "2dc1b930-d710-5c0c-b4ee-a2eec8f79da8", 00:16:28.535 "is_configured": true, 00:16:28.535 "data_offset": 2048, 00:16:28.535 "data_size": 63488 00:16:28.535 } 00:16:28.535 ] 00:16:28.535 }' 00:16:28.535 15:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.535 15:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.795 15:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:28.795 15:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:28.795 [2024-07-23 15:10:24.206330] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:16:29.730 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:29.989 [2024-07-23 15:10:25.354766] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:29.989 [2024-07-23 15:10:25.354860] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:29.989 [2024-07-23 15:10:25.355076] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000002120 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.989 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.249 
15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.249 "name": "raid_bdev1", 00:16:30.249 "uuid": "c6fe1c1e-a5e0-4dae-b7a6-058c9dd45862", 00:16:30.249 "strip_size_kb": 0, 00:16:30.249 "state": "online", 00:16:30.249 "raid_level": "raid1", 00:16:30.249 "superblock": true, 00:16:30.249 "num_base_bdevs": 2, 00:16:30.249 "num_base_bdevs_discovered": 1, 00:16:30.249 "num_base_bdevs_operational": 1, 00:16:30.249 "base_bdevs_list": [ 00:16:30.249 { 00:16:30.249 "name": null, 00:16:30.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.249 "is_configured": false, 00:16:30.249 "data_offset": 2048, 00:16:30.249 "data_size": 63488 00:16:30.249 }, 00:16:30.249 { 00:16:30.249 "name": "BaseBdev2", 00:16:30.249 "uuid": "2dc1b930-d710-5c0c-b4ee-a2eec8f79da8", 00:16:30.249 "is_configured": true, 00:16:30.249 "data_offset": 2048, 00:16:30.249 "data_size": 63488 00:16:30.249 } 00:16:30.249 ] 00:16:30.249 }' 00:16:30.249 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.249 15:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.817 15:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:30.817 [2024-07-23 15:10:26.246422] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.817 [2024-07-23 15:10:26.246480] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.076 [2024-07-23 15:10:26.248971] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.076 [2024-07-23 15:10:26.249055] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.076 [2024-07-23 15:10:26.249121] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.076 [2024-07-23 15:10:26.249138] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name raid_bdev1, state offline 00:16:31.076 0 00:16:31.076 15:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 91158 00:16:31.076 15:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 91158 ']' 00:16:31.076 15:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 91158 00:16:31.076 15:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:16:31.076 15:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:31.076 15:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91158 00:16:31.076 15:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:31.076 15:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:31.076 killing process with pid 91158 00:16:31.076 15:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91158' 00:16:31.076 15:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 91158 00:16:31.076 [2024-07-23 15:10:26.304386] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:31.076 15:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 91158 00:16:31.076 [2024-07-23 15:10:26.320096] 
bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:31.336 15:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.L2u1VvpVqk 00:16:31.336 15:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:31.336 15:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:31.336 15:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:31.336 15:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:31.336 15:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:31.336 15:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:31.336 15:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:31.336 00:16:31.336 real 0m5.156s 00:16:31.336 user 0m7.598s 00:16:31.336 sys 0m0.923s 00:16:31.336 15:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:31.336 15:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.336 ************************************ 00:16:31.336 END TEST raid_write_error_test 00:16:31.336 ************************************ 00:16:31.336 15:10:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:31.336 15:10:26 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:16:31.336 15:10:26 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:31.336 15:10:26 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:31.336 15:10:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:31.336 15:10:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:31.336 15:10:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:31.336 ************************************ 00:16:31.336 START TEST raid_state_function_test 00:16:31.336 ************************************ 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:31.336 15:10:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:31.336 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=91311 00:16:31.337 Process raid pid: 91311 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 91311' 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 91311 /var/tmp/spdk-raid.sock 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 91311 ']' 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.337 15:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.337 [2024-07-23 15:10:26.707654] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
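raid_state_function_test, which starts here, drives the same RPCs against a plain bdev_svc app instead of bdevperf. Its first assertion, stripped of the xtrace plumbing, is roughly the following sketch (socket, commands, and sizes as in the trace below; the trailing .state selector is added to the jq filter only for brevity):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # None of the base bdevs exist yet, so the raid registers but cannot assemble
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # "configuring"
  $RPC bdev_malloc_create 32 512 -b BaseBdev1   # repeated later for BaseBdev2 and BaseBdev3

Each bdev_malloc_create is followed by another verify_raid_bdev_state pass, so in the successive JSON dumps below num_base_bdevs_discovered climbs from 0 to 3, and the state only flips from "configuring" to "online" once BaseBdev3 is created and claimed.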
00:16:31.337 [2024-07-23 15:10:26.707907] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.596 [2024-07-23 15:10:26.862075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.596 [2024-07-23 15:10:26.910997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.596 [2024-07-23 15:10:26.956777] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:32.533 15:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:32.533 15:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:16:32.533 15:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:32.533 [2024-07-23 15:10:27.814937] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:32.533 [2024-07-23 15:10:27.815002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:32.534 [2024-07-23 15:10:27.815014] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:32.534 [2024-07-23 15:10:27.815028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:32.534 [2024-07-23 15:10:27.815040] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:32.534 [2024-07-23 15:10:27.815054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:32.534 15:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:32.534 15:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:32.534 15:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:32.534 15:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:32.534 15:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:32.534 15:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:32.534 15:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.534 15:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.534 15:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.534 15:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.534 15:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.534 15:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.792 15:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.792 "name": "Existed_Raid", 00:16:32.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.792 
"strip_size_kb": 64, 00:16:32.792 "state": "configuring", 00:16:32.792 "raid_level": "raid0", 00:16:32.792 "superblock": false, 00:16:32.792 "num_base_bdevs": 3, 00:16:32.792 "num_base_bdevs_discovered": 0, 00:16:32.792 "num_base_bdevs_operational": 3, 00:16:32.792 "base_bdevs_list": [ 00:16:32.792 { 00:16:32.792 "name": "BaseBdev1", 00:16:32.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.792 "is_configured": false, 00:16:32.792 "data_offset": 0, 00:16:32.792 "data_size": 0 00:16:32.792 }, 00:16:32.792 { 00:16:32.792 "name": "BaseBdev2", 00:16:32.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.792 "is_configured": false, 00:16:32.792 "data_offset": 0, 00:16:32.792 "data_size": 0 00:16:32.792 }, 00:16:32.792 { 00:16:32.792 "name": "BaseBdev3", 00:16:32.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.792 "is_configured": false, 00:16:32.792 "data_offset": 0, 00:16:32.792 "data_size": 0 00:16:32.793 } 00:16:32.793 ] 00:16:32.793 }' 00:16:32.793 15:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.793 15:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.052 15:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:33.311 [2024-07-23 15:10:28.563012] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.311 [2024-07-23 15:10:28.563070] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:16:33.311 15:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:33.571 [2024-07-23 15:10:28.743086] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.571 [2024-07-23 15:10:28.743151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.571 [2024-07-23 15:10:28.743162] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.571 [2024-07-23 15:10:28.743193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.571 [2024-07-23 15:10:28.743201] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:33.571 [2024-07-23 15:10:28.743215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:33.571 15:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:33.571 BaseBdev1 00:16:33.571 [2024-07-23 15:10:28.988904] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.831 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:33.831 15:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:33.831 15:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:33.831 15:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:33.831 15:10:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:33.831 15:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:33.831 15:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.831 15:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:34.091 [ 00:16:34.091 { 00:16:34.091 "name": "BaseBdev1", 00:16:34.091 "aliases": [ 00:16:34.091 "6d22e643-41d1-4e31-901b-7b9b3a30b5c5" 00:16:34.091 ], 00:16:34.091 "product_name": "Malloc disk", 00:16:34.091 "block_size": 512, 00:16:34.091 "num_blocks": 65536, 00:16:34.091 "uuid": "6d22e643-41d1-4e31-901b-7b9b3a30b5c5", 00:16:34.091 "assigned_rate_limits": { 00:16:34.091 "rw_ios_per_sec": 0, 00:16:34.091 "rw_mbytes_per_sec": 0, 00:16:34.091 "r_mbytes_per_sec": 0, 00:16:34.091 "w_mbytes_per_sec": 0 00:16:34.091 }, 00:16:34.091 "claimed": true, 00:16:34.091 "claim_type": "exclusive_write", 00:16:34.091 "zoned": false, 00:16:34.091 "supported_io_types": { 00:16:34.091 "read": true, 00:16:34.091 "write": true, 00:16:34.091 "unmap": true, 00:16:34.091 "flush": true, 00:16:34.091 "reset": true, 00:16:34.091 "nvme_admin": false, 00:16:34.091 "nvme_io": false, 00:16:34.091 "nvme_io_md": false, 00:16:34.091 "write_zeroes": true, 00:16:34.091 "zcopy": true, 00:16:34.091 "get_zone_info": false, 00:16:34.091 "zone_management": false, 00:16:34.091 "zone_append": false, 00:16:34.091 "compare": false, 00:16:34.091 "compare_and_write": false, 00:16:34.091 "abort": true, 00:16:34.091 "seek_hole": false, 00:16:34.091 "seek_data": false, 00:16:34.091 "copy": true, 00:16:34.091 "nvme_iov_md": false 00:16:34.091 }, 00:16:34.091 "memory_domains": [ 00:16:34.091 { 00:16:34.091 "dma_device_id": "system", 00:16:34.091 "dma_device_type": 1 00:16:34.091 }, 00:16:34.091 { 00:16:34.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.091 "dma_device_type": 2 00:16:34.091 } 00:16:34.091 ], 00:16:34.091 "driver_specific": {} 00:16:34.091 } 00:16:34.091 ] 00:16:34.091 15:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:34.091 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:34.091 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:34.091 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:34.091 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:34.091 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:34.091 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:34.091 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.091 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.091 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.091 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.091 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.091 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.350 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.350 "name": "Existed_Raid", 00:16:34.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.350 "strip_size_kb": 64, 00:16:34.350 "state": "configuring", 00:16:34.350 "raid_level": "raid0", 00:16:34.350 "superblock": false, 00:16:34.350 "num_base_bdevs": 3, 00:16:34.350 "num_base_bdevs_discovered": 1, 00:16:34.350 "num_base_bdevs_operational": 3, 00:16:34.350 "base_bdevs_list": [ 00:16:34.350 { 00:16:34.350 "name": "BaseBdev1", 00:16:34.350 "uuid": "6d22e643-41d1-4e31-901b-7b9b3a30b5c5", 00:16:34.350 "is_configured": true, 00:16:34.350 "data_offset": 0, 00:16:34.350 "data_size": 65536 00:16:34.350 }, 00:16:34.350 { 00:16:34.350 "name": "BaseBdev2", 00:16:34.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.350 "is_configured": false, 00:16:34.350 "data_offset": 0, 00:16:34.350 "data_size": 0 00:16:34.350 }, 00:16:34.350 { 00:16:34.350 "name": "BaseBdev3", 00:16:34.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.350 "is_configured": false, 00:16:34.350 "data_offset": 0, 00:16:34.350 "data_size": 0 00:16:34.350 } 00:16:34.350 ] 00:16:34.350 }' 00:16:34.350 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.350 15:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.610 15:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:34.869 [2024-07-23 15:10:30.201307] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.869 [2024-07-23 15:10:30.201382] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:16:34.869 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:35.128 [2024-07-23 15:10:30.381427] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.128 [2024-07-23 15:10:30.383662] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:35.128 [2024-07-23 15:10:30.383714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:35.128 [2024-07-23 15:10:30.383726] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:35.128 [2024-07-23 15:10:30.383740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:35.128 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:35.128 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:35.128 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:35.128 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:35.128 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=configuring 00:16:35.128 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:35.129 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:35.129 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:35.129 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.129 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.129 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.129 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.129 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.129 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.388 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.388 "name": "Existed_Raid", 00:16:35.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.388 "strip_size_kb": 64, 00:16:35.388 "state": "configuring", 00:16:35.388 "raid_level": "raid0", 00:16:35.388 "superblock": false, 00:16:35.388 "num_base_bdevs": 3, 00:16:35.388 "num_base_bdevs_discovered": 1, 00:16:35.388 "num_base_bdevs_operational": 3, 00:16:35.388 "base_bdevs_list": [ 00:16:35.388 { 00:16:35.388 "name": "BaseBdev1", 00:16:35.388 "uuid": "6d22e643-41d1-4e31-901b-7b9b3a30b5c5", 00:16:35.388 "is_configured": true, 00:16:35.388 "data_offset": 0, 00:16:35.388 "data_size": 65536 00:16:35.388 }, 00:16:35.388 { 00:16:35.388 "name": "BaseBdev2", 00:16:35.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.388 "is_configured": false, 00:16:35.388 "data_offset": 0, 00:16:35.388 "data_size": 0 00:16:35.388 }, 00:16:35.388 { 00:16:35.388 "name": "BaseBdev3", 00:16:35.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.388 "is_configured": false, 00:16:35.388 "data_offset": 0, 00:16:35.388 "data_size": 0 00:16:35.388 } 00:16:35.388 ] 00:16:35.388 }' 00:16:35.388 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.388 15:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.647 15:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:35.906 [2024-07-23 15:10:31.160432] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.906 BaseBdev2 00:16:35.906 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:35.906 15:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:35.906 15:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:35.906 15:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:35.906 15:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:35.906 15:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:35.906 
15:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.165 15:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:36.425 [ 00:16:36.425 { 00:16:36.425 "name": "BaseBdev2", 00:16:36.425 "aliases": [ 00:16:36.425 "c545e090-207f-4439-828b-8db42dbe3765" 00:16:36.425 ], 00:16:36.425 "product_name": "Malloc disk", 00:16:36.425 "block_size": 512, 00:16:36.425 "num_blocks": 65536, 00:16:36.425 "uuid": "c545e090-207f-4439-828b-8db42dbe3765", 00:16:36.425 "assigned_rate_limits": { 00:16:36.425 "rw_ios_per_sec": 0, 00:16:36.425 "rw_mbytes_per_sec": 0, 00:16:36.425 "r_mbytes_per_sec": 0, 00:16:36.425 "w_mbytes_per_sec": 0 00:16:36.425 }, 00:16:36.425 "claimed": true, 00:16:36.425 "claim_type": "exclusive_write", 00:16:36.425 "zoned": false, 00:16:36.425 "supported_io_types": { 00:16:36.425 "read": true, 00:16:36.425 "write": true, 00:16:36.425 "unmap": true, 00:16:36.425 "flush": true, 00:16:36.425 "reset": true, 00:16:36.425 "nvme_admin": false, 00:16:36.425 "nvme_io": false, 00:16:36.425 "nvme_io_md": false, 00:16:36.425 "write_zeroes": true, 00:16:36.425 "zcopy": true, 00:16:36.425 "get_zone_info": false, 00:16:36.425 "zone_management": false, 00:16:36.425 "zone_append": false, 00:16:36.425 "compare": false, 00:16:36.425 "compare_and_write": false, 00:16:36.425 "abort": true, 00:16:36.425 "seek_hole": false, 00:16:36.425 "seek_data": false, 00:16:36.425 "copy": true, 00:16:36.425 "nvme_iov_md": false 00:16:36.425 }, 00:16:36.425 "memory_domains": [ 00:16:36.425 { 00:16:36.425 "dma_device_id": "system", 00:16:36.425 "dma_device_type": 1 00:16:36.425 }, 00:16:36.425 { 00:16:36.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.425 "dma_device_type": 2 00:16:36.425 } 00:16:36.425 ], 00:16:36.425 "driver_specific": {} 00:16:36.425 } 00:16:36.425 ] 00:16:36.425 15:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:36.425 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:36.425 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:36.425 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:36.425 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:36.425 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:36.425 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:36.425 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:36.425 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:36.425 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.425 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:36.425 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.425 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.425 15:10:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.425 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.685 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.685 "name": "Existed_Raid", 00:16:36.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.685 "strip_size_kb": 64, 00:16:36.685 "state": "configuring", 00:16:36.685 "raid_level": "raid0", 00:16:36.685 "superblock": false, 00:16:36.685 "num_base_bdevs": 3, 00:16:36.685 "num_base_bdevs_discovered": 2, 00:16:36.685 "num_base_bdevs_operational": 3, 00:16:36.685 "base_bdevs_list": [ 00:16:36.685 { 00:16:36.685 "name": "BaseBdev1", 00:16:36.685 "uuid": "6d22e643-41d1-4e31-901b-7b9b3a30b5c5", 00:16:36.685 "is_configured": true, 00:16:36.685 "data_offset": 0, 00:16:36.685 "data_size": 65536 00:16:36.685 }, 00:16:36.685 { 00:16:36.685 "name": "BaseBdev2", 00:16:36.685 "uuid": "c545e090-207f-4439-828b-8db42dbe3765", 00:16:36.685 "is_configured": true, 00:16:36.685 "data_offset": 0, 00:16:36.685 "data_size": 65536 00:16:36.685 }, 00:16:36.685 { 00:16:36.685 "name": "BaseBdev3", 00:16:36.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.685 "is_configured": false, 00:16:36.685 "data_offset": 0, 00:16:36.685 "data_size": 0 00:16:36.685 } 00:16:36.685 ] 00:16:36.685 }' 00:16:36.685 15:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.685 15:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.945 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:37.204 [2024-07-23 15:10:32.496342] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.204 [2024-07-23 15:10:32.496392] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:16:37.204 [2024-07-23 15:10:32.496409] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:37.204 [2024-07-23 15:10:32.496504] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:16:37.204 [2024-07-23 15:10:32.496871] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:16:37.204 [2024-07-23 15:10:32.496899] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:16:37.204 [2024-07-23 15:10:32.497131] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.204 BaseBdev3 00:16:37.204 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:37.204 15:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:37.204 15:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:37.204 15:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:37.204 15:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:37.204 15:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:37.204 15:10:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.464 15:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:37.723 [ 00:16:37.723 { 00:16:37.723 "name": "BaseBdev3", 00:16:37.723 "aliases": [ 00:16:37.723 "91826d64-7375-475d-b9c6-17b1dd04df21" 00:16:37.723 ], 00:16:37.723 "product_name": "Malloc disk", 00:16:37.723 "block_size": 512, 00:16:37.723 "num_blocks": 65536, 00:16:37.723 "uuid": "91826d64-7375-475d-b9c6-17b1dd04df21", 00:16:37.723 "assigned_rate_limits": { 00:16:37.723 "rw_ios_per_sec": 0, 00:16:37.723 "rw_mbytes_per_sec": 0, 00:16:37.723 "r_mbytes_per_sec": 0, 00:16:37.723 "w_mbytes_per_sec": 0 00:16:37.723 }, 00:16:37.723 "claimed": true, 00:16:37.723 "claim_type": "exclusive_write", 00:16:37.723 "zoned": false, 00:16:37.723 "supported_io_types": { 00:16:37.723 "read": true, 00:16:37.723 "write": true, 00:16:37.723 "unmap": true, 00:16:37.723 "flush": true, 00:16:37.723 "reset": true, 00:16:37.723 "nvme_admin": false, 00:16:37.723 "nvme_io": false, 00:16:37.723 "nvme_io_md": false, 00:16:37.723 "write_zeroes": true, 00:16:37.723 "zcopy": true, 00:16:37.723 "get_zone_info": false, 00:16:37.723 "zone_management": false, 00:16:37.723 "zone_append": false, 00:16:37.723 "compare": false, 00:16:37.723 "compare_and_write": false, 00:16:37.723 "abort": true, 00:16:37.723 "seek_hole": false, 00:16:37.723 "seek_data": false, 00:16:37.723 "copy": true, 00:16:37.723 "nvme_iov_md": false 00:16:37.723 }, 00:16:37.723 "memory_domains": [ 00:16:37.723 { 00:16:37.723 "dma_device_id": "system", 00:16:37.723 "dma_device_type": 1 00:16:37.723 }, 00:16:37.723 { 00:16:37.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.723 "dma_device_type": 2 00:16:37.723 } 00:16:37.723 ], 00:16:37.723 "driver_specific": {} 00:16:37.723 } 00:16:37.723 ] 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.723 15:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.983 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.983 "name": "Existed_Raid", 00:16:37.983 "uuid": "08306680-335f-4349-9a8d-6f62e3fbbb2b", 00:16:37.983 "strip_size_kb": 64, 00:16:37.983 "state": "online", 00:16:37.983 "raid_level": "raid0", 00:16:37.983 "superblock": false, 00:16:37.983 "num_base_bdevs": 3, 00:16:37.983 "num_base_bdevs_discovered": 3, 00:16:37.983 "num_base_bdevs_operational": 3, 00:16:37.983 "base_bdevs_list": [ 00:16:37.983 { 00:16:37.983 "name": "BaseBdev1", 00:16:37.983 "uuid": "6d22e643-41d1-4e31-901b-7b9b3a30b5c5", 00:16:37.983 "is_configured": true, 00:16:37.983 "data_offset": 0, 00:16:37.983 "data_size": 65536 00:16:37.983 }, 00:16:37.983 { 00:16:37.983 "name": "BaseBdev2", 00:16:37.983 "uuid": "c545e090-207f-4439-828b-8db42dbe3765", 00:16:37.983 "is_configured": true, 00:16:37.983 "data_offset": 0, 00:16:37.983 "data_size": 65536 00:16:37.983 }, 00:16:37.983 { 00:16:37.983 "name": "BaseBdev3", 00:16:37.983 "uuid": "91826d64-7375-475d-b9c6-17b1dd04df21", 00:16:37.983 "is_configured": true, 00:16:37.983 "data_offset": 0, 00:16:37.983 "data_size": 65536 00:16:37.983 } 00:16:37.983 ] 00:16:37.983 }' 00:16:37.983 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.983 15:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.240 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:38.240 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:38.240 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:38.240 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:38.240 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:38.240 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:38.240 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:38.240 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:38.498 [2024-07-23 15:10:33.681006] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.498 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:38.498 "name": "Existed_Raid", 00:16:38.498 "aliases": [ 00:16:38.498 "08306680-335f-4349-9a8d-6f62e3fbbb2b" 00:16:38.498 ], 00:16:38.498 "product_name": "Raid Volume", 00:16:38.498 "block_size": 512, 00:16:38.498 "num_blocks": 196608, 00:16:38.498 "uuid": "08306680-335f-4349-9a8d-6f62e3fbbb2b", 00:16:38.498 "assigned_rate_limits": { 00:16:38.498 "rw_ios_per_sec": 0, 00:16:38.498 "rw_mbytes_per_sec": 0, 00:16:38.498 "r_mbytes_per_sec": 0, 00:16:38.498 "w_mbytes_per_sec": 0 00:16:38.498 }, 00:16:38.498 "claimed": false, 00:16:38.498 "zoned": false, 00:16:38.498 "supported_io_types": { 00:16:38.498 "read": true, 00:16:38.498 "write": true, 00:16:38.498 "unmap": true, 00:16:38.498 "flush": true, 00:16:38.498 "reset": true, 
00:16:38.498 "nvme_admin": false, 00:16:38.498 "nvme_io": false, 00:16:38.498 "nvme_io_md": false, 00:16:38.498 "write_zeroes": true, 00:16:38.498 "zcopy": false, 00:16:38.498 "get_zone_info": false, 00:16:38.498 "zone_management": false, 00:16:38.498 "zone_append": false, 00:16:38.498 "compare": false, 00:16:38.498 "compare_and_write": false, 00:16:38.498 "abort": false, 00:16:38.498 "seek_hole": false, 00:16:38.498 "seek_data": false, 00:16:38.498 "copy": false, 00:16:38.498 "nvme_iov_md": false 00:16:38.498 }, 00:16:38.498 "memory_domains": [ 00:16:38.498 { 00:16:38.498 "dma_device_id": "system", 00:16:38.498 "dma_device_type": 1 00:16:38.498 }, 00:16:38.498 { 00:16:38.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.498 "dma_device_type": 2 00:16:38.498 }, 00:16:38.498 { 00:16:38.498 "dma_device_id": "system", 00:16:38.498 "dma_device_type": 1 00:16:38.498 }, 00:16:38.498 { 00:16:38.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.498 "dma_device_type": 2 00:16:38.498 }, 00:16:38.498 { 00:16:38.498 "dma_device_id": "system", 00:16:38.498 "dma_device_type": 1 00:16:38.498 }, 00:16:38.498 { 00:16:38.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.498 "dma_device_type": 2 00:16:38.498 } 00:16:38.498 ], 00:16:38.498 "driver_specific": { 00:16:38.498 "raid": { 00:16:38.498 "uuid": "08306680-335f-4349-9a8d-6f62e3fbbb2b", 00:16:38.498 "strip_size_kb": 64, 00:16:38.498 "state": "online", 00:16:38.498 "raid_level": "raid0", 00:16:38.498 "superblock": false, 00:16:38.498 "num_base_bdevs": 3, 00:16:38.498 "num_base_bdevs_discovered": 3, 00:16:38.498 "num_base_bdevs_operational": 3, 00:16:38.498 "base_bdevs_list": [ 00:16:38.498 { 00:16:38.498 "name": "BaseBdev1", 00:16:38.498 "uuid": "6d22e643-41d1-4e31-901b-7b9b3a30b5c5", 00:16:38.498 "is_configured": true, 00:16:38.498 "data_offset": 0, 00:16:38.498 "data_size": 65536 00:16:38.498 }, 00:16:38.498 { 00:16:38.498 "name": "BaseBdev2", 00:16:38.498 "uuid": "c545e090-207f-4439-828b-8db42dbe3765", 00:16:38.498 "is_configured": true, 00:16:38.498 "data_offset": 0, 00:16:38.498 "data_size": 65536 00:16:38.498 }, 00:16:38.498 { 00:16:38.498 "name": "BaseBdev3", 00:16:38.498 "uuid": "91826d64-7375-475d-b9c6-17b1dd04df21", 00:16:38.498 "is_configured": true, 00:16:38.498 "data_offset": 0, 00:16:38.498 "data_size": 65536 00:16:38.498 } 00:16:38.498 ] 00:16:38.498 } 00:16:38.498 } 00:16:38.498 }' 00:16:38.498 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.498 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:38.498 BaseBdev2 00:16:38.498 BaseBdev3' 00:16:38.498 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:38.498 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:38.498 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:38.762 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:38.762 "name": "BaseBdev1", 00:16:38.762 "aliases": [ 00:16:38.762 "6d22e643-41d1-4e31-901b-7b9b3a30b5c5" 00:16:38.762 ], 00:16:38.762 "product_name": "Malloc disk", 00:16:38.762 "block_size": 512, 00:16:38.762 "num_blocks": 65536, 00:16:38.762 "uuid": "6d22e643-41d1-4e31-901b-7b9b3a30b5c5", 00:16:38.762 
"assigned_rate_limits": { 00:16:38.762 "rw_ios_per_sec": 0, 00:16:38.762 "rw_mbytes_per_sec": 0, 00:16:38.762 "r_mbytes_per_sec": 0, 00:16:38.762 "w_mbytes_per_sec": 0 00:16:38.762 }, 00:16:38.762 "claimed": true, 00:16:38.762 "claim_type": "exclusive_write", 00:16:38.762 "zoned": false, 00:16:38.762 "supported_io_types": { 00:16:38.762 "read": true, 00:16:38.762 "write": true, 00:16:38.762 "unmap": true, 00:16:38.762 "flush": true, 00:16:38.762 "reset": true, 00:16:38.762 "nvme_admin": false, 00:16:38.762 "nvme_io": false, 00:16:38.762 "nvme_io_md": false, 00:16:38.762 "write_zeroes": true, 00:16:38.762 "zcopy": true, 00:16:38.762 "get_zone_info": false, 00:16:38.762 "zone_management": false, 00:16:38.762 "zone_append": false, 00:16:38.762 "compare": false, 00:16:38.762 "compare_and_write": false, 00:16:38.762 "abort": true, 00:16:38.762 "seek_hole": false, 00:16:38.762 "seek_data": false, 00:16:38.762 "copy": true, 00:16:38.762 "nvme_iov_md": false 00:16:38.762 }, 00:16:38.762 "memory_domains": [ 00:16:38.762 { 00:16:38.762 "dma_device_id": "system", 00:16:38.762 "dma_device_type": 1 00:16:38.762 }, 00:16:38.762 { 00:16:38.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.762 "dma_device_type": 2 00:16:38.762 } 00:16:38.762 ], 00:16:38.762 "driver_specific": {} 00:16:38.762 }' 00:16:38.762 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:38.762 15:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:38.762 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:38.762 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:38.762 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:38.762 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:38.762 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:38.762 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:38.762 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:38.762 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:38.762 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:38.762 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:38.762 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:38.762 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:38.762 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:39.021 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:39.021 "name": "BaseBdev2", 00:16:39.021 "aliases": [ 00:16:39.021 "c545e090-207f-4439-828b-8db42dbe3765" 00:16:39.021 ], 00:16:39.021 "product_name": "Malloc disk", 00:16:39.021 "block_size": 512, 00:16:39.021 "num_blocks": 65536, 00:16:39.021 "uuid": "c545e090-207f-4439-828b-8db42dbe3765", 00:16:39.021 "assigned_rate_limits": { 00:16:39.021 "rw_ios_per_sec": 0, 00:16:39.021 "rw_mbytes_per_sec": 0, 00:16:39.021 "r_mbytes_per_sec": 0, 00:16:39.021 "w_mbytes_per_sec": 0 00:16:39.021 }, 00:16:39.021 
"claimed": true, 00:16:39.021 "claim_type": "exclusive_write", 00:16:39.021 "zoned": false, 00:16:39.021 "supported_io_types": { 00:16:39.021 "read": true, 00:16:39.021 "write": true, 00:16:39.021 "unmap": true, 00:16:39.021 "flush": true, 00:16:39.021 "reset": true, 00:16:39.021 "nvme_admin": false, 00:16:39.021 "nvme_io": false, 00:16:39.021 "nvme_io_md": false, 00:16:39.021 "write_zeroes": true, 00:16:39.021 "zcopy": true, 00:16:39.021 "get_zone_info": false, 00:16:39.021 "zone_management": false, 00:16:39.021 "zone_append": false, 00:16:39.021 "compare": false, 00:16:39.021 "compare_and_write": false, 00:16:39.021 "abort": true, 00:16:39.021 "seek_hole": false, 00:16:39.021 "seek_data": false, 00:16:39.021 "copy": true, 00:16:39.021 "nvme_iov_md": false 00:16:39.021 }, 00:16:39.021 "memory_domains": [ 00:16:39.021 { 00:16:39.021 "dma_device_id": "system", 00:16:39.021 "dma_device_type": 1 00:16:39.021 }, 00:16:39.021 { 00:16:39.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.021 "dma_device_type": 2 00:16:39.021 } 00:16:39.021 ], 00:16:39.021 "driver_specific": {} 00:16:39.021 }' 00:16:39.021 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.021 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.021 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:39.021 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.021 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.021 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:39.021 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.021 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.021 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:39.021 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.022 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.022 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:39.022 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:39.022 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:39.022 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:39.281 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:39.281 "name": "BaseBdev3", 00:16:39.281 "aliases": [ 00:16:39.281 "91826d64-7375-475d-b9c6-17b1dd04df21" 00:16:39.281 ], 00:16:39.281 "product_name": "Malloc disk", 00:16:39.281 "block_size": 512, 00:16:39.281 "num_blocks": 65536, 00:16:39.281 "uuid": "91826d64-7375-475d-b9c6-17b1dd04df21", 00:16:39.281 "assigned_rate_limits": { 00:16:39.281 "rw_ios_per_sec": 0, 00:16:39.281 "rw_mbytes_per_sec": 0, 00:16:39.281 "r_mbytes_per_sec": 0, 00:16:39.281 "w_mbytes_per_sec": 0 00:16:39.281 }, 00:16:39.281 "claimed": true, 00:16:39.281 "claim_type": "exclusive_write", 00:16:39.281 "zoned": false, 00:16:39.281 "supported_io_types": { 00:16:39.281 "read": true, 00:16:39.281 "write": true, 00:16:39.281 
"unmap": true, 00:16:39.281 "flush": true, 00:16:39.281 "reset": true, 00:16:39.281 "nvme_admin": false, 00:16:39.281 "nvme_io": false, 00:16:39.281 "nvme_io_md": false, 00:16:39.281 "write_zeroes": true, 00:16:39.281 "zcopy": true, 00:16:39.281 "get_zone_info": false, 00:16:39.281 "zone_management": false, 00:16:39.281 "zone_append": false, 00:16:39.281 "compare": false, 00:16:39.281 "compare_and_write": false, 00:16:39.281 "abort": true, 00:16:39.281 "seek_hole": false, 00:16:39.281 "seek_data": false, 00:16:39.281 "copy": true, 00:16:39.281 "nvme_iov_md": false 00:16:39.281 }, 00:16:39.281 "memory_domains": [ 00:16:39.281 { 00:16:39.281 "dma_device_id": "system", 00:16:39.281 "dma_device_type": 1 00:16:39.281 }, 00:16:39.281 { 00:16:39.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.281 "dma_device_type": 2 00:16:39.281 } 00:16:39.281 ], 00:16:39.281 "driver_specific": {} 00:16:39.281 }' 00:16:39.281 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.540 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.540 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:39.540 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.540 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.540 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:39.540 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.540 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.540 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:39.540 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.540 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.540 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:39.540 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:39.540 [2024-07-23 15:10:34.949070] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:39.540 [2024-07-23 15:10:34.949132] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.540 [2024-07-23 15:10:34.949204] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=offline 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.799 15:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.799 15:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.799 "name": "Existed_Raid", 00:16:39.799 "uuid": "08306680-335f-4349-9a8d-6f62e3fbbb2b", 00:16:39.799 "strip_size_kb": 64, 00:16:39.799 "state": "offline", 00:16:39.800 "raid_level": "raid0", 00:16:39.800 "superblock": false, 00:16:39.800 "num_base_bdevs": 3, 00:16:39.800 "num_base_bdevs_discovered": 2, 00:16:39.800 "num_base_bdevs_operational": 2, 00:16:39.800 "base_bdevs_list": [ 00:16:39.800 { 00:16:39.800 "name": null, 00:16:39.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.800 "is_configured": false, 00:16:39.800 "data_offset": 0, 00:16:39.800 "data_size": 65536 00:16:39.800 }, 00:16:39.800 { 00:16:39.800 "name": "BaseBdev2", 00:16:39.800 "uuid": "c545e090-207f-4439-828b-8db42dbe3765", 00:16:39.800 "is_configured": true, 00:16:39.800 "data_offset": 0, 00:16:39.800 "data_size": 65536 00:16:39.800 }, 00:16:39.800 { 00:16:39.800 "name": "BaseBdev3", 00:16:39.800 "uuid": "91826d64-7375-475d-b9c6-17b1dd04df21", 00:16:39.800 "is_configured": true, 00:16:39.800 "data_offset": 0, 00:16:39.800 "data_size": 65536 00:16:39.800 } 00:16:39.800 ] 00:16:39.800 }' 00:16:39.800 15:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.800 15:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.368 15:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:40.368 15:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:40.368 15:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:40.368 15:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.368 15:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:40.368 15:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.368 15:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:40.628 [2024-07-23 15:10:36.013969] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
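The trace above drops a raid0 member with bdev_malloc_delete and then re-reads the array over the same RPC socket; since has_redundancy returns 1 for raid0, the expected state flips from online to offline. A minimal standalone sketch of that check, assuming only the rpc.py path, socket path, and bdev names that appear in this log, would be:

#!/usr/bin/env bash
# Sketch only, not part of the captured test run.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

# raid0 carries no redundancy, so deleting one base bdev should take the array offline.
"$RPC" -s "$SOCK" bdev_malloc_delete BaseBdev1

# Read back the raid bdev and extract its state, mirroring the jq filter used in the trace.
state=$("$RPC" -s "$SOCK" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state')
[[ "$state" == "offline" ]] || { echo "unexpected state: $state"; exit 1; }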
00:16:40.628 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:40.628 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:40.628 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.628 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:40.888 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:40.888 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.888 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:41.147 [2024-07-23 15:10:36.430644] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:41.147 [2024-07-23 15:10:36.430730] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:16:41.147 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:41.147 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:41.147 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.147 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:41.406 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:41.406 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:41.406 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:41.406 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:41.406 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:41.406 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:41.666 BaseBdev2 00:16:41.666 15:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:41.666 15:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:41.666 15:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:41.666 15:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:41.666 15:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:41.666 15:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:41.666 15:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.666 15:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:42.008 [ 00:16:42.008 { 00:16:42.008 "name": "BaseBdev2", 
00:16:42.008 "aliases": [ 00:16:42.008 "8ccd5792-6b91-496f-90c5-6c2f2df6c858" 00:16:42.008 ], 00:16:42.008 "product_name": "Malloc disk", 00:16:42.008 "block_size": 512, 00:16:42.008 "num_blocks": 65536, 00:16:42.008 "uuid": "8ccd5792-6b91-496f-90c5-6c2f2df6c858", 00:16:42.008 "assigned_rate_limits": { 00:16:42.008 "rw_ios_per_sec": 0, 00:16:42.008 "rw_mbytes_per_sec": 0, 00:16:42.008 "r_mbytes_per_sec": 0, 00:16:42.008 "w_mbytes_per_sec": 0 00:16:42.008 }, 00:16:42.008 "claimed": false, 00:16:42.008 "zoned": false, 00:16:42.008 "supported_io_types": { 00:16:42.008 "read": true, 00:16:42.008 "write": true, 00:16:42.008 "unmap": true, 00:16:42.008 "flush": true, 00:16:42.008 "reset": true, 00:16:42.008 "nvme_admin": false, 00:16:42.008 "nvme_io": false, 00:16:42.008 "nvme_io_md": false, 00:16:42.008 "write_zeroes": true, 00:16:42.008 "zcopy": true, 00:16:42.008 "get_zone_info": false, 00:16:42.008 "zone_management": false, 00:16:42.008 "zone_append": false, 00:16:42.008 "compare": false, 00:16:42.008 "compare_and_write": false, 00:16:42.008 "abort": true, 00:16:42.008 "seek_hole": false, 00:16:42.008 "seek_data": false, 00:16:42.008 "copy": true, 00:16:42.008 "nvme_iov_md": false 00:16:42.008 }, 00:16:42.008 "memory_domains": [ 00:16:42.008 { 00:16:42.008 "dma_device_id": "system", 00:16:42.008 "dma_device_type": 1 00:16:42.008 }, 00:16:42.008 { 00:16:42.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.008 "dma_device_type": 2 00:16:42.008 } 00:16:42.008 ], 00:16:42.008 "driver_specific": {} 00:16:42.008 } 00:16:42.008 ] 00:16:42.008 15:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:42.008 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:42.008 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:42.008 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:42.008 BaseBdev3 00:16:42.008 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:42.008 15:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:42.008 15:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:42.008 15:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:42.008 15:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:42.008 15:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:42.008 15:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:42.267 15:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:42.526 [ 00:16:42.526 { 00:16:42.526 "name": "BaseBdev3", 00:16:42.526 "aliases": [ 00:16:42.526 "88d9b1b2-9bbc-42b1-9540-6f0f740314b6" 00:16:42.526 ], 00:16:42.526 "product_name": "Malloc disk", 00:16:42.526 "block_size": 512, 00:16:42.526 "num_blocks": 65536, 00:16:42.526 "uuid": "88d9b1b2-9bbc-42b1-9540-6f0f740314b6", 00:16:42.526 "assigned_rate_limits": { 00:16:42.526 "rw_ios_per_sec": 0, 
00:16:42.526 "rw_mbytes_per_sec": 0, 00:16:42.526 "r_mbytes_per_sec": 0, 00:16:42.526 "w_mbytes_per_sec": 0 00:16:42.526 }, 00:16:42.526 "claimed": false, 00:16:42.526 "zoned": false, 00:16:42.526 "supported_io_types": { 00:16:42.526 "read": true, 00:16:42.526 "write": true, 00:16:42.526 "unmap": true, 00:16:42.526 "flush": true, 00:16:42.526 "reset": true, 00:16:42.526 "nvme_admin": false, 00:16:42.526 "nvme_io": false, 00:16:42.526 "nvme_io_md": false, 00:16:42.526 "write_zeroes": true, 00:16:42.526 "zcopy": true, 00:16:42.526 "get_zone_info": false, 00:16:42.526 "zone_management": false, 00:16:42.526 "zone_append": false, 00:16:42.526 "compare": false, 00:16:42.526 "compare_and_write": false, 00:16:42.526 "abort": true, 00:16:42.526 "seek_hole": false, 00:16:42.526 "seek_data": false, 00:16:42.526 "copy": true, 00:16:42.526 "nvme_iov_md": false 00:16:42.526 }, 00:16:42.526 "memory_domains": [ 00:16:42.527 { 00:16:42.527 "dma_device_id": "system", 00:16:42.527 "dma_device_type": 1 00:16:42.527 }, 00:16:42.527 { 00:16:42.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.527 "dma_device_type": 2 00:16:42.527 } 00:16:42.527 ], 00:16:42.527 "driver_specific": {} 00:16:42.527 } 00:16:42.527 ] 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:42.527 [2024-07-23 15:10:37.930921] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:42.527 [2024-07-23 15:10:37.930990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:42.527 [2024-07-23 15:10:37.931029] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.527 [2024-07-23 15:10:37.933324] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.527 15:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.786 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.786 "name": "Existed_Raid", 00:16:42.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.786 "strip_size_kb": 64, 00:16:42.786 "state": "configuring", 00:16:42.786 "raid_level": "raid0", 00:16:42.786 "superblock": false, 00:16:42.786 "num_base_bdevs": 3, 00:16:42.786 "num_base_bdevs_discovered": 2, 00:16:42.786 "num_base_bdevs_operational": 3, 00:16:42.786 "base_bdevs_list": [ 00:16:42.786 { 00:16:42.786 "name": "BaseBdev1", 00:16:42.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.786 "is_configured": false, 00:16:42.786 "data_offset": 0, 00:16:42.786 "data_size": 0 00:16:42.786 }, 00:16:42.786 { 00:16:42.786 "name": "BaseBdev2", 00:16:42.786 "uuid": "8ccd5792-6b91-496f-90c5-6c2f2df6c858", 00:16:42.786 "is_configured": true, 00:16:42.786 "data_offset": 0, 00:16:42.786 "data_size": 65536 00:16:42.786 }, 00:16:42.786 { 00:16:42.786 "name": "BaseBdev3", 00:16:42.786 "uuid": "88d9b1b2-9bbc-42b1-9540-6f0f740314b6", 00:16:42.786 "is_configured": true, 00:16:42.786 "data_offset": 0, 00:16:42.786 "data_size": 65536 00:16:42.786 } 00:16:42.786 ] 00:16:42.786 }' 00:16:42.786 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.786 15:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.045 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:43.304 [2024-07-23 15:10:38.687088] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:43.304 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:43.304 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:43.304 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:43.304 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:43.304 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:43.304 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:43.304 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.304 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.304 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.304 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.304 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.304 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.562 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.562 "name": "Existed_Raid", 
00:16:43.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.562 "strip_size_kb": 64, 00:16:43.562 "state": "configuring", 00:16:43.562 "raid_level": "raid0", 00:16:43.562 "superblock": false, 00:16:43.562 "num_base_bdevs": 3, 00:16:43.562 "num_base_bdevs_discovered": 1, 00:16:43.562 "num_base_bdevs_operational": 3, 00:16:43.562 "base_bdevs_list": [ 00:16:43.562 { 00:16:43.562 "name": "BaseBdev1", 00:16:43.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.562 "is_configured": false, 00:16:43.562 "data_offset": 0, 00:16:43.562 "data_size": 0 00:16:43.562 }, 00:16:43.562 { 00:16:43.562 "name": null, 00:16:43.562 "uuid": "8ccd5792-6b91-496f-90c5-6c2f2df6c858", 00:16:43.562 "is_configured": false, 00:16:43.562 "data_offset": 0, 00:16:43.562 "data_size": 65536 00:16:43.562 }, 00:16:43.562 { 00:16:43.562 "name": "BaseBdev3", 00:16:43.562 "uuid": "88d9b1b2-9bbc-42b1-9540-6f0f740314b6", 00:16:43.562 "is_configured": true, 00:16:43.562 "data_offset": 0, 00:16:43.562 "data_size": 65536 00:16:43.562 } 00:16:43.562 ] 00:16:43.562 }' 00:16:43.563 15:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.563 15:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.130 15:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.130 15:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:44.130 15:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:44.130 15:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:44.389 [2024-07-23 15:10:39.714859] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.389 BaseBdev1 00:16:44.389 15:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:44.389 15:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:44.389 15:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:44.389 15:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:44.389 15:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:44.389 15:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:44.389 15:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:44.647 15:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:44.906 [ 00:16:44.906 { 00:16:44.906 "name": "BaseBdev1", 00:16:44.906 "aliases": [ 00:16:44.906 "5d54adc6-4cbc-4248-abba-acb6569c9353" 00:16:44.906 ], 00:16:44.906 "product_name": "Malloc disk", 00:16:44.906 "block_size": 512, 00:16:44.906 "num_blocks": 65536, 00:16:44.906 "uuid": "5d54adc6-4cbc-4248-abba-acb6569c9353", 00:16:44.906 "assigned_rate_limits": { 00:16:44.906 "rw_ios_per_sec": 0, 00:16:44.906 "rw_mbytes_per_sec": 0, 00:16:44.906 
"r_mbytes_per_sec": 0, 00:16:44.906 "w_mbytes_per_sec": 0 00:16:44.906 }, 00:16:44.906 "claimed": true, 00:16:44.906 "claim_type": "exclusive_write", 00:16:44.906 "zoned": false, 00:16:44.906 "supported_io_types": { 00:16:44.906 "read": true, 00:16:44.906 "write": true, 00:16:44.906 "unmap": true, 00:16:44.906 "flush": true, 00:16:44.906 "reset": true, 00:16:44.906 "nvme_admin": false, 00:16:44.906 "nvme_io": false, 00:16:44.906 "nvme_io_md": false, 00:16:44.906 "write_zeroes": true, 00:16:44.906 "zcopy": true, 00:16:44.906 "get_zone_info": false, 00:16:44.906 "zone_management": false, 00:16:44.906 "zone_append": false, 00:16:44.906 "compare": false, 00:16:44.906 "compare_and_write": false, 00:16:44.906 "abort": true, 00:16:44.906 "seek_hole": false, 00:16:44.906 "seek_data": false, 00:16:44.906 "copy": true, 00:16:44.906 "nvme_iov_md": false 00:16:44.906 }, 00:16:44.906 "memory_domains": [ 00:16:44.906 { 00:16:44.906 "dma_device_id": "system", 00:16:44.906 "dma_device_type": 1 00:16:44.906 }, 00:16:44.906 { 00:16:44.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.906 "dma_device_type": 2 00:16:44.906 } 00:16:44.906 ], 00:16:44.906 "driver_specific": {} 00:16:44.906 } 00:16:44.906 ] 00:16:44.906 15:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:44.906 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:44.906 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:44.906 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:44.906 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:44.906 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:44.906 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:44.906 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.906 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.906 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.906 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.906 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.906 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.165 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.165 "name": "Existed_Raid", 00:16:45.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.165 "strip_size_kb": 64, 00:16:45.165 "state": "configuring", 00:16:45.165 "raid_level": "raid0", 00:16:45.165 "superblock": false, 00:16:45.165 "num_base_bdevs": 3, 00:16:45.165 "num_base_bdevs_discovered": 2, 00:16:45.165 "num_base_bdevs_operational": 3, 00:16:45.165 "base_bdevs_list": [ 00:16:45.165 { 00:16:45.165 "name": "BaseBdev1", 00:16:45.165 "uuid": "5d54adc6-4cbc-4248-abba-acb6569c9353", 00:16:45.165 "is_configured": true, 00:16:45.165 "data_offset": 0, 00:16:45.165 "data_size": 65536 00:16:45.165 }, 00:16:45.165 { 00:16:45.165 "name": 
null, 00:16:45.165 "uuid": "8ccd5792-6b91-496f-90c5-6c2f2df6c858", 00:16:45.165 "is_configured": false, 00:16:45.165 "data_offset": 0, 00:16:45.165 "data_size": 65536 00:16:45.165 }, 00:16:45.165 { 00:16:45.165 "name": "BaseBdev3", 00:16:45.165 "uuid": "88d9b1b2-9bbc-42b1-9540-6f0f740314b6", 00:16:45.165 "is_configured": true, 00:16:45.165 "data_offset": 0, 00:16:45.165 "data_size": 65536 00:16:45.165 } 00:16:45.165 ] 00:16:45.165 }' 00:16:45.165 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.165 15:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.443 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.443 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:45.702 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:45.702 15:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:45.960 [2024-07-23 15:10:41.179301] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.960 "name": "Existed_Raid", 00:16:45.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.960 "strip_size_kb": 64, 00:16:45.960 "state": "configuring", 00:16:45.960 "raid_level": "raid0", 00:16:45.960 "superblock": false, 00:16:45.960 "num_base_bdevs": 3, 00:16:45.960 "num_base_bdevs_discovered": 1, 00:16:45.960 "num_base_bdevs_operational": 3, 00:16:45.960 "base_bdevs_list": [ 00:16:45.960 { 00:16:45.960 "name": "BaseBdev1", 00:16:45.960 "uuid": "5d54adc6-4cbc-4248-abba-acb6569c9353", 00:16:45.960 "is_configured": true, 00:16:45.960 "data_offset": 0, 00:16:45.960 "data_size": 65536 
00:16:45.960 }, 00:16:45.960 { 00:16:45.960 "name": null, 00:16:45.960 "uuid": "8ccd5792-6b91-496f-90c5-6c2f2df6c858", 00:16:45.960 "is_configured": false, 00:16:45.960 "data_offset": 0, 00:16:45.960 "data_size": 65536 00:16:45.960 }, 00:16:45.960 { 00:16:45.960 "name": null, 00:16:45.960 "uuid": "88d9b1b2-9bbc-42b1-9540-6f0f740314b6", 00:16:45.960 "is_configured": false, 00:16:45.960 "data_offset": 0, 00:16:45.960 "data_size": 65536 00:16:45.960 } 00:16:45.960 ] 00:16:45.960 }' 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.960 15:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.527 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:46.527 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.527 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:46.527 15:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:46.786 [2024-07-23 15:10:42.123546] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.786 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:46.786 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:46.786 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:46.786 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:46.786 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:46.786 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:46.786 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.786 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.786 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.786 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.786 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.786 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.045 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.045 "name": "Existed_Raid", 00:16:47.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.045 "strip_size_kb": 64, 00:16:47.045 "state": "configuring", 00:16:47.045 "raid_level": "raid0", 00:16:47.045 "superblock": false, 00:16:47.045 "num_base_bdevs": 3, 00:16:47.045 "num_base_bdevs_discovered": 2, 00:16:47.045 "num_base_bdevs_operational": 3, 00:16:47.045 "base_bdevs_list": [ 00:16:47.045 { 00:16:47.045 "name": "BaseBdev1", 00:16:47.045 "uuid": "5d54adc6-4cbc-4248-abba-acb6569c9353", 00:16:47.045 
"is_configured": true, 00:16:47.045 "data_offset": 0, 00:16:47.045 "data_size": 65536 00:16:47.045 }, 00:16:47.045 { 00:16:47.045 "name": null, 00:16:47.045 "uuid": "8ccd5792-6b91-496f-90c5-6c2f2df6c858", 00:16:47.045 "is_configured": false, 00:16:47.045 "data_offset": 0, 00:16:47.045 "data_size": 65536 00:16:47.045 }, 00:16:47.045 { 00:16:47.045 "name": "BaseBdev3", 00:16:47.045 "uuid": "88d9b1b2-9bbc-42b1-9540-6f0f740314b6", 00:16:47.045 "is_configured": true, 00:16:47.045 "data_offset": 0, 00:16:47.045 "data_size": 65536 00:16:47.045 } 00:16:47.045 ] 00:16:47.045 }' 00:16:47.045 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.045 15:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.303 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.303 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:47.562 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:47.562 15:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:47.821 [2024-07-23 15:10:43.079762] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:47.821 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:47.821 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:47.821 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:47.821 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:47.821 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:47.821 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:47.821 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.821 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.821 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.821 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.821 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.821 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.080 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.080 "name": "Existed_Raid", 00:16:48.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.080 "strip_size_kb": 64, 00:16:48.080 "state": "configuring", 00:16:48.080 "raid_level": "raid0", 00:16:48.080 "superblock": false, 00:16:48.080 "num_base_bdevs": 3, 00:16:48.080 "num_base_bdevs_discovered": 1, 00:16:48.080 "num_base_bdevs_operational": 3, 00:16:48.080 "base_bdevs_list": [ 00:16:48.080 { 00:16:48.080 "name": null, 00:16:48.080 "uuid": 
"5d54adc6-4cbc-4248-abba-acb6569c9353", 00:16:48.080 "is_configured": false, 00:16:48.080 "data_offset": 0, 00:16:48.080 "data_size": 65536 00:16:48.080 }, 00:16:48.080 { 00:16:48.080 "name": null, 00:16:48.080 "uuid": "8ccd5792-6b91-496f-90c5-6c2f2df6c858", 00:16:48.080 "is_configured": false, 00:16:48.080 "data_offset": 0, 00:16:48.080 "data_size": 65536 00:16:48.080 }, 00:16:48.080 { 00:16:48.080 "name": "BaseBdev3", 00:16:48.080 "uuid": "88d9b1b2-9bbc-42b1-9540-6f0f740314b6", 00:16:48.080 "is_configured": true, 00:16:48.080 "data_offset": 0, 00:16:48.080 "data_size": 65536 00:16:48.080 } 00:16:48.080 ] 00:16:48.080 }' 00:16:48.080 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.080 15:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.338 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.338 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:48.596 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:48.596 15:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:48.854 [2024-07-23 15:10:44.120592] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.854 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:48.854 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:48.854 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:48.855 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:48.855 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:48.855 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:48.855 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.855 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.855 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.855 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.855 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.855 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.113 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.113 "name": "Existed_Raid", 00:16:49.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.113 "strip_size_kb": 64, 00:16:49.113 "state": "configuring", 00:16:49.113 "raid_level": "raid0", 00:16:49.113 "superblock": false, 00:16:49.113 "num_base_bdevs": 3, 00:16:49.113 "num_base_bdevs_discovered": 2, 00:16:49.113 "num_base_bdevs_operational": 3, 00:16:49.113 
"base_bdevs_list": [ 00:16:49.113 { 00:16:49.113 "name": null, 00:16:49.113 "uuid": "5d54adc6-4cbc-4248-abba-acb6569c9353", 00:16:49.113 "is_configured": false, 00:16:49.113 "data_offset": 0, 00:16:49.113 "data_size": 65536 00:16:49.113 }, 00:16:49.113 { 00:16:49.113 "name": "BaseBdev2", 00:16:49.113 "uuid": "8ccd5792-6b91-496f-90c5-6c2f2df6c858", 00:16:49.113 "is_configured": true, 00:16:49.113 "data_offset": 0, 00:16:49.113 "data_size": 65536 00:16:49.113 }, 00:16:49.113 { 00:16:49.113 "name": "BaseBdev3", 00:16:49.113 "uuid": "88d9b1b2-9bbc-42b1-9540-6f0f740314b6", 00:16:49.113 "is_configured": true, 00:16:49.113 "data_offset": 0, 00:16:49.113 "data_size": 65536 00:16:49.113 } 00:16:49.113 ] 00:16:49.113 }' 00:16:49.113 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.113 15:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.372 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.372 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:49.632 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:49.632 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.632 15:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:49.891 15:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5d54adc6-4cbc-4248-abba-acb6569c9353 00:16:50.150 [2024-07-23 15:10:45.396455] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:50.150 [2024-07-23 15:10:45.396510] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007880 00:16:50.150 [2024-07-23 15:10:45.396522] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:50.150 [2024-07-23 15:10:45.396606] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002460 00:16:50.150 [2024-07-23 15:10:45.396905] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007880 00:16:50.150 [2024-07-23 15:10:45.396918] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007880 00:16:50.150 [2024-07-23 15:10:45.397125] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.150 NewBaseBdev 00:16:50.150 15:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:50.150 15:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:50.150 15:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:50.150 15:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:50.150 15:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:50.150 15:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:50.150 15:10:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:50.410 15:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:50.410 [ 00:16:50.410 { 00:16:50.410 "name": "NewBaseBdev", 00:16:50.410 "aliases": [ 00:16:50.410 "5d54adc6-4cbc-4248-abba-acb6569c9353" 00:16:50.410 ], 00:16:50.410 "product_name": "Malloc disk", 00:16:50.410 "block_size": 512, 00:16:50.410 "num_blocks": 65536, 00:16:50.410 "uuid": "5d54adc6-4cbc-4248-abba-acb6569c9353", 00:16:50.410 "assigned_rate_limits": { 00:16:50.410 "rw_ios_per_sec": 0, 00:16:50.410 "rw_mbytes_per_sec": 0, 00:16:50.410 "r_mbytes_per_sec": 0, 00:16:50.410 "w_mbytes_per_sec": 0 00:16:50.410 }, 00:16:50.410 "claimed": true, 00:16:50.410 "claim_type": "exclusive_write", 00:16:50.410 "zoned": false, 00:16:50.410 "supported_io_types": { 00:16:50.410 "read": true, 00:16:50.410 "write": true, 00:16:50.410 "unmap": true, 00:16:50.410 "flush": true, 00:16:50.410 "reset": true, 00:16:50.410 "nvme_admin": false, 00:16:50.410 "nvme_io": false, 00:16:50.410 "nvme_io_md": false, 00:16:50.410 "write_zeroes": true, 00:16:50.410 "zcopy": true, 00:16:50.410 "get_zone_info": false, 00:16:50.410 "zone_management": false, 00:16:50.410 "zone_append": false, 00:16:50.410 "compare": false, 00:16:50.410 "compare_and_write": false, 00:16:50.410 "abort": true, 00:16:50.410 "seek_hole": false, 00:16:50.410 "seek_data": false, 00:16:50.410 "copy": true, 00:16:50.410 "nvme_iov_md": false 00:16:50.410 }, 00:16:50.410 "memory_domains": [ 00:16:50.410 { 00:16:50.410 "dma_device_id": "system", 00:16:50.410 "dma_device_type": 1 00:16:50.410 }, 00:16:50.410 { 00:16:50.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.410 "dma_device_type": 2 00:16:50.410 } 00:16:50.410 ], 00:16:50.410 "driver_specific": {} 00:16:50.410 } 00:16:50.410 ] 00:16:50.410 15:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:50.410 15:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:50.410 15:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:50.410 15:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:50.410 15:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:50.410 15:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:50.410 15:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:50.410 15:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.410 15:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.410 15:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.410 15:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.410 15:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.410 15:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:50.669 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.669 "name": "Existed_Raid", 00:16:50.669 "uuid": "e8b2701e-9eaa-4486-9211-5e5bf47f2c7d", 00:16:50.669 "strip_size_kb": 64, 00:16:50.669 "state": "online", 00:16:50.669 "raid_level": "raid0", 00:16:50.669 "superblock": false, 00:16:50.669 "num_base_bdevs": 3, 00:16:50.669 "num_base_bdevs_discovered": 3, 00:16:50.669 "num_base_bdevs_operational": 3, 00:16:50.669 "base_bdevs_list": [ 00:16:50.669 { 00:16:50.669 "name": "NewBaseBdev", 00:16:50.669 "uuid": "5d54adc6-4cbc-4248-abba-acb6569c9353", 00:16:50.669 "is_configured": true, 00:16:50.669 "data_offset": 0, 00:16:50.669 "data_size": 65536 00:16:50.669 }, 00:16:50.669 { 00:16:50.669 "name": "BaseBdev2", 00:16:50.669 "uuid": "8ccd5792-6b91-496f-90c5-6c2f2df6c858", 00:16:50.669 "is_configured": true, 00:16:50.669 "data_offset": 0, 00:16:50.670 "data_size": 65536 00:16:50.670 }, 00:16:50.670 { 00:16:50.670 "name": "BaseBdev3", 00:16:50.670 "uuid": "88d9b1b2-9bbc-42b1-9540-6f0f740314b6", 00:16:50.670 "is_configured": true, 00:16:50.670 "data_offset": 0, 00:16:50.670 "data_size": 65536 00:16:50.670 } 00:16:50.670 ] 00:16:50.670 }' 00:16:50.670 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.670 15:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.928 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:50.928 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:50.929 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:50.929 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:50.929 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:50.929 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:50.929 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:50.929 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:51.187 [2024-07-23 15:10:46.521110] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.187 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:51.187 "name": "Existed_Raid", 00:16:51.187 "aliases": [ 00:16:51.187 "e8b2701e-9eaa-4486-9211-5e5bf47f2c7d" 00:16:51.187 ], 00:16:51.187 "product_name": "Raid Volume", 00:16:51.187 "block_size": 512, 00:16:51.187 "num_blocks": 196608, 00:16:51.187 "uuid": "e8b2701e-9eaa-4486-9211-5e5bf47f2c7d", 00:16:51.187 "assigned_rate_limits": { 00:16:51.187 "rw_ios_per_sec": 0, 00:16:51.187 "rw_mbytes_per_sec": 0, 00:16:51.187 "r_mbytes_per_sec": 0, 00:16:51.187 "w_mbytes_per_sec": 0 00:16:51.187 }, 00:16:51.187 "claimed": false, 00:16:51.187 "zoned": false, 00:16:51.187 "supported_io_types": { 00:16:51.187 "read": true, 00:16:51.187 "write": true, 00:16:51.188 "unmap": true, 00:16:51.188 "flush": true, 00:16:51.188 "reset": true, 00:16:51.188 "nvme_admin": false, 00:16:51.188 "nvme_io": false, 00:16:51.188 "nvme_io_md": false, 00:16:51.188 "write_zeroes": true, 00:16:51.188 "zcopy": false, 00:16:51.188 "get_zone_info": false, 
00:16:51.188 "zone_management": false, 00:16:51.188 "zone_append": false, 00:16:51.188 "compare": false, 00:16:51.188 "compare_and_write": false, 00:16:51.188 "abort": false, 00:16:51.188 "seek_hole": false, 00:16:51.188 "seek_data": false, 00:16:51.188 "copy": false, 00:16:51.188 "nvme_iov_md": false 00:16:51.188 }, 00:16:51.188 "memory_domains": [ 00:16:51.188 { 00:16:51.188 "dma_device_id": "system", 00:16:51.188 "dma_device_type": 1 00:16:51.188 }, 00:16:51.188 { 00:16:51.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.188 "dma_device_type": 2 00:16:51.188 }, 00:16:51.188 { 00:16:51.188 "dma_device_id": "system", 00:16:51.188 "dma_device_type": 1 00:16:51.188 }, 00:16:51.188 { 00:16:51.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.188 "dma_device_type": 2 00:16:51.188 }, 00:16:51.188 { 00:16:51.188 "dma_device_id": "system", 00:16:51.188 "dma_device_type": 1 00:16:51.188 }, 00:16:51.188 { 00:16:51.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.188 "dma_device_type": 2 00:16:51.188 } 00:16:51.188 ], 00:16:51.188 "driver_specific": { 00:16:51.188 "raid": { 00:16:51.188 "uuid": "e8b2701e-9eaa-4486-9211-5e5bf47f2c7d", 00:16:51.188 "strip_size_kb": 64, 00:16:51.188 "state": "online", 00:16:51.188 "raid_level": "raid0", 00:16:51.188 "superblock": false, 00:16:51.188 "num_base_bdevs": 3, 00:16:51.188 "num_base_bdevs_discovered": 3, 00:16:51.188 "num_base_bdevs_operational": 3, 00:16:51.188 "base_bdevs_list": [ 00:16:51.188 { 00:16:51.188 "name": "NewBaseBdev", 00:16:51.188 "uuid": "5d54adc6-4cbc-4248-abba-acb6569c9353", 00:16:51.188 "is_configured": true, 00:16:51.188 "data_offset": 0, 00:16:51.188 "data_size": 65536 00:16:51.188 }, 00:16:51.188 { 00:16:51.188 "name": "BaseBdev2", 00:16:51.188 "uuid": "8ccd5792-6b91-496f-90c5-6c2f2df6c858", 00:16:51.188 "is_configured": true, 00:16:51.188 "data_offset": 0, 00:16:51.188 "data_size": 65536 00:16:51.188 }, 00:16:51.188 { 00:16:51.188 "name": "BaseBdev3", 00:16:51.188 "uuid": "88d9b1b2-9bbc-42b1-9540-6f0f740314b6", 00:16:51.188 "is_configured": true, 00:16:51.188 "data_offset": 0, 00:16:51.188 "data_size": 65536 00:16:51.188 } 00:16:51.188 ] 00:16:51.188 } 00:16:51.188 } 00:16:51.188 }' 00:16:51.188 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:51.188 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:51.188 BaseBdev2 00:16:51.188 BaseBdev3' 00:16:51.188 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:51.188 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:51.188 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:51.447 "name": "NewBaseBdev", 00:16:51.447 "aliases": [ 00:16:51.447 "5d54adc6-4cbc-4248-abba-acb6569c9353" 00:16:51.447 ], 00:16:51.447 "product_name": "Malloc disk", 00:16:51.447 "block_size": 512, 00:16:51.447 "num_blocks": 65536, 00:16:51.447 "uuid": "5d54adc6-4cbc-4248-abba-acb6569c9353", 00:16:51.447 "assigned_rate_limits": { 00:16:51.447 "rw_ios_per_sec": 0, 00:16:51.447 "rw_mbytes_per_sec": 0, 00:16:51.447 "r_mbytes_per_sec": 0, 00:16:51.447 "w_mbytes_per_sec": 0 00:16:51.447 }, 00:16:51.447 "claimed": 
true, 00:16:51.447 "claim_type": "exclusive_write", 00:16:51.447 "zoned": false, 00:16:51.447 "supported_io_types": { 00:16:51.447 "read": true, 00:16:51.447 "write": true, 00:16:51.447 "unmap": true, 00:16:51.447 "flush": true, 00:16:51.447 "reset": true, 00:16:51.447 "nvme_admin": false, 00:16:51.447 "nvme_io": false, 00:16:51.447 "nvme_io_md": false, 00:16:51.447 "write_zeroes": true, 00:16:51.447 "zcopy": true, 00:16:51.447 "get_zone_info": false, 00:16:51.447 "zone_management": false, 00:16:51.447 "zone_append": false, 00:16:51.447 "compare": false, 00:16:51.447 "compare_and_write": false, 00:16:51.447 "abort": true, 00:16:51.447 "seek_hole": false, 00:16:51.447 "seek_data": false, 00:16:51.447 "copy": true, 00:16:51.447 "nvme_iov_md": false 00:16:51.447 }, 00:16:51.447 "memory_domains": [ 00:16:51.447 { 00:16:51.447 "dma_device_id": "system", 00:16:51.447 "dma_device_type": 1 00:16:51.447 }, 00:16:51.447 { 00:16:51.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.447 "dma_device_type": 2 00:16:51.447 } 00:16:51.447 ], 00:16:51.447 "driver_specific": {} 00:16:51.447 }' 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:51.447 15:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:51.705 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:51.705 "name": "BaseBdev2", 00:16:51.705 "aliases": [ 00:16:51.705 "8ccd5792-6b91-496f-90c5-6c2f2df6c858" 00:16:51.705 ], 00:16:51.705 "product_name": "Malloc disk", 00:16:51.705 "block_size": 512, 00:16:51.705 "num_blocks": 65536, 00:16:51.705 "uuid": "8ccd5792-6b91-496f-90c5-6c2f2df6c858", 00:16:51.705 "assigned_rate_limits": { 00:16:51.705 "rw_ios_per_sec": 0, 00:16:51.705 "rw_mbytes_per_sec": 0, 00:16:51.705 "r_mbytes_per_sec": 0, 00:16:51.705 "w_mbytes_per_sec": 0 00:16:51.705 }, 00:16:51.705 "claimed": true, 00:16:51.706 "claim_type": "exclusive_write", 00:16:51.706 "zoned": false, 00:16:51.706 "supported_io_types": { 00:16:51.706 "read": true, 00:16:51.706 "write": true, 00:16:51.706 "unmap": true, 
00:16:51.706 "flush": true, 00:16:51.706 "reset": true, 00:16:51.706 "nvme_admin": false, 00:16:51.706 "nvme_io": false, 00:16:51.706 "nvme_io_md": false, 00:16:51.706 "write_zeroes": true, 00:16:51.706 "zcopy": true, 00:16:51.706 "get_zone_info": false, 00:16:51.706 "zone_management": false, 00:16:51.706 "zone_append": false, 00:16:51.706 "compare": false, 00:16:51.706 "compare_and_write": false, 00:16:51.706 "abort": true, 00:16:51.706 "seek_hole": false, 00:16:51.706 "seek_data": false, 00:16:51.706 "copy": true, 00:16:51.706 "nvme_iov_md": false 00:16:51.706 }, 00:16:51.706 "memory_domains": [ 00:16:51.706 { 00:16:51.706 "dma_device_id": "system", 00:16:51.706 "dma_device_type": 1 00:16:51.706 }, 00:16:51.706 { 00:16:51.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.706 "dma_device_type": 2 00:16:51.706 } 00:16:51.706 ], 00:16:51.706 "driver_specific": {} 00:16:51.706 }' 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:51.706 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:51.973 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:51.973 "name": "BaseBdev3", 00:16:51.973 "aliases": [ 00:16:51.973 "88d9b1b2-9bbc-42b1-9540-6f0f740314b6" 00:16:51.973 ], 00:16:51.973 "product_name": "Malloc disk", 00:16:51.973 "block_size": 512, 00:16:51.973 "num_blocks": 65536, 00:16:51.973 "uuid": "88d9b1b2-9bbc-42b1-9540-6f0f740314b6", 00:16:51.973 "assigned_rate_limits": { 00:16:51.973 "rw_ios_per_sec": 0, 00:16:51.973 "rw_mbytes_per_sec": 0, 00:16:51.973 "r_mbytes_per_sec": 0, 00:16:51.973 "w_mbytes_per_sec": 0 00:16:51.973 }, 00:16:51.973 "claimed": true, 00:16:51.973 "claim_type": "exclusive_write", 00:16:51.973 "zoned": false, 00:16:51.973 "supported_io_types": { 00:16:51.973 "read": true, 00:16:51.973 "write": true, 00:16:51.973 "unmap": true, 00:16:51.973 "flush": true, 00:16:51.973 "reset": true, 00:16:51.973 "nvme_admin": false, 00:16:51.973 "nvme_io": false, 00:16:51.973 "nvme_io_md": false, 00:16:51.973 "write_zeroes": true, 
00:16:51.973 "zcopy": true, 00:16:51.973 "get_zone_info": false, 00:16:51.973 "zone_management": false, 00:16:51.973 "zone_append": false, 00:16:51.973 "compare": false, 00:16:51.973 "compare_and_write": false, 00:16:51.973 "abort": true, 00:16:51.973 "seek_hole": false, 00:16:51.973 "seek_data": false, 00:16:51.973 "copy": true, 00:16:51.973 "nvme_iov_md": false 00:16:51.973 }, 00:16:51.973 "memory_domains": [ 00:16:51.973 { 00:16:51.973 "dma_device_id": "system", 00:16:51.973 "dma_device_type": 1 00:16:51.973 }, 00:16:51.973 { 00:16:51.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.973 "dma_device_type": 2 00:16:51.973 } 00:16:51.973 ], 00:16:51.973 "driver_specific": {} 00:16:51.973 }' 00:16:51.973 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.973 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.266 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:52.266 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.266 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.266 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:52.266 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.266 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.266 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:52.266 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.266 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.266 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:52.266 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:52.524 [2024-07-23 15:10:47.713098] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.524 [2024-07-23 15:10:47.713141] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.524 [2024-07-23 15:10:47.713243] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.524 [2024-07-23 15:10:47.713327] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.524 [2024-07-23 15:10:47.713344] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007880 name Existed_Raid, state offline 00:16:52.524 15:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 91311 00:16:52.524 15:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 91311 ']' 00:16:52.524 15:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 91311 00:16:52.524 15:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:52.524 15:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:52.524 15:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91311 00:16:52.524 killing process with pid 91311 00:16:52.524 15:10:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:52.524 15:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:52.524 15:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91311' 00:16:52.524 15:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 91311 00:16:52.524 [2024-07-23 15:10:47.773052] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.524 15:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 91311 00:16:52.524 [2024-07-23 15:10:47.808510] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.783 15:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:52.783 00:16:52.783 real 0m21.428s 00:16:52.783 user 0m37.466s 00:16:52.783 sys 0m4.542s 00:16:52.783 15:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:52.783 ************************************ 00:16:52.783 END TEST raid_state_function_test 00:16:52.783 ************************************ 00:16:52.783 15:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.783 15:10:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:52.783 15:10:48 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:16:52.783 15:10:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:52.783 15:10:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:52.784 15:10:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.784 ************************************ 00:16:52.784 START TEST raid_state_function_test_sb 00:16:52.784 ************************************ 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=92174 00:16:52.784 Process raid pid: 92174 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 92174' 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 92174 /var/tmp/spdk-raid.sock 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 92174 ']' 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.784 15:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.784 [2024-07-23 15:10:48.204254] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:16:52.784 [2024-07-23 15:10:48.204437] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.042 [2024-07-23 15:10:48.355912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.042 [2024-07-23 15:10:48.405063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.042 [2024-07-23 15:10:48.450833] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:53.977 [2024-07-23 15:10:49.349289] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.977 [2024-07-23 15:10:49.349372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.977 [2024-07-23 15:10:49.349384] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.977 [2024-07-23 15:10:49.349398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.977 [2024-07-23 15:10:49.349409] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:53.977 [2024-07-23 15:10:49.349423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.977 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.235 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.235 "name": "Existed_Raid", 00:16:54.235 "uuid": 
"c4532dfa-f001-414e-a5c6-4ae2665d18e2", 00:16:54.235 "strip_size_kb": 64, 00:16:54.235 "state": "configuring", 00:16:54.235 "raid_level": "raid0", 00:16:54.235 "superblock": true, 00:16:54.235 "num_base_bdevs": 3, 00:16:54.235 "num_base_bdevs_discovered": 0, 00:16:54.235 "num_base_bdevs_operational": 3, 00:16:54.235 "base_bdevs_list": [ 00:16:54.235 { 00:16:54.235 "name": "BaseBdev1", 00:16:54.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.235 "is_configured": false, 00:16:54.235 "data_offset": 0, 00:16:54.235 "data_size": 0 00:16:54.235 }, 00:16:54.235 { 00:16:54.235 "name": "BaseBdev2", 00:16:54.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.235 "is_configured": false, 00:16:54.235 "data_offset": 0, 00:16:54.235 "data_size": 0 00:16:54.235 }, 00:16:54.235 { 00:16:54.235 "name": "BaseBdev3", 00:16:54.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.235 "is_configured": false, 00:16:54.235 "data_offset": 0, 00:16:54.235 "data_size": 0 00:16:54.235 } 00:16:54.235 ] 00:16:54.235 }' 00:16:54.235 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.235 15:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.494 15:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:54.751 [2024-07-23 15:10:50.169342] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.751 [2024-07-23 15:10:50.169407] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:16:55.010 15:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:55.010 [2024-07-23 15:10:50.409443] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:55.010 [2024-07-23 15:10:50.409527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:55.010 [2024-07-23 15:10:50.409538] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.010 [2024-07-23 15:10:50.409564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.010 [2024-07-23 15:10:50.409572] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:55.010 [2024-07-23 15:10:50.409585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:55.010 15:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:55.269 [2024-07-23 15:10:50.659252] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.269 BaseBdev1 00:16:55.269 15:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:55.269 15:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:55.269 15:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:55.269 15:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:16:55.269 15:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:55.269 15:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:55.269 15:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:55.526 15:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:55.785 [ 00:16:55.785 { 00:16:55.785 "name": "BaseBdev1", 00:16:55.785 "aliases": [ 00:16:55.785 "a0e05cb3-f37b-469a-a614-054f7bd84cc2" 00:16:55.785 ], 00:16:55.785 "product_name": "Malloc disk", 00:16:55.785 "block_size": 512, 00:16:55.785 "num_blocks": 65536, 00:16:55.785 "uuid": "a0e05cb3-f37b-469a-a614-054f7bd84cc2", 00:16:55.785 "assigned_rate_limits": { 00:16:55.785 "rw_ios_per_sec": 0, 00:16:55.785 "rw_mbytes_per_sec": 0, 00:16:55.785 "r_mbytes_per_sec": 0, 00:16:55.785 "w_mbytes_per_sec": 0 00:16:55.785 }, 00:16:55.785 "claimed": true, 00:16:55.785 "claim_type": "exclusive_write", 00:16:55.785 "zoned": false, 00:16:55.785 "supported_io_types": { 00:16:55.785 "read": true, 00:16:55.785 "write": true, 00:16:55.785 "unmap": true, 00:16:55.785 "flush": true, 00:16:55.785 "reset": true, 00:16:55.785 "nvme_admin": false, 00:16:55.785 "nvme_io": false, 00:16:55.785 "nvme_io_md": false, 00:16:55.785 "write_zeroes": true, 00:16:55.785 "zcopy": true, 00:16:55.785 "get_zone_info": false, 00:16:55.785 "zone_management": false, 00:16:55.785 "zone_append": false, 00:16:55.785 "compare": false, 00:16:55.785 "compare_and_write": false, 00:16:55.785 "abort": true, 00:16:55.785 "seek_hole": false, 00:16:55.785 "seek_data": false, 00:16:55.785 "copy": true, 00:16:55.785 "nvme_iov_md": false 00:16:55.785 }, 00:16:55.785 "memory_domains": [ 00:16:55.785 { 00:16:55.785 "dma_device_id": "system", 00:16:55.785 "dma_device_type": 1 00:16:55.785 }, 00:16:55.785 { 00:16:55.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.785 "dma_device_type": 2 00:16:55.785 } 00:16:55.785 ], 00:16:55.785 "driver_specific": {} 00:16:55.785 } 00:16:55.785 ] 00:16:55.785 15:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:55.785 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:55.785 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.785 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:55.785 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:55.785 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:55.785 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:55.785 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.785 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.785 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.785 15:10:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.785 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.785 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.044 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.044 "name": "Existed_Raid", 00:16:56.044 "uuid": "091bfc38-c2ef-49ce-b50a-4db897287f3b", 00:16:56.044 "strip_size_kb": 64, 00:16:56.044 "state": "configuring", 00:16:56.044 "raid_level": "raid0", 00:16:56.044 "superblock": true, 00:16:56.044 "num_base_bdevs": 3, 00:16:56.044 "num_base_bdevs_discovered": 1, 00:16:56.044 "num_base_bdevs_operational": 3, 00:16:56.044 "base_bdevs_list": [ 00:16:56.044 { 00:16:56.044 "name": "BaseBdev1", 00:16:56.044 "uuid": "a0e05cb3-f37b-469a-a614-054f7bd84cc2", 00:16:56.044 "is_configured": true, 00:16:56.044 "data_offset": 2048, 00:16:56.044 "data_size": 63488 00:16:56.044 }, 00:16:56.044 { 00:16:56.044 "name": "BaseBdev2", 00:16:56.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.044 "is_configured": false, 00:16:56.044 "data_offset": 0, 00:16:56.044 "data_size": 0 00:16:56.044 }, 00:16:56.044 { 00:16:56.044 "name": "BaseBdev3", 00:16:56.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.044 "is_configured": false, 00:16:56.044 "data_offset": 0, 00:16:56.044 "data_size": 0 00:16:56.044 } 00:16:56.044 ] 00:16:56.044 }' 00:16:56.044 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.044 15:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.303 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:56.303 [2024-07-23 15:10:51.691570] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.303 [2024-07-23 15:10:51.691636] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:16:56.303 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:56.561 [2024-07-23 15:10:51.859698] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.561 [2024-07-23 15:10:51.861916] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.561 [2024-07-23 15:10:51.861967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.561 [2024-07-23 15:10:51.861978] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:56.561 [2024-07-23 15:10:51.862015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:56.561 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:56.561 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:56.561 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:56.561 15:10:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:56.561 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:56.561 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:56.561 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:56.561 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:56.561 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:56.561 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:56.561 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:56.561 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:56.561 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.561 15:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.821 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.821 "name": "Existed_Raid", 00:16:56.821 "uuid": "3bc2e2a6-0925-4622-8c03-57974303c927", 00:16:56.821 "strip_size_kb": 64, 00:16:56.821 "state": "configuring", 00:16:56.821 "raid_level": "raid0", 00:16:56.821 "superblock": true, 00:16:56.821 "num_base_bdevs": 3, 00:16:56.821 "num_base_bdevs_discovered": 1, 00:16:56.821 "num_base_bdevs_operational": 3, 00:16:56.821 "base_bdevs_list": [ 00:16:56.821 { 00:16:56.821 "name": "BaseBdev1", 00:16:56.821 "uuid": "a0e05cb3-f37b-469a-a614-054f7bd84cc2", 00:16:56.821 "is_configured": true, 00:16:56.821 "data_offset": 2048, 00:16:56.821 "data_size": 63488 00:16:56.821 }, 00:16:56.821 { 00:16:56.821 "name": "BaseBdev2", 00:16:56.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.821 "is_configured": false, 00:16:56.821 "data_offset": 0, 00:16:56.821 "data_size": 0 00:16:56.821 }, 00:16:56.821 { 00:16:56.821 "name": "BaseBdev3", 00:16:56.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.821 "is_configured": false, 00:16:56.821 "data_offset": 0, 00:16:56.821 "data_size": 0 00:16:56.821 } 00:16:56.821 ] 00:16:56.821 }' 00:16:56.821 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.821 15:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.080 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:57.080 [2024-07-23 15:10:52.506441] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.080 BaseBdev2 00:16:57.339 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:57.339 15:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:57.339 15:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:57.339 15:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local i 00:16:57.339 15:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:57.339 15:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:57.339 15:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:57.339 15:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:57.599 [ 00:16:57.599 { 00:16:57.599 "name": "BaseBdev2", 00:16:57.599 "aliases": [ 00:16:57.599 "0ab4119d-f7cb-4877-beb8-4b74eae07341" 00:16:57.599 ], 00:16:57.599 "product_name": "Malloc disk", 00:16:57.599 "block_size": 512, 00:16:57.599 "num_blocks": 65536, 00:16:57.599 "uuid": "0ab4119d-f7cb-4877-beb8-4b74eae07341", 00:16:57.599 "assigned_rate_limits": { 00:16:57.599 "rw_ios_per_sec": 0, 00:16:57.599 "rw_mbytes_per_sec": 0, 00:16:57.599 "r_mbytes_per_sec": 0, 00:16:57.599 "w_mbytes_per_sec": 0 00:16:57.599 }, 00:16:57.599 "claimed": true, 00:16:57.599 "claim_type": "exclusive_write", 00:16:57.599 "zoned": false, 00:16:57.599 "supported_io_types": { 00:16:57.599 "read": true, 00:16:57.599 "write": true, 00:16:57.599 "unmap": true, 00:16:57.599 "flush": true, 00:16:57.599 "reset": true, 00:16:57.599 "nvme_admin": false, 00:16:57.599 "nvme_io": false, 00:16:57.599 "nvme_io_md": false, 00:16:57.599 "write_zeroes": true, 00:16:57.599 "zcopy": true, 00:16:57.599 "get_zone_info": false, 00:16:57.599 "zone_management": false, 00:16:57.599 "zone_append": false, 00:16:57.599 "compare": false, 00:16:57.599 "compare_and_write": false, 00:16:57.599 "abort": true, 00:16:57.599 "seek_hole": false, 00:16:57.599 "seek_data": false, 00:16:57.599 "copy": true, 00:16:57.599 "nvme_iov_md": false 00:16:57.599 }, 00:16:57.599 "memory_domains": [ 00:16:57.599 { 00:16:57.599 "dma_device_id": "system", 00:16:57.599 "dma_device_type": 1 00:16:57.599 }, 00:16:57.599 { 00:16:57.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.599 "dma_device_type": 2 00:16:57.599 } 00:16:57.599 ], 00:16:57.599 "driver_specific": {} 00:16:57.599 } 00:16:57.599 ] 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.599 15:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.858 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.858 "name": "Existed_Raid", 00:16:57.858 "uuid": "3bc2e2a6-0925-4622-8c03-57974303c927", 00:16:57.858 "strip_size_kb": 64, 00:16:57.858 "state": "configuring", 00:16:57.858 "raid_level": "raid0", 00:16:57.858 "superblock": true, 00:16:57.858 "num_base_bdevs": 3, 00:16:57.858 "num_base_bdevs_discovered": 2, 00:16:57.858 "num_base_bdevs_operational": 3, 00:16:57.858 "base_bdevs_list": [ 00:16:57.858 { 00:16:57.858 "name": "BaseBdev1", 00:16:57.858 "uuid": "a0e05cb3-f37b-469a-a614-054f7bd84cc2", 00:16:57.858 "is_configured": true, 00:16:57.858 "data_offset": 2048, 00:16:57.858 "data_size": 63488 00:16:57.858 }, 00:16:57.858 { 00:16:57.858 "name": "BaseBdev2", 00:16:57.858 "uuid": "0ab4119d-f7cb-4877-beb8-4b74eae07341", 00:16:57.858 "is_configured": true, 00:16:57.858 "data_offset": 2048, 00:16:57.858 "data_size": 63488 00:16:57.858 }, 00:16:57.858 { 00:16:57.858 "name": "BaseBdev3", 00:16:57.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.858 "is_configured": false, 00:16:57.858 "data_offset": 0, 00:16:57.858 "data_size": 0 00:16:57.858 } 00:16:57.858 ] 00:16:57.858 }' 00:16:57.858 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.858 15:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.115 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:58.372 [2024-07-23 15:10:53.622353] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:58.372 [2024-07-23 15:10:53.622567] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:16:58.372 [2024-07-23 15:10:53.622588] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:58.372 [2024-07-23 15:10:53.622692] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:16:58.372 [2024-07-23 15:10:53.623062] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:16:58.372 [2024-07-23 15:10:53.623083] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:16:58.372 [2024-07-23 15:10:53.623203] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.372 BaseBdev3 00:16:58.372 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:58.372 15:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:58.373 15:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:58.373 15:10:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:16:58.373 15:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:58.373 15:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:58.373 15:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:58.632 [ 00:16:58.632 { 00:16:58.632 "name": "BaseBdev3", 00:16:58.632 "aliases": [ 00:16:58.632 "17edc2bc-8d0e-460d-b07a-8abedfff8821" 00:16:58.632 ], 00:16:58.632 "product_name": "Malloc disk", 00:16:58.632 "block_size": 512, 00:16:58.632 "num_blocks": 65536, 00:16:58.632 "uuid": "17edc2bc-8d0e-460d-b07a-8abedfff8821", 00:16:58.632 "assigned_rate_limits": { 00:16:58.632 "rw_ios_per_sec": 0, 00:16:58.632 "rw_mbytes_per_sec": 0, 00:16:58.632 "r_mbytes_per_sec": 0, 00:16:58.632 "w_mbytes_per_sec": 0 00:16:58.632 }, 00:16:58.632 "claimed": true, 00:16:58.632 "claim_type": "exclusive_write", 00:16:58.632 "zoned": false, 00:16:58.632 "supported_io_types": { 00:16:58.632 "read": true, 00:16:58.632 "write": true, 00:16:58.632 "unmap": true, 00:16:58.632 "flush": true, 00:16:58.632 "reset": true, 00:16:58.632 "nvme_admin": false, 00:16:58.632 "nvme_io": false, 00:16:58.632 "nvme_io_md": false, 00:16:58.632 "write_zeroes": true, 00:16:58.632 "zcopy": true, 00:16:58.632 "get_zone_info": false, 00:16:58.632 "zone_management": false, 00:16:58.632 "zone_append": false, 00:16:58.632 "compare": false, 00:16:58.632 "compare_and_write": false, 00:16:58.632 "abort": true, 00:16:58.632 "seek_hole": false, 00:16:58.632 "seek_data": false, 00:16:58.632 "copy": true, 00:16:58.632 "nvme_iov_md": false 00:16:58.632 }, 00:16:58.632 "memory_domains": [ 00:16:58.632 { 00:16:58.632 "dma_device_id": "system", 00:16:58.632 "dma_device_type": 1 00:16:58.632 }, 00:16:58.632 { 00:16:58.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.632 "dma_device_type": 2 00:16:58.632 } 00:16:58.632 ], 00:16:58.632 "driver_specific": {} 00:16:58.632 } 00:16:58.632 ] 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.632 15:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.891 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.891 "name": "Existed_Raid", 00:16:58.891 "uuid": "3bc2e2a6-0925-4622-8c03-57974303c927", 00:16:58.891 "strip_size_kb": 64, 00:16:58.891 "state": "online", 00:16:58.891 "raid_level": "raid0", 00:16:58.891 "superblock": true, 00:16:58.891 "num_base_bdevs": 3, 00:16:58.891 "num_base_bdevs_discovered": 3, 00:16:58.891 "num_base_bdevs_operational": 3, 00:16:58.891 "base_bdevs_list": [ 00:16:58.891 { 00:16:58.891 "name": "BaseBdev1", 00:16:58.891 "uuid": "a0e05cb3-f37b-469a-a614-054f7bd84cc2", 00:16:58.891 "is_configured": true, 00:16:58.891 "data_offset": 2048, 00:16:58.891 "data_size": 63488 00:16:58.891 }, 00:16:58.891 { 00:16:58.891 "name": "BaseBdev2", 00:16:58.891 "uuid": "0ab4119d-f7cb-4877-beb8-4b74eae07341", 00:16:58.891 "is_configured": true, 00:16:58.891 "data_offset": 2048, 00:16:58.891 "data_size": 63488 00:16:58.891 }, 00:16:58.891 { 00:16:58.891 "name": "BaseBdev3", 00:16:58.891 "uuid": "17edc2bc-8d0e-460d-b07a-8abedfff8821", 00:16:58.891 "is_configured": true, 00:16:58.891 "data_offset": 2048, 00:16:58.891 "data_size": 63488 00:16:58.891 } 00:16:58.891 ] 00:16:58.891 }' 00:16:58.891 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.891 15:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.149 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:59.149 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:59.149 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:59.149 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:59.149 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:59.149 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:59.149 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:59.149 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:59.408 [2024-07-23 15:10:54.779032] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.408 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:59.408 "name": "Existed_Raid", 00:16:59.408 "aliases": [ 00:16:59.408 "3bc2e2a6-0925-4622-8c03-57974303c927" 00:16:59.408 ], 00:16:59.408 "product_name": "Raid Volume", 00:16:59.408 "block_size": 512, 00:16:59.408 "num_blocks": 190464, 00:16:59.408 "uuid": "3bc2e2a6-0925-4622-8c03-57974303c927", 00:16:59.408 
"assigned_rate_limits": { 00:16:59.408 "rw_ios_per_sec": 0, 00:16:59.408 "rw_mbytes_per_sec": 0, 00:16:59.408 "r_mbytes_per_sec": 0, 00:16:59.408 "w_mbytes_per_sec": 0 00:16:59.408 }, 00:16:59.408 "claimed": false, 00:16:59.408 "zoned": false, 00:16:59.408 "supported_io_types": { 00:16:59.408 "read": true, 00:16:59.408 "write": true, 00:16:59.408 "unmap": true, 00:16:59.408 "flush": true, 00:16:59.408 "reset": true, 00:16:59.408 "nvme_admin": false, 00:16:59.408 "nvme_io": false, 00:16:59.408 "nvme_io_md": false, 00:16:59.408 "write_zeroes": true, 00:16:59.408 "zcopy": false, 00:16:59.408 "get_zone_info": false, 00:16:59.408 "zone_management": false, 00:16:59.408 "zone_append": false, 00:16:59.408 "compare": false, 00:16:59.408 "compare_and_write": false, 00:16:59.408 "abort": false, 00:16:59.408 "seek_hole": false, 00:16:59.408 "seek_data": false, 00:16:59.408 "copy": false, 00:16:59.408 "nvme_iov_md": false 00:16:59.408 }, 00:16:59.408 "memory_domains": [ 00:16:59.408 { 00:16:59.408 "dma_device_id": "system", 00:16:59.408 "dma_device_type": 1 00:16:59.408 }, 00:16:59.408 { 00:16:59.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.408 "dma_device_type": 2 00:16:59.408 }, 00:16:59.408 { 00:16:59.408 "dma_device_id": "system", 00:16:59.408 "dma_device_type": 1 00:16:59.408 }, 00:16:59.408 { 00:16:59.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.408 "dma_device_type": 2 00:16:59.408 }, 00:16:59.408 { 00:16:59.408 "dma_device_id": "system", 00:16:59.408 "dma_device_type": 1 00:16:59.408 }, 00:16:59.408 { 00:16:59.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.408 "dma_device_type": 2 00:16:59.408 } 00:16:59.408 ], 00:16:59.408 "driver_specific": { 00:16:59.408 "raid": { 00:16:59.408 "uuid": "3bc2e2a6-0925-4622-8c03-57974303c927", 00:16:59.408 "strip_size_kb": 64, 00:16:59.408 "state": "online", 00:16:59.408 "raid_level": "raid0", 00:16:59.408 "superblock": true, 00:16:59.408 "num_base_bdevs": 3, 00:16:59.408 "num_base_bdevs_discovered": 3, 00:16:59.408 "num_base_bdevs_operational": 3, 00:16:59.408 "base_bdevs_list": [ 00:16:59.408 { 00:16:59.408 "name": "BaseBdev1", 00:16:59.408 "uuid": "a0e05cb3-f37b-469a-a614-054f7bd84cc2", 00:16:59.408 "is_configured": true, 00:16:59.408 "data_offset": 2048, 00:16:59.408 "data_size": 63488 00:16:59.408 }, 00:16:59.408 { 00:16:59.408 "name": "BaseBdev2", 00:16:59.408 "uuid": "0ab4119d-f7cb-4877-beb8-4b74eae07341", 00:16:59.408 "is_configured": true, 00:16:59.408 "data_offset": 2048, 00:16:59.408 "data_size": 63488 00:16:59.408 }, 00:16:59.408 { 00:16:59.408 "name": "BaseBdev3", 00:16:59.408 "uuid": "17edc2bc-8d0e-460d-b07a-8abedfff8821", 00:16:59.408 "is_configured": true, 00:16:59.408 "data_offset": 2048, 00:16:59.408 "data_size": 63488 00:16:59.408 } 00:16:59.408 ] 00:16:59.408 } 00:16:59.408 } 00:16:59.408 }' 00:16:59.408 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.408 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:59.408 BaseBdev2 00:16:59.408 BaseBdev3' 00:16:59.408 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.408 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:59.408 15:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev1 00:16:59.666 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:59.666 "name": "BaseBdev1", 00:16:59.666 "aliases": [ 00:16:59.666 "a0e05cb3-f37b-469a-a614-054f7bd84cc2" 00:16:59.666 ], 00:16:59.666 "product_name": "Malloc disk", 00:16:59.666 "block_size": 512, 00:16:59.666 "num_blocks": 65536, 00:16:59.666 "uuid": "a0e05cb3-f37b-469a-a614-054f7bd84cc2", 00:16:59.666 "assigned_rate_limits": { 00:16:59.666 "rw_ios_per_sec": 0, 00:16:59.666 "rw_mbytes_per_sec": 0, 00:16:59.666 "r_mbytes_per_sec": 0, 00:16:59.666 "w_mbytes_per_sec": 0 00:16:59.666 }, 00:16:59.666 "claimed": true, 00:16:59.666 "claim_type": "exclusive_write", 00:16:59.666 "zoned": false, 00:16:59.666 "supported_io_types": { 00:16:59.666 "read": true, 00:16:59.666 "write": true, 00:16:59.666 "unmap": true, 00:16:59.666 "flush": true, 00:16:59.666 "reset": true, 00:16:59.666 "nvme_admin": false, 00:16:59.666 "nvme_io": false, 00:16:59.666 "nvme_io_md": false, 00:16:59.666 "write_zeroes": true, 00:16:59.666 "zcopy": true, 00:16:59.666 "get_zone_info": false, 00:16:59.666 "zone_management": false, 00:16:59.666 "zone_append": false, 00:16:59.666 "compare": false, 00:16:59.666 "compare_and_write": false, 00:16:59.666 "abort": true, 00:16:59.666 "seek_hole": false, 00:16:59.666 "seek_data": false, 00:16:59.666 "copy": true, 00:16:59.666 "nvme_iov_md": false 00:16:59.666 }, 00:16:59.666 "memory_domains": [ 00:16:59.666 { 00:16:59.666 "dma_device_id": "system", 00:16:59.666 "dma_device_type": 1 00:16:59.666 }, 00:16:59.666 { 00:16:59.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.666 "dma_device_type": 2 00:16:59.666 } 00:16:59.666 ], 00:16:59.666 "driver_specific": {} 00:16:59.666 }' 00:16:59.666 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.666 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.924 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:59.924 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.924 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.924 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:59.924 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.924 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.924 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:59.924 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.924 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.924 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:59.924 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.924 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:59.924 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.183 "name": "BaseBdev2", 
00:17:00.183 "aliases": [ 00:17:00.183 "0ab4119d-f7cb-4877-beb8-4b74eae07341" 00:17:00.183 ], 00:17:00.183 "product_name": "Malloc disk", 00:17:00.183 "block_size": 512, 00:17:00.183 "num_blocks": 65536, 00:17:00.183 "uuid": "0ab4119d-f7cb-4877-beb8-4b74eae07341", 00:17:00.183 "assigned_rate_limits": { 00:17:00.183 "rw_ios_per_sec": 0, 00:17:00.183 "rw_mbytes_per_sec": 0, 00:17:00.183 "r_mbytes_per_sec": 0, 00:17:00.183 "w_mbytes_per_sec": 0 00:17:00.183 }, 00:17:00.183 "claimed": true, 00:17:00.183 "claim_type": "exclusive_write", 00:17:00.183 "zoned": false, 00:17:00.183 "supported_io_types": { 00:17:00.183 "read": true, 00:17:00.183 "write": true, 00:17:00.183 "unmap": true, 00:17:00.183 "flush": true, 00:17:00.183 "reset": true, 00:17:00.183 "nvme_admin": false, 00:17:00.183 "nvme_io": false, 00:17:00.183 "nvme_io_md": false, 00:17:00.183 "write_zeroes": true, 00:17:00.183 "zcopy": true, 00:17:00.183 "get_zone_info": false, 00:17:00.183 "zone_management": false, 00:17:00.183 "zone_append": false, 00:17:00.183 "compare": false, 00:17:00.183 "compare_and_write": false, 00:17:00.183 "abort": true, 00:17:00.183 "seek_hole": false, 00:17:00.183 "seek_data": false, 00:17:00.183 "copy": true, 00:17:00.183 "nvme_iov_md": false 00:17:00.183 }, 00:17:00.183 "memory_domains": [ 00:17:00.183 { 00:17:00.183 "dma_device_id": "system", 00:17:00.183 "dma_device_type": 1 00:17:00.183 }, 00:17:00.183 { 00:17:00.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.183 "dma_device_type": 2 00:17:00.183 } 00:17:00.183 ], 00:17:00.183 "driver_specific": {} 00:17:00.183 }' 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:00.183 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.442 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.442 "name": "BaseBdev3", 00:17:00.442 "aliases": [ 00:17:00.442 "17edc2bc-8d0e-460d-b07a-8abedfff8821" 00:17:00.442 ], 00:17:00.442 "product_name": "Malloc disk", 00:17:00.442 
"block_size": 512, 00:17:00.442 "num_blocks": 65536, 00:17:00.442 "uuid": "17edc2bc-8d0e-460d-b07a-8abedfff8821", 00:17:00.442 "assigned_rate_limits": { 00:17:00.442 "rw_ios_per_sec": 0, 00:17:00.442 "rw_mbytes_per_sec": 0, 00:17:00.442 "r_mbytes_per_sec": 0, 00:17:00.442 "w_mbytes_per_sec": 0 00:17:00.442 }, 00:17:00.442 "claimed": true, 00:17:00.443 "claim_type": "exclusive_write", 00:17:00.443 "zoned": false, 00:17:00.443 "supported_io_types": { 00:17:00.443 "read": true, 00:17:00.443 "write": true, 00:17:00.443 "unmap": true, 00:17:00.443 "flush": true, 00:17:00.443 "reset": true, 00:17:00.443 "nvme_admin": false, 00:17:00.443 "nvme_io": false, 00:17:00.443 "nvme_io_md": false, 00:17:00.443 "write_zeroes": true, 00:17:00.443 "zcopy": true, 00:17:00.443 "get_zone_info": false, 00:17:00.443 "zone_management": false, 00:17:00.443 "zone_append": false, 00:17:00.443 "compare": false, 00:17:00.443 "compare_and_write": false, 00:17:00.443 "abort": true, 00:17:00.443 "seek_hole": false, 00:17:00.443 "seek_data": false, 00:17:00.443 "copy": true, 00:17:00.443 "nvme_iov_md": false 00:17:00.443 }, 00:17:00.443 "memory_domains": [ 00:17:00.443 { 00:17:00.443 "dma_device_id": "system", 00:17:00.443 "dma_device_type": 1 00:17:00.443 }, 00:17:00.443 { 00:17:00.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.443 "dma_device_type": 2 00:17:00.443 } 00:17:00.443 ], 00:17:00.443 "driver_specific": {} 00:17:00.443 }' 00:17:00.443 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.443 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.443 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.443 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.443 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.443 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.443 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.443 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.443 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:00.443 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.443 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.443 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.443 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:00.702 [2024-07-23 15:10:55.959160] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.702 [2024-07-23 15:10:55.959200] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.702 [2024-07-23 15:10:55.959295] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 
-- # case $1 in 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.702 15:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.960 15:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:00.960 "name": "Existed_Raid", 00:17:00.960 "uuid": "3bc2e2a6-0925-4622-8c03-57974303c927", 00:17:00.960 "strip_size_kb": 64, 00:17:00.960 "state": "offline", 00:17:00.960 "raid_level": "raid0", 00:17:00.960 "superblock": true, 00:17:00.960 "num_base_bdevs": 3, 00:17:00.960 "num_base_bdevs_discovered": 2, 00:17:00.960 "num_base_bdevs_operational": 2, 00:17:00.960 "base_bdevs_list": [ 00:17:00.960 { 00:17:00.960 "name": null, 00:17:00.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.960 "is_configured": false, 00:17:00.960 "data_offset": 2048, 00:17:00.960 "data_size": 63488 00:17:00.960 }, 00:17:00.960 { 00:17:00.960 "name": "BaseBdev2", 00:17:00.960 "uuid": "0ab4119d-f7cb-4877-beb8-4b74eae07341", 00:17:00.960 "is_configured": true, 00:17:00.960 "data_offset": 2048, 00:17:00.960 "data_size": 63488 00:17:00.960 }, 00:17:00.960 { 00:17:00.960 "name": "BaseBdev3", 00:17:00.960 "uuid": "17edc2bc-8d0e-460d-b07a-8abedfff8821", 00:17:00.960 "is_configured": true, 00:17:00.960 "data_offset": 2048, 00:17:00.960 "data_size": 63488 00:17:00.960 } 00:17:00.960 ] 00:17:00.960 }' 00:17:00.960 15:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:00.960 15:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.218 15:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:01.218 15:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:01.218 15:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:17:01.218 15:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:01.476 15:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:01.476 15:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:01.476 15:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:01.734 [2024-07-23 15:10:57.028034] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:01.734 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:01.734 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:01.734 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:01.734 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.991 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:01.991 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:01.991 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:02.249 [2024-07-23 15:10:57.516663] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:02.249 [2024-07-23 15:10:57.516743] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:17:02.249 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:02.249 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:02.249 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.249 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:02.507 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:02.507 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:02.507 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:02.507 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:02.507 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:02.507 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:02.765 BaseBdev2 00:17:02.765 15:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:02.765 15:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:02.765 15:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:02.765 15:10:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:17:02.765 15:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:02.765 15:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:02.765 15:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:03.024 15:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:03.024 [ 00:17:03.024 { 00:17:03.024 "name": "BaseBdev2", 00:17:03.024 "aliases": [ 00:17:03.024 "6fa421a1-28c0-4d3b-afbd-3bb6ef2baeb8" 00:17:03.024 ], 00:17:03.024 "product_name": "Malloc disk", 00:17:03.024 "block_size": 512, 00:17:03.024 "num_blocks": 65536, 00:17:03.024 "uuid": "6fa421a1-28c0-4d3b-afbd-3bb6ef2baeb8", 00:17:03.024 "assigned_rate_limits": { 00:17:03.024 "rw_ios_per_sec": 0, 00:17:03.024 "rw_mbytes_per_sec": 0, 00:17:03.024 "r_mbytes_per_sec": 0, 00:17:03.024 "w_mbytes_per_sec": 0 00:17:03.024 }, 00:17:03.024 "claimed": false, 00:17:03.024 "zoned": false, 00:17:03.024 "supported_io_types": { 00:17:03.024 "read": true, 00:17:03.024 "write": true, 00:17:03.024 "unmap": true, 00:17:03.024 "flush": true, 00:17:03.024 "reset": true, 00:17:03.024 "nvme_admin": false, 00:17:03.024 "nvme_io": false, 00:17:03.024 "nvme_io_md": false, 00:17:03.024 "write_zeroes": true, 00:17:03.024 "zcopy": true, 00:17:03.024 "get_zone_info": false, 00:17:03.024 "zone_management": false, 00:17:03.024 "zone_append": false, 00:17:03.024 "compare": false, 00:17:03.024 "compare_and_write": false, 00:17:03.024 "abort": true, 00:17:03.024 "seek_hole": false, 00:17:03.024 "seek_data": false, 00:17:03.024 "copy": true, 00:17:03.024 "nvme_iov_md": false 00:17:03.024 }, 00:17:03.024 "memory_domains": [ 00:17:03.024 { 00:17:03.024 "dma_device_id": "system", 00:17:03.024 "dma_device_type": 1 00:17:03.024 }, 00:17:03.024 { 00:17:03.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.024 "dma_device_type": 2 00:17:03.024 } 00:17:03.024 ], 00:17:03.024 "driver_specific": {} 00:17:03.024 } 00:17:03.024 ] 00:17:03.024 15:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:03.024 15:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:03.024 15:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:03.024 15:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:03.283 BaseBdev3 00:17:03.283 15:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:03.283 15:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:03.283 15:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:03.283 15:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:03.283 15:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:03.283 15:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:03.283 15:10:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:03.542 15:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:03.801 [ 00:17:03.801 { 00:17:03.801 "name": "BaseBdev3", 00:17:03.801 "aliases": [ 00:17:03.801 "1e2a0c94-25b6-4112-8c80-98dbc47da4b1" 00:17:03.801 ], 00:17:03.801 "product_name": "Malloc disk", 00:17:03.801 "block_size": 512, 00:17:03.801 "num_blocks": 65536, 00:17:03.801 "uuid": "1e2a0c94-25b6-4112-8c80-98dbc47da4b1", 00:17:03.801 "assigned_rate_limits": { 00:17:03.801 "rw_ios_per_sec": 0, 00:17:03.801 "rw_mbytes_per_sec": 0, 00:17:03.801 "r_mbytes_per_sec": 0, 00:17:03.801 "w_mbytes_per_sec": 0 00:17:03.801 }, 00:17:03.801 "claimed": false, 00:17:03.801 "zoned": false, 00:17:03.801 "supported_io_types": { 00:17:03.801 "read": true, 00:17:03.801 "write": true, 00:17:03.801 "unmap": true, 00:17:03.801 "flush": true, 00:17:03.801 "reset": true, 00:17:03.801 "nvme_admin": false, 00:17:03.801 "nvme_io": false, 00:17:03.801 "nvme_io_md": false, 00:17:03.801 "write_zeroes": true, 00:17:03.801 "zcopy": true, 00:17:03.801 "get_zone_info": false, 00:17:03.801 "zone_management": false, 00:17:03.801 "zone_append": false, 00:17:03.801 "compare": false, 00:17:03.801 "compare_and_write": false, 00:17:03.801 "abort": true, 00:17:03.801 "seek_hole": false, 00:17:03.801 "seek_data": false, 00:17:03.801 "copy": true, 00:17:03.801 "nvme_iov_md": false 00:17:03.801 }, 00:17:03.801 "memory_domains": [ 00:17:03.801 { 00:17:03.801 "dma_device_id": "system", 00:17:03.801 "dma_device_type": 1 00:17:03.801 }, 00:17:03.801 { 00:17:03.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.801 "dma_device_type": 2 00:17:03.801 } 00:17:03.801 ], 00:17:03.801 "driver_specific": {} 00:17:03.801 } 00:17:03.801 ] 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:03.801 [2024-07-23 15:10:59.165183] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:03.801 [2024-07-23 15:10:59.165447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:03.801 [2024-07-23 15:10:59.165505] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.801 [2024-07-23 15:10:59.167617] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.801 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.061 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:04.061 "name": "Existed_Raid", 00:17:04.061 "uuid": "bb610af0-f81c-4632-b0e2-287d52fa54c8", 00:17:04.061 "strip_size_kb": 64, 00:17:04.061 "state": "configuring", 00:17:04.061 "raid_level": "raid0", 00:17:04.061 "superblock": true, 00:17:04.061 "num_base_bdevs": 3, 00:17:04.061 "num_base_bdevs_discovered": 2, 00:17:04.061 "num_base_bdevs_operational": 3, 00:17:04.061 "base_bdevs_list": [ 00:17:04.061 { 00:17:04.061 "name": "BaseBdev1", 00:17:04.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.061 "is_configured": false, 00:17:04.061 "data_offset": 0, 00:17:04.061 "data_size": 0 00:17:04.061 }, 00:17:04.061 { 00:17:04.061 "name": "BaseBdev2", 00:17:04.061 "uuid": "6fa421a1-28c0-4d3b-afbd-3bb6ef2baeb8", 00:17:04.061 "is_configured": true, 00:17:04.061 "data_offset": 2048, 00:17:04.061 "data_size": 63488 00:17:04.061 }, 00:17:04.061 { 00:17:04.061 "name": "BaseBdev3", 00:17:04.061 "uuid": "1e2a0c94-25b6-4112-8c80-98dbc47da4b1", 00:17:04.061 "is_configured": true, 00:17:04.061 "data_offset": 2048, 00:17:04.061 "data_size": 63488 00:17:04.061 } 00:17:04.061 ] 00:17:04.061 }' 00:17:04.061 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:04.061 15:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.320 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:04.580 [2024-07-23 15:10:59.853305] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.580 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:04.580 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:04.580 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:04.580 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:04.580 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:04.580 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:04.580 15:10:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:04.580 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:04.580 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:04.580 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:04.580 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.580 15:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.839 15:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:04.839 "name": "Existed_Raid", 00:17:04.839 "uuid": "bb610af0-f81c-4632-b0e2-287d52fa54c8", 00:17:04.839 "strip_size_kb": 64, 00:17:04.839 "state": "configuring", 00:17:04.839 "raid_level": "raid0", 00:17:04.839 "superblock": true, 00:17:04.839 "num_base_bdevs": 3, 00:17:04.839 "num_base_bdevs_discovered": 1, 00:17:04.839 "num_base_bdevs_operational": 3, 00:17:04.839 "base_bdevs_list": [ 00:17:04.839 { 00:17:04.839 "name": "BaseBdev1", 00:17:04.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.839 "is_configured": false, 00:17:04.839 "data_offset": 0, 00:17:04.839 "data_size": 0 00:17:04.839 }, 00:17:04.839 { 00:17:04.839 "name": null, 00:17:04.839 "uuid": "6fa421a1-28c0-4d3b-afbd-3bb6ef2baeb8", 00:17:04.839 "is_configured": false, 00:17:04.839 "data_offset": 2048, 00:17:04.839 "data_size": 63488 00:17:04.839 }, 00:17:04.839 { 00:17:04.839 "name": "BaseBdev3", 00:17:04.839 "uuid": "1e2a0c94-25b6-4112-8c80-98dbc47da4b1", 00:17:04.839 "is_configured": true, 00:17:04.839 "data_offset": 2048, 00:17:04.839 "data_size": 63488 00:17:04.839 } 00:17:04.839 ] 00:17:04.839 }' 00:17:04.839 15:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:04.839 15:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.098 15:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.098 15:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:05.358 15:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:05.358 15:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:05.358 [2024-07-23 15:11:00.749050] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.358 BaseBdev1 00:17:05.358 15:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:05.358 15:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:05.358 15:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:05.358 15:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:05.358 15:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:05.358 15:11:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:05.358 15:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:05.637 15:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:05.897 [ 00:17:05.897 { 00:17:05.897 "name": "BaseBdev1", 00:17:05.897 "aliases": [ 00:17:05.897 "458723bb-61bb-43ab-ab98-2fc62b09b798" 00:17:05.897 ], 00:17:05.897 "product_name": "Malloc disk", 00:17:05.897 "block_size": 512, 00:17:05.897 "num_blocks": 65536, 00:17:05.897 "uuid": "458723bb-61bb-43ab-ab98-2fc62b09b798", 00:17:05.897 "assigned_rate_limits": { 00:17:05.897 "rw_ios_per_sec": 0, 00:17:05.897 "rw_mbytes_per_sec": 0, 00:17:05.897 "r_mbytes_per_sec": 0, 00:17:05.897 "w_mbytes_per_sec": 0 00:17:05.897 }, 00:17:05.897 "claimed": true, 00:17:05.897 "claim_type": "exclusive_write", 00:17:05.897 "zoned": false, 00:17:05.897 "supported_io_types": { 00:17:05.897 "read": true, 00:17:05.897 "write": true, 00:17:05.897 "unmap": true, 00:17:05.897 "flush": true, 00:17:05.897 "reset": true, 00:17:05.897 "nvme_admin": false, 00:17:05.897 "nvme_io": false, 00:17:05.897 "nvme_io_md": false, 00:17:05.897 "write_zeroes": true, 00:17:05.897 "zcopy": true, 00:17:05.897 "get_zone_info": false, 00:17:05.897 "zone_management": false, 00:17:05.897 "zone_append": false, 00:17:05.897 "compare": false, 00:17:05.897 "compare_and_write": false, 00:17:05.897 "abort": true, 00:17:05.897 "seek_hole": false, 00:17:05.897 "seek_data": false, 00:17:05.897 "copy": true, 00:17:05.897 "nvme_iov_md": false 00:17:05.897 }, 00:17:05.897 "memory_domains": [ 00:17:05.897 { 00:17:05.897 "dma_device_id": "system", 00:17:05.897 "dma_device_type": 1 00:17:05.897 }, 00:17:05.897 { 00:17:05.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.897 "dma_device_type": 2 00:17:05.897 } 00:17:05.897 ], 00:17:05.897 "driver_specific": {} 00:17:05.897 } 00:17:05.897 ] 00:17:05.897 15:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:05.897 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:05.897 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:05.897 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:05.897 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:05.897 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:05.897 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:05.897 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.897 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.897 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.897 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.897 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.897 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.156 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:06.156 "name": "Existed_Raid", 00:17:06.156 "uuid": "bb610af0-f81c-4632-b0e2-287d52fa54c8", 00:17:06.156 "strip_size_kb": 64, 00:17:06.156 "state": "configuring", 00:17:06.156 "raid_level": "raid0", 00:17:06.156 "superblock": true, 00:17:06.156 "num_base_bdevs": 3, 00:17:06.156 "num_base_bdevs_discovered": 2, 00:17:06.156 "num_base_bdevs_operational": 3, 00:17:06.156 "base_bdevs_list": [ 00:17:06.156 { 00:17:06.156 "name": "BaseBdev1", 00:17:06.156 "uuid": "458723bb-61bb-43ab-ab98-2fc62b09b798", 00:17:06.156 "is_configured": true, 00:17:06.156 "data_offset": 2048, 00:17:06.156 "data_size": 63488 00:17:06.156 }, 00:17:06.156 { 00:17:06.156 "name": null, 00:17:06.156 "uuid": "6fa421a1-28c0-4d3b-afbd-3bb6ef2baeb8", 00:17:06.156 "is_configured": false, 00:17:06.156 "data_offset": 2048, 00:17:06.156 "data_size": 63488 00:17:06.156 }, 00:17:06.156 { 00:17:06.156 "name": "BaseBdev3", 00:17:06.156 "uuid": "1e2a0c94-25b6-4112-8c80-98dbc47da4b1", 00:17:06.156 "is_configured": true, 00:17:06.156 "data_offset": 2048, 00:17:06.156 "data_size": 63488 00:17:06.156 } 00:17:06.156 ] 00:17:06.156 }' 00:17:06.156 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:06.156 15:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.415 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.415 15:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:06.674 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:06.674 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:06.933 [2024-07-23 15:11:02.237527] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:06.933 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:06.933 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:06.933 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:06.933 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:06.933 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:06.933 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:06.933 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:06.933 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:06.933 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:06.933 15:11:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:17:06.933 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.933 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.193 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.193 "name": "Existed_Raid", 00:17:07.193 "uuid": "bb610af0-f81c-4632-b0e2-287d52fa54c8", 00:17:07.193 "strip_size_kb": 64, 00:17:07.193 "state": "configuring", 00:17:07.193 "raid_level": "raid0", 00:17:07.193 "superblock": true, 00:17:07.193 "num_base_bdevs": 3, 00:17:07.193 "num_base_bdevs_discovered": 1, 00:17:07.193 "num_base_bdevs_operational": 3, 00:17:07.193 "base_bdevs_list": [ 00:17:07.193 { 00:17:07.193 "name": "BaseBdev1", 00:17:07.193 "uuid": "458723bb-61bb-43ab-ab98-2fc62b09b798", 00:17:07.193 "is_configured": true, 00:17:07.193 "data_offset": 2048, 00:17:07.193 "data_size": 63488 00:17:07.193 }, 00:17:07.193 { 00:17:07.193 "name": null, 00:17:07.193 "uuid": "6fa421a1-28c0-4d3b-afbd-3bb6ef2baeb8", 00:17:07.193 "is_configured": false, 00:17:07.193 "data_offset": 2048, 00:17:07.193 "data_size": 63488 00:17:07.193 }, 00:17:07.193 { 00:17:07.193 "name": null, 00:17:07.193 "uuid": "1e2a0c94-25b6-4112-8c80-98dbc47da4b1", 00:17:07.193 "is_configured": false, 00:17:07.193 "data_offset": 2048, 00:17:07.193 "data_size": 63488 00:17:07.193 } 00:17:07.193 ] 00:17:07.193 }' 00:17:07.193 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.193 15:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.452 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.452 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:07.711 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:07.711 15:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:07.971 [2024-07-23 15:11:03.197769] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.971 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:07.971 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:07.971 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:07.971 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:07.971 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:07.971 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:07.971 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.971 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.971 15:11:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.971 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.971 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.971 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.228 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.228 "name": "Existed_Raid", 00:17:08.228 "uuid": "bb610af0-f81c-4632-b0e2-287d52fa54c8", 00:17:08.228 "strip_size_kb": 64, 00:17:08.228 "state": "configuring", 00:17:08.228 "raid_level": "raid0", 00:17:08.228 "superblock": true, 00:17:08.228 "num_base_bdevs": 3, 00:17:08.228 "num_base_bdevs_discovered": 2, 00:17:08.228 "num_base_bdevs_operational": 3, 00:17:08.228 "base_bdevs_list": [ 00:17:08.228 { 00:17:08.228 "name": "BaseBdev1", 00:17:08.228 "uuid": "458723bb-61bb-43ab-ab98-2fc62b09b798", 00:17:08.228 "is_configured": true, 00:17:08.228 "data_offset": 2048, 00:17:08.228 "data_size": 63488 00:17:08.228 }, 00:17:08.228 { 00:17:08.228 "name": null, 00:17:08.228 "uuid": "6fa421a1-28c0-4d3b-afbd-3bb6ef2baeb8", 00:17:08.228 "is_configured": false, 00:17:08.228 "data_offset": 2048, 00:17:08.228 "data_size": 63488 00:17:08.228 }, 00:17:08.228 { 00:17:08.228 "name": "BaseBdev3", 00:17:08.228 "uuid": "1e2a0c94-25b6-4112-8c80-98dbc47da4b1", 00:17:08.228 "is_configured": true, 00:17:08.228 "data_offset": 2048, 00:17:08.228 "data_size": 63488 00:17:08.228 } 00:17:08.228 ] 00:17:08.228 }' 00:17:08.228 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.228 15:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.486 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.486 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:08.745 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:08.745 15:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:08.745 [2024-07-23 15:11:04.134017] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.745 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:08.745 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:08.745 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:08.745 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:08.745 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:08.745 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:08.745 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.745 15:11:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.745 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.745 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.745 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.745 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.004 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.004 "name": "Existed_Raid", 00:17:09.004 "uuid": "bb610af0-f81c-4632-b0e2-287d52fa54c8", 00:17:09.004 "strip_size_kb": 64, 00:17:09.004 "state": "configuring", 00:17:09.004 "raid_level": "raid0", 00:17:09.004 "superblock": true, 00:17:09.004 "num_base_bdevs": 3, 00:17:09.004 "num_base_bdevs_discovered": 1, 00:17:09.004 "num_base_bdevs_operational": 3, 00:17:09.004 "base_bdevs_list": [ 00:17:09.004 { 00:17:09.004 "name": null, 00:17:09.004 "uuid": "458723bb-61bb-43ab-ab98-2fc62b09b798", 00:17:09.004 "is_configured": false, 00:17:09.004 "data_offset": 2048, 00:17:09.004 "data_size": 63488 00:17:09.004 }, 00:17:09.004 { 00:17:09.004 "name": null, 00:17:09.004 "uuid": "6fa421a1-28c0-4d3b-afbd-3bb6ef2baeb8", 00:17:09.004 "is_configured": false, 00:17:09.004 "data_offset": 2048, 00:17:09.004 "data_size": 63488 00:17:09.004 }, 00:17:09.004 { 00:17:09.004 "name": "BaseBdev3", 00:17:09.004 "uuid": "1e2a0c94-25b6-4112-8c80-98dbc47da4b1", 00:17:09.004 "is_configured": true, 00:17:09.004 "data_offset": 2048, 00:17:09.004 "data_size": 63488 00:17:09.004 } 00:17:09.004 ] 00:17:09.004 }' 00:17:09.004 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.004 15:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.572 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.572 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:09.572 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:09.572 15:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:09.831 [2024-07-23 15:11:05.194865] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.831 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:09.831 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:09.831 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:09.831 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:09.831 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:09.831 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:17:09.831 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.831 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.831 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.831 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.831 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.831 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.089 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.089 "name": "Existed_Raid", 00:17:10.089 "uuid": "bb610af0-f81c-4632-b0e2-287d52fa54c8", 00:17:10.089 "strip_size_kb": 64, 00:17:10.089 "state": "configuring", 00:17:10.089 "raid_level": "raid0", 00:17:10.089 "superblock": true, 00:17:10.089 "num_base_bdevs": 3, 00:17:10.089 "num_base_bdevs_discovered": 2, 00:17:10.089 "num_base_bdevs_operational": 3, 00:17:10.089 "base_bdevs_list": [ 00:17:10.089 { 00:17:10.089 "name": null, 00:17:10.089 "uuid": "458723bb-61bb-43ab-ab98-2fc62b09b798", 00:17:10.089 "is_configured": false, 00:17:10.089 "data_offset": 2048, 00:17:10.089 "data_size": 63488 00:17:10.089 }, 00:17:10.089 { 00:17:10.089 "name": "BaseBdev2", 00:17:10.089 "uuid": "6fa421a1-28c0-4d3b-afbd-3bb6ef2baeb8", 00:17:10.089 "is_configured": true, 00:17:10.089 "data_offset": 2048, 00:17:10.089 "data_size": 63488 00:17:10.090 }, 00:17:10.090 { 00:17:10.090 "name": "BaseBdev3", 00:17:10.090 "uuid": "1e2a0c94-25b6-4112-8c80-98dbc47da4b1", 00:17:10.090 "is_configured": true, 00:17:10.090 "data_offset": 2048, 00:17:10.090 "data_size": 63488 00:17:10.090 } 00:17:10.090 ] 00:17:10.090 }' 00:17:10.090 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.090 15:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.347 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.347 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:10.605 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:10.605 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:10.605 15:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.865 15:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 458723bb-61bb-43ab-ab98-2fc62b09b798 00:17:11.122 [2024-07-23 15:11:06.358631] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:11.122 [2024-07-23 15:11:06.358832] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007880 00:17:11.122 [2024-07-23 15:11:06.358853] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 190464, blocklen 512 00:17:11.122 [2024-07-23 15:11:06.358947] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002460 00:17:11.122 [2024-07-23 15:11:06.359241] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007880 00:17:11.122 [2024-07-23 15:11:06.359254] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007880 00:17:11.122 [2024-07-23 15:11:06.359350] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.122 NewBaseBdev 00:17:11.122 15:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:11.122 15:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:11.122 15:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:11.122 15:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:11.122 15:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:11.122 15:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:11.122 15:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:11.380 15:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:11.639 [ 00:17:11.639 { 00:17:11.639 "name": "NewBaseBdev", 00:17:11.639 "aliases": [ 00:17:11.639 "458723bb-61bb-43ab-ab98-2fc62b09b798" 00:17:11.639 ], 00:17:11.639 "product_name": "Malloc disk", 00:17:11.639 "block_size": 512, 00:17:11.639 "num_blocks": 65536, 00:17:11.639 "uuid": "458723bb-61bb-43ab-ab98-2fc62b09b798", 00:17:11.639 "assigned_rate_limits": { 00:17:11.639 "rw_ios_per_sec": 0, 00:17:11.639 "rw_mbytes_per_sec": 0, 00:17:11.639 "r_mbytes_per_sec": 0, 00:17:11.639 "w_mbytes_per_sec": 0 00:17:11.639 }, 00:17:11.639 "claimed": true, 00:17:11.639 "claim_type": "exclusive_write", 00:17:11.639 "zoned": false, 00:17:11.639 "supported_io_types": { 00:17:11.639 "read": true, 00:17:11.639 "write": true, 00:17:11.639 "unmap": true, 00:17:11.639 "flush": true, 00:17:11.639 "reset": true, 00:17:11.639 "nvme_admin": false, 00:17:11.639 "nvme_io": false, 00:17:11.639 "nvme_io_md": false, 00:17:11.639 "write_zeroes": true, 00:17:11.639 "zcopy": true, 00:17:11.639 "get_zone_info": false, 00:17:11.639 "zone_management": false, 00:17:11.639 "zone_append": false, 00:17:11.639 "compare": false, 00:17:11.639 "compare_and_write": false, 00:17:11.639 "abort": true, 00:17:11.639 "seek_hole": false, 00:17:11.639 "seek_data": false, 00:17:11.639 "copy": true, 00:17:11.639 "nvme_iov_md": false 00:17:11.639 }, 00:17:11.639 "memory_domains": [ 00:17:11.639 { 00:17:11.639 "dma_device_id": "system", 00:17:11.639 "dma_device_type": 1 00:17:11.639 }, 00:17:11.639 { 00:17:11.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.639 "dma_device_type": 2 00:17:11.639 } 00:17:11.639 ], 00:17:11.639 "driver_specific": {} 00:17:11.639 } 00:17:11.639 ] 00:17:11.639 15:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:11.639 15:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:17:11.639 15:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:11.639 15:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:11.639 15:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:11.639 15:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:11.639 15:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:11.639 15:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.639 15:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.639 15:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.639 15:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.639 15:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.639 15:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.898 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.898 "name": "Existed_Raid", 00:17:11.898 "uuid": "bb610af0-f81c-4632-b0e2-287d52fa54c8", 00:17:11.898 "strip_size_kb": 64, 00:17:11.898 "state": "online", 00:17:11.898 "raid_level": "raid0", 00:17:11.898 "superblock": true, 00:17:11.898 "num_base_bdevs": 3, 00:17:11.898 "num_base_bdevs_discovered": 3, 00:17:11.898 "num_base_bdevs_operational": 3, 00:17:11.898 "base_bdevs_list": [ 00:17:11.898 { 00:17:11.898 "name": "NewBaseBdev", 00:17:11.898 "uuid": "458723bb-61bb-43ab-ab98-2fc62b09b798", 00:17:11.898 "is_configured": true, 00:17:11.898 "data_offset": 2048, 00:17:11.898 "data_size": 63488 00:17:11.898 }, 00:17:11.898 { 00:17:11.898 "name": "BaseBdev2", 00:17:11.898 "uuid": "6fa421a1-28c0-4d3b-afbd-3bb6ef2baeb8", 00:17:11.898 "is_configured": true, 00:17:11.898 "data_offset": 2048, 00:17:11.898 "data_size": 63488 00:17:11.898 }, 00:17:11.898 { 00:17:11.898 "name": "BaseBdev3", 00:17:11.898 "uuid": "1e2a0c94-25b6-4112-8c80-98dbc47da4b1", 00:17:11.898 "is_configured": true, 00:17:11.898 "data_offset": 2048, 00:17:11.898 "data_size": 63488 00:17:11.898 } 00:17:11.898 ] 00:17:11.898 }' 00:17:11.898 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.898 15:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.190 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:12.190 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:12.190 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:12.190 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:12.190 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:12.190 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:12.190 15:11:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:12.190 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:12.190 [2024-07-23 15:11:07.527315] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.190 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:12.190 "name": "Existed_Raid", 00:17:12.190 "aliases": [ 00:17:12.190 "bb610af0-f81c-4632-b0e2-287d52fa54c8" 00:17:12.190 ], 00:17:12.190 "product_name": "Raid Volume", 00:17:12.190 "block_size": 512, 00:17:12.190 "num_blocks": 190464, 00:17:12.190 "uuid": "bb610af0-f81c-4632-b0e2-287d52fa54c8", 00:17:12.190 "assigned_rate_limits": { 00:17:12.190 "rw_ios_per_sec": 0, 00:17:12.190 "rw_mbytes_per_sec": 0, 00:17:12.191 "r_mbytes_per_sec": 0, 00:17:12.191 "w_mbytes_per_sec": 0 00:17:12.191 }, 00:17:12.191 "claimed": false, 00:17:12.191 "zoned": false, 00:17:12.191 "supported_io_types": { 00:17:12.191 "read": true, 00:17:12.191 "write": true, 00:17:12.191 "unmap": true, 00:17:12.191 "flush": true, 00:17:12.191 "reset": true, 00:17:12.191 "nvme_admin": false, 00:17:12.191 "nvme_io": false, 00:17:12.191 "nvme_io_md": false, 00:17:12.191 "write_zeroes": true, 00:17:12.191 "zcopy": false, 00:17:12.191 "get_zone_info": false, 00:17:12.191 "zone_management": false, 00:17:12.191 "zone_append": false, 00:17:12.191 "compare": false, 00:17:12.191 "compare_and_write": false, 00:17:12.191 "abort": false, 00:17:12.191 "seek_hole": false, 00:17:12.191 "seek_data": false, 00:17:12.191 "copy": false, 00:17:12.191 "nvme_iov_md": false 00:17:12.191 }, 00:17:12.191 "memory_domains": [ 00:17:12.191 { 00:17:12.191 "dma_device_id": "system", 00:17:12.191 "dma_device_type": 1 00:17:12.191 }, 00:17:12.191 { 00:17:12.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.191 "dma_device_type": 2 00:17:12.191 }, 00:17:12.191 { 00:17:12.191 "dma_device_id": "system", 00:17:12.191 "dma_device_type": 1 00:17:12.191 }, 00:17:12.191 { 00:17:12.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.191 "dma_device_type": 2 00:17:12.191 }, 00:17:12.191 { 00:17:12.191 "dma_device_id": "system", 00:17:12.191 "dma_device_type": 1 00:17:12.191 }, 00:17:12.191 { 00:17:12.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.191 "dma_device_type": 2 00:17:12.191 } 00:17:12.191 ], 00:17:12.191 "driver_specific": { 00:17:12.191 "raid": { 00:17:12.191 "uuid": "bb610af0-f81c-4632-b0e2-287d52fa54c8", 00:17:12.191 "strip_size_kb": 64, 00:17:12.191 "state": "online", 00:17:12.191 "raid_level": "raid0", 00:17:12.191 "superblock": true, 00:17:12.191 "num_base_bdevs": 3, 00:17:12.191 "num_base_bdevs_discovered": 3, 00:17:12.191 "num_base_bdevs_operational": 3, 00:17:12.191 "base_bdevs_list": [ 00:17:12.191 { 00:17:12.191 "name": "NewBaseBdev", 00:17:12.191 "uuid": "458723bb-61bb-43ab-ab98-2fc62b09b798", 00:17:12.191 "is_configured": true, 00:17:12.191 "data_offset": 2048, 00:17:12.191 "data_size": 63488 00:17:12.191 }, 00:17:12.191 { 00:17:12.191 "name": "BaseBdev2", 00:17:12.191 "uuid": "6fa421a1-28c0-4d3b-afbd-3bb6ef2baeb8", 00:17:12.191 "is_configured": true, 00:17:12.191 "data_offset": 2048, 00:17:12.191 "data_size": 63488 00:17:12.191 }, 00:17:12.191 { 00:17:12.191 "name": "BaseBdev3", 00:17:12.191 "uuid": "1e2a0c94-25b6-4112-8c80-98dbc47da4b1", 00:17:12.191 "is_configured": true, 00:17:12.191 "data_offset": 2048, 00:17:12.191 "data_size": 
63488 00:17:12.191 } 00:17:12.191 ] 00:17:12.191 } 00:17:12.191 } 00:17:12.191 }' 00:17:12.191 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:12.191 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:12.191 BaseBdev2 00:17:12.191 BaseBdev3' 00:17:12.191 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.191 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:12.191 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:12.449 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:12.449 "name": "NewBaseBdev", 00:17:12.449 "aliases": [ 00:17:12.449 "458723bb-61bb-43ab-ab98-2fc62b09b798" 00:17:12.449 ], 00:17:12.449 "product_name": "Malloc disk", 00:17:12.449 "block_size": 512, 00:17:12.449 "num_blocks": 65536, 00:17:12.449 "uuid": "458723bb-61bb-43ab-ab98-2fc62b09b798", 00:17:12.449 "assigned_rate_limits": { 00:17:12.449 "rw_ios_per_sec": 0, 00:17:12.449 "rw_mbytes_per_sec": 0, 00:17:12.449 "r_mbytes_per_sec": 0, 00:17:12.449 "w_mbytes_per_sec": 0 00:17:12.449 }, 00:17:12.449 "claimed": true, 00:17:12.449 "claim_type": "exclusive_write", 00:17:12.449 "zoned": false, 00:17:12.449 "supported_io_types": { 00:17:12.449 "read": true, 00:17:12.449 "write": true, 00:17:12.449 "unmap": true, 00:17:12.449 "flush": true, 00:17:12.449 "reset": true, 00:17:12.449 "nvme_admin": false, 00:17:12.449 "nvme_io": false, 00:17:12.449 "nvme_io_md": false, 00:17:12.449 "write_zeroes": true, 00:17:12.449 "zcopy": true, 00:17:12.449 "get_zone_info": false, 00:17:12.449 "zone_management": false, 00:17:12.449 "zone_append": false, 00:17:12.449 "compare": false, 00:17:12.449 "compare_and_write": false, 00:17:12.449 "abort": true, 00:17:12.449 "seek_hole": false, 00:17:12.449 "seek_data": false, 00:17:12.449 "copy": true, 00:17:12.449 "nvme_iov_md": false 00:17:12.449 }, 00:17:12.449 "memory_domains": [ 00:17:12.449 { 00:17:12.449 "dma_device_id": "system", 00:17:12.449 "dma_device_type": 1 00:17:12.449 }, 00:17:12.449 { 00:17:12.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.449 "dma_device_type": 2 00:17:12.449 } 00:17:12.449 ], 00:17:12.449 "driver_specific": {} 00:17:12.449 }' 00:17:12.449 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.449 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.449 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:12.449 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.449 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.708 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:12.708 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.708 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.708 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:12.708 15:11:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.708 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.708 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:12.708 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.708 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:12.708 15:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:12.708 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:12.708 "name": "BaseBdev2", 00:17:12.708 "aliases": [ 00:17:12.708 "6fa421a1-28c0-4d3b-afbd-3bb6ef2baeb8" 00:17:12.708 ], 00:17:12.708 "product_name": "Malloc disk", 00:17:12.708 "block_size": 512, 00:17:12.708 "num_blocks": 65536, 00:17:12.708 "uuid": "6fa421a1-28c0-4d3b-afbd-3bb6ef2baeb8", 00:17:12.708 "assigned_rate_limits": { 00:17:12.708 "rw_ios_per_sec": 0, 00:17:12.708 "rw_mbytes_per_sec": 0, 00:17:12.708 "r_mbytes_per_sec": 0, 00:17:12.708 "w_mbytes_per_sec": 0 00:17:12.708 }, 00:17:12.708 "claimed": true, 00:17:12.708 "claim_type": "exclusive_write", 00:17:12.708 "zoned": false, 00:17:12.708 "supported_io_types": { 00:17:12.708 "read": true, 00:17:12.708 "write": true, 00:17:12.708 "unmap": true, 00:17:12.708 "flush": true, 00:17:12.708 "reset": true, 00:17:12.708 "nvme_admin": false, 00:17:12.708 "nvme_io": false, 00:17:12.708 "nvme_io_md": false, 00:17:12.708 "write_zeroes": true, 00:17:12.708 "zcopy": true, 00:17:12.708 "get_zone_info": false, 00:17:12.708 "zone_management": false, 00:17:12.708 "zone_append": false, 00:17:12.708 "compare": false, 00:17:12.708 "compare_and_write": false, 00:17:12.708 "abort": true, 00:17:12.708 "seek_hole": false, 00:17:12.708 "seek_data": false, 00:17:12.708 "copy": true, 00:17:12.708 "nvme_iov_md": false 00:17:12.708 }, 00:17:12.708 "memory_domains": [ 00:17:12.708 { 00:17:12.708 "dma_device_id": "system", 00:17:12.708 "dma_device_type": 1 00:17:12.708 }, 00:17:12.708 { 00:17:12.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.708 "dma_device_type": 2 00:17:12.708 } 00:17:12.708 ], 00:17:12.708 "driver_specific": {} 00:17:12.708 }' 00:17:12.708 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.708 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.967 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:12.967 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.967 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.967 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:12.968 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.968 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.968 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:12.968 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.968 15:11:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.968 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:12.968 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.968 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:12.968 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:12.968 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:12.968 "name": "BaseBdev3", 00:17:12.968 "aliases": [ 00:17:12.968 "1e2a0c94-25b6-4112-8c80-98dbc47da4b1" 00:17:12.968 ], 00:17:12.968 "product_name": "Malloc disk", 00:17:12.968 "block_size": 512, 00:17:12.968 "num_blocks": 65536, 00:17:12.968 "uuid": "1e2a0c94-25b6-4112-8c80-98dbc47da4b1", 00:17:12.968 "assigned_rate_limits": { 00:17:12.968 "rw_ios_per_sec": 0, 00:17:12.968 "rw_mbytes_per_sec": 0, 00:17:12.968 "r_mbytes_per_sec": 0, 00:17:12.968 "w_mbytes_per_sec": 0 00:17:12.968 }, 00:17:12.968 "claimed": true, 00:17:12.968 "claim_type": "exclusive_write", 00:17:12.968 "zoned": false, 00:17:12.968 "supported_io_types": { 00:17:12.968 "read": true, 00:17:12.968 "write": true, 00:17:12.968 "unmap": true, 00:17:12.968 "flush": true, 00:17:12.968 "reset": true, 00:17:12.968 "nvme_admin": false, 00:17:12.968 "nvme_io": false, 00:17:12.968 "nvme_io_md": false, 00:17:12.968 "write_zeroes": true, 00:17:12.968 "zcopy": true, 00:17:12.968 "get_zone_info": false, 00:17:12.968 "zone_management": false, 00:17:12.968 "zone_append": false, 00:17:12.968 "compare": false, 00:17:12.968 "compare_and_write": false, 00:17:12.968 "abort": true, 00:17:12.968 "seek_hole": false, 00:17:12.968 "seek_data": false, 00:17:12.968 "copy": true, 00:17:12.968 "nvme_iov_md": false 00:17:12.968 }, 00:17:12.968 "memory_domains": [ 00:17:12.968 { 00:17:12.968 "dma_device_id": "system", 00:17:12.968 "dma_device_type": 1 00:17:12.968 }, 00:17:12.968 { 00:17:12.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.968 "dma_device_type": 2 00:17:12.968 } 00:17:12.968 ], 00:17:12.968 "driver_specific": {} 00:17:12.968 }' 00:17:12.968 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.227 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.227 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.227 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.227 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.227 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.227 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.227 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.227 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.227 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.227 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.227 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:17:13.227 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:13.485 [2024-07-23 15:11:08.723315] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.485 [2024-07-23 15:11:08.723537] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.485 [2024-07-23 15:11:08.723633] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.485 [2024-07-23 15:11:08.723690] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.485 [2024-07-23 15:11:08.723715] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007880 name Existed_Raid, state offline 00:17:13.485 15:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 92174 00:17:13.485 15:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 92174 ']' 00:17:13.485 15:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 92174 00:17:13.485 15:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:17:13.485 15:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.485 15:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92174 00:17:13.485 15:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:13.485 15:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:13.485 killing process with pid 92174 00:17:13.485 15:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92174' 00:17:13.485 15:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 92174 00:17:13.485 [2024-07-23 15:11:08.789099] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:13.485 15:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 92174 00:17:13.485 [2024-07-23 15:11:08.825179] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.743 15:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:13.743 00:17:13.743 real 0m20.947s 00:17:13.743 user 0m36.525s 00:17:13.743 sys 0m4.580s 00:17:13.743 ************************************ 00:17:13.743 END TEST raid_state_function_test_sb 00:17:13.743 ************************************ 00:17:13.743 15:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.743 15:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.743 15:11:09 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:13.743 15:11:09 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:17:13.743 15:11:09 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:13.743 15:11:09 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.743 15:11:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.743 ************************************ 00:17:13.743 START TEST raid_superblock_test 00:17:13.743 
************************************ 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:17:13.743 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:17:13.744 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=93033 00:17:13.744 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 93033 /var/tmp/spdk-raid.sock 00:17:13.744 15:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 93033 ']' 00:17:13.744 15:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:13.744 15:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:13.744 15:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.744 15:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:13.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:13.744 15:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.744 15:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.002 [2024-07-23 15:11:09.207253] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:17:14.002 [2024-07-23 15:11:09.207475] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93033 ] 00:17:14.002 [2024-07-23 15:11:09.356627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.002 [2024-07-23 15:11:09.405050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.261 [2024-07-23 15:11:09.450811] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.829 15:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.829 15:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:17:14.829 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:14.829 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:14.829 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:14.829 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:14.829 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:14.829 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:14.829 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:14.829 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:14.829 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:15.087 malloc1 00:17:15.088 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:15.346 [2024-07-23 15:11:10.623295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:15.346 [2024-07-23 15:11:10.623574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.346 [2024-07-23 15:11:10.623639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:17:15.346 [2024-07-23 15:11:10.623735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.346 [2024-07-23 15:11:10.626415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.346 [2024-07-23 15:11:10.626577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:15.346 pt1 00:17:15.346 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:15.346 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:15.346 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:15.346 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:15.346 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:15.346 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:17:15.346 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:15.346 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:15.346 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:15.605 malloc2 00:17:15.605 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:15.605 [2024-07-23 15:11:10.981092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:15.605 [2024-07-23 15:11:10.981373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.605 [2024-07-23 15:11:10.981406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:17:15.605 [2024-07-23 15:11:10.981426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.605 [2024-07-23 15:11:10.983961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.605 [2024-07-23 15:11:10.984005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.605 pt2 00:17:15.605 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:15.605 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:15.605 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:17:15.605 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:17:15.605 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:15.605 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:15.605 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:15.605 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:15.605 15:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:15.864 malloc3 00:17:15.864 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:16.124 [2024-07-23 15:11:11.342750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:16.124 [2024-07-23 15:11:11.342976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.124 [2024-07-23 15:11:11.343013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:17:16.124 [2024-07-23 15:11:11.343028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.124 [2024-07-23 15:11:11.345489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.124 [2024-07-23 15:11:11.345540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:16.124 pt3 00:17:16.124 
15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:16.124 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:16.124 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:16.124 [2024-07-23 15:11:11.526848] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:16.124 [2024-07-23 15:11:11.529063] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:16.124 [2024-07-23 15:11:11.529132] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:16.124 [2024-07-23 15:11:11.529346] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007880 00:17:16.124 [2024-07-23 15:11:11.529360] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:16.124 [2024-07-23 15:11:11.529490] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:17:16.124 [2024-07-23 15:11:11.529851] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007880 00:17:16.124 [2024-07-23 15:11:11.529869] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007880 00:17:16.124 [2024-07-23 15:11:11.529992] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.124 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:16.124 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:16.124 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:16.124 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:16.124 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:16.124 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:16.124 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:16.124 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:16.124 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:16.124 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:16.124 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.124 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.384 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.384 "name": "raid_bdev1", 00:17:16.384 "uuid": "b7c7d6c6-01f1-4b40-a15e-adad202ed1a5", 00:17:16.384 "strip_size_kb": 64, 00:17:16.384 "state": "online", 00:17:16.384 "raid_level": "raid0", 00:17:16.384 "superblock": true, 00:17:16.384 "num_base_bdevs": 3, 00:17:16.384 "num_base_bdevs_discovered": 3, 00:17:16.384 "num_base_bdevs_operational": 3, 00:17:16.384 "base_bdevs_list": [ 00:17:16.384 { 00:17:16.384 "name": "pt1", 00:17:16.384 "uuid": "00000000-0000-0000-0000-000000000001", 
00:17:16.384 "is_configured": true, 00:17:16.384 "data_offset": 2048, 00:17:16.384 "data_size": 63488 00:17:16.384 }, 00:17:16.384 { 00:17:16.384 "name": "pt2", 00:17:16.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.384 "is_configured": true, 00:17:16.384 "data_offset": 2048, 00:17:16.384 "data_size": 63488 00:17:16.384 }, 00:17:16.384 { 00:17:16.384 "name": "pt3", 00:17:16.384 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.384 "is_configured": true, 00:17:16.384 "data_offset": 2048, 00:17:16.384 "data_size": 63488 00:17:16.384 } 00:17:16.384 ] 00:17:16.384 }' 00:17:16.384 15:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.384 15:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.643 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:16.643 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:16.643 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:16.643 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:16.643 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:16.643 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:16.643 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:16.643 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:16.901 [2024-07-23 15:11:12.311242] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.160 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:17.160 "name": "raid_bdev1", 00:17:17.160 "aliases": [ 00:17:17.160 "b7c7d6c6-01f1-4b40-a15e-adad202ed1a5" 00:17:17.160 ], 00:17:17.160 "product_name": "Raid Volume", 00:17:17.160 "block_size": 512, 00:17:17.160 "num_blocks": 190464, 00:17:17.160 "uuid": "b7c7d6c6-01f1-4b40-a15e-adad202ed1a5", 00:17:17.160 "assigned_rate_limits": { 00:17:17.160 "rw_ios_per_sec": 0, 00:17:17.160 "rw_mbytes_per_sec": 0, 00:17:17.160 "r_mbytes_per_sec": 0, 00:17:17.160 "w_mbytes_per_sec": 0 00:17:17.160 }, 00:17:17.160 "claimed": false, 00:17:17.160 "zoned": false, 00:17:17.160 "supported_io_types": { 00:17:17.160 "read": true, 00:17:17.160 "write": true, 00:17:17.160 "unmap": true, 00:17:17.160 "flush": true, 00:17:17.160 "reset": true, 00:17:17.160 "nvme_admin": false, 00:17:17.160 "nvme_io": false, 00:17:17.160 "nvme_io_md": false, 00:17:17.160 "write_zeroes": true, 00:17:17.160 "zcopy": false, 00:17:17.160 "get_zone_info": false, 00:17:17.160 "zone_management": false, 00:17:17.160 "zone_append": false, 00:17:17.160 "compare": false, 00:17:17.160 "compare_and_write": false, 00:17:17.160 "abort": false, 00:17:17.160 "seek_hole": false, 00:17:17.160 "seek_data": false, 00:17:17.160 "copy": false, 00:17:17.160 "nvme_iov_md": false 00:17:17.160 }, 00:17:17.160 "memory_domains": [ 00:17:17.160 { 00:17:17.160 "dma_device_id": "system", 00:17:17.160 "dma_device_type": 1 00:17:17.160 }, 00:17:17.160 { 00:17:17.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.160 "dma_device_type": 2 00:17:17.160 }, 00:17:17.160 { 00:17:17.160 "dma_device_id": "system", 00:17:17.160 "dma_device_type": 1 00:17:17.160 }, 
00:17:17.160 { 00:17:17.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.160 "dma_device_type": 2 00:17:17.160 }, 00:17:17.160 { 00:17:17.160 "dma_device_id": "system", 00:17:17.160 "dma_device_type": 1 00:17:17.160 }, 00:17:17.160 { 00:17:17.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.160 "dma_device_type": 2 00:17:17.160 } 00:17:17.160 ], 00:17:17.160 "driver_specific": { 00:17:17.160 "raid": { 00:17:17.160 "uuid": "b7c7d6c6-01f1-4b40-a15e-adad202ed1a5", 00:17:17.160 "strip_size_kb": 64, 00:17:17.160 "state": "online", 00:17:17.160 "raid_level": "raid0", 00:17:17.160 "superblock": true, 00:17:17.160 "num_base_bdevs": 3, 00:17:17.160 "num_base_bdevs_discovered": 3, 00:17:17.160 "num_base_bdevs_operational": 3, 00:17:17.160 "base_bdevs_list": [ 00:17:17.160 { 00:17:17.160 "name": "pt1", 00:17:17.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.160 "is_configured": true, 00:17:17.160 "data_offset": 2048, 00:17:17.160 "data_size": 63488 00:17:17.160 }, 00:17:17.160 { 00:17:17.160 "name": "pt2", 00:17:17.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.160 "is_configured": true, 00:17:17.160 "data_offset": 2048, 00:17:17.160 "data_size": 63488 00:17:17.160 }, 00:17:17.160 { 00:17:17.160 "name": "pt3", 00:17:17.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.160 "is_configured": true, 00:17:17.160 "data_offset": 2048, 00:17:17.160 "data_size": 63488 00:17:17.161 } 00:17:17.161 ] 00:17:17.161 } 00:17:17.161 } 00:17:17.161 }' 00:17:17.161 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:17.161 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:17.161 pt2 00:17:17.161 pt3' 00:17:17.161 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:17.161 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:17.161 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:17.419 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:17.419 "name": "pt1", 00:17:17.419 "aliases": [ 00:17:17.419 "00000000-0000-0000-0000-000000000001" 00:17:17.419 ], 00:17:17.419 "product_name": "passthru", 00:17:17.419 "block_size": 512, 00:17:17.419 "num_blocks": 65536, 00:17:17.419 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.419 "assigned_rate_limits": { 00:17:17.419 "rw_ios_per_sec": 0, 00:17:17.419 "rw_mbytes_per_sec": 0, 00:17:17.419 "r_mbytes_per_sec": 0, 00:17:17.419 "w_mbytes_per_sec": 0 00:17:17.419 }, 00:17:17.419 "claimed": true, 00:17:17.419 "claim_type": "exclusive_write", 00:17:17.419 "zoned": false, 00:17:17.419 "supported_io_types": { 00:17:17.419 "read": true, 00:17:17.419 "write": true, 00:17:17.419 "unmap": true, 00:17:17.419 "flush": true, 00:17:17.419 "reset": true, 00:17:17.419 "nvme_admin": false, 00:17:17.419 "nvme_io": false, 00:17:17.419 "nvme_io_md": false, 00:17:17.419 "write_zeroes": true, 00:17:17.419 "zcopy": true, 00:17:17.419 "get_zone_info": false, 00:17:17.419 "zone_management": false, 00:17:17.419 "zone_append": false, 00:17:17.419 "compare": false, 00:17:17.419 "compare_and_write": false, 00:17:17.419 "abort": true, 00:17:17.419 "seek_hole": false, 00:17:17.419 "seek_data": false, 00:17:17.419 "copy": true, 00:17:17.419 "nvme_iov_md": false 
00:17:17.419 }, 00:17:17.419 "memory_domains": [ 00:17:17.419 { 00:17:17.419 "dma_device_id": "system", 00:17:17.419 "dma_device_type": 1 00:17:17.419 }, 00:17:17.419 { 00:17:17.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.419 "dma_device_type": 2 00:17:17.419 } 00:17:17.419 ], 00:17:17.419 "driver_specific": { 00:17:17.419 "passthru": { 00:17:17.419 "name": "pt1", 00:17:17.419 "base_bdev_name": "malloc1" 00:17:17.419 } 00:17:17.419 } 00:17:17.419 }' 00:17:17.419 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.419 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.419 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:17.419 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.419 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.419 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:17.419 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.420 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.420 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:17.420 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.420 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.420 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:17.420 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:17.420 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:17.420 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:17.679 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:17.679 "name": "pt2", 00:17:17.679 "aliases": [ 00:17:17.679 "00000000-0000-0000-0000-000000000002" 00:17:17.679 ], 00:17:17.679 "product_name": "passthru", 00:17:17.679 "block_size": 512, 00:17:17.679 "num_blocks": 65536, 00:17:17.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.679 "assigned_rate_limits": { 00:17:17.679 "rw_ios_per_sec": 0, 00:17:17.679 "rw_mbytes_per_sec": 0, 00:17:17.679 "r_mbytes_per_sec": 0, 00:17:17.679 "w_mbytes_per_sec": 0 00:17:17.679 }, 00:17:17.679 "claimed": true, 00:17:17.679 "claim_type": "exclusive_write", 00:17:17.679 "zoned": false, 00:17:17.679 "supported_io_types": { 00:17:17.679 "read": true, 00:17:17.679 "write": true, 00:17:17.679 "unmap": true, 00:17:17.679 "flush": true, 00:17:17.679 "reset": true, 00:17:17.679 "nvme_admin": false, 00:17:17.679 "nvme_io": false, 00:17:17.679 "nvme_io_md": false, 00:17:17.679 "write_zeroes": true, 00:17:17.679 "zcopy": true, 00:17:17.679 "get_zone_info": false, 00:17:17.679 "zone_management": false, 00:17:17.679 "zone_append": false, 00:17:17.679 "compare": false, 00:17:17.679 "compare_and_write": false, 00:17:17.679 "abort": true, 00:17:17.679 "seek_hole": false, 00:17:17.679 "seek_data": false, 00:17:17.679 "copy": true, 00:17:17.679 "nvme_iov_md": false 00:17:17.679 }, 00:17:17.679 "memory_domains": [ 00:17:17.679 { 00:17:17.679 "dma_device_id": "system", 00:17:17.679 "dma_device_type": 1 00:17:17.679 }, 
00:17:17.679 { 00:17:17.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.679 "dma_device_type": 2 00:17:17.679 } 00:17:17.679 ], 00:17:17.679 "driver_specific": { 00:17:17.679 "passthru": { 00:17:17.679 "name": "pt2", 00:17:17.679 "base_bdev_name": "malloc2" 00:17:17.679 } 00:17:17.679 } 00:17:17.679 }' 00:17:17.679 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.679 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.679 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:17.679 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.679 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.679 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:17.679 15:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.679 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.679 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:17.679 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.679 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.679 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:17.679 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:17.679 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:17.679 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:17.938 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:17.938 "name": "pt3", 00:17:17.938 "aliases": [ 00:17:17.938 "00000000-0000-0000-0000-000000000003" 00:17:17.938 ], 00:17:17.939 "product_name": "passthru", 00:17:17.939 "block_size": 512, 00:17:17.939 "num_blocks": 65536, 00:17:17.939 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.939 "assigned_rate_limits": { 00:17:17.939 "rw_ios_per_sec": 0, 00:17:17.939 "rw_mbytes_per_sec": 0, 00:17:17.939 "r_mbytes_per_sec": 0, 00:17:17.939 "w_mbytes_per_sec": 0 00:17:17.939 }, 00:17:17.939 "claimed": true, 00:17:17.939 "claim_type": "exclusive_write", 00:17:17.939 "zoned": false, 00:17:17.939 "supported_io_types": { 00:17:17.939 "read": true, 00:17:17.939 "write": true, 00:17:17.939 "unmap": true, 00:17:17.939 "flush": true, 00:17:17.939 "reset": true, 00:17:17.939 "nvme_admin": false, 00:17:17.939 "nvme_io": false, 00:17:17.939 "nvme_io_md": false, 00:17:17.939 "write_zeroes": true, 00:17:17.939 "zcopy": true, 00:17:17.939 "get_zone_info": false, 00:17:17.939 "zone_management": false, 00:17:17.939 "zone_append": false, 00:17:17.939 "compare": false, 00:17:17.939 "compare_and_write": false, 00:17:17.939 "abort": true, 00:17:17.939 "seek_hole": false, 00:17:17.939 "seek_data": false, 00:17:17.939 "copy": true, 00:17:17.939 "nvme_iov_md": false 00:17:17.939 }, 00:17:17.939 "memory_domains": [ 00:17:17.939 { 00:17:17.939 "dma_device_id": "system", 00:17:17.939 "dma_device_type": 1 00:17:17.939 }, 00:17:17.939 { 00:17:17.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.939 "dma_device_type": 2 00:17:17.939 } 00:17:17.939 ], 00:17:17.939 
"driver_specific": { 00:17:17.939 "passthru": { 00:17:17.939 "name": "pt3", 00:17:17.939 "base_bdev_name": "malloc3" 00:17:17.939 } 00:17:17.939 } 00:17:17.939 }' 00:17:17.939 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.939 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.939 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:17.939 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.939 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.939 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:17.939 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.197 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.198 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:18.198 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.198 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.198 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:18.198 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:18.198 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:18.198 [2024-07-23 15:11:13.555533] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:18.198 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=b7c7d6c6-01f1-4b40-a15e-adad202ed1a5 00:17:18.198 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z b7c7d6c6-01f1-4b40-a15e-adad202ed1a5 ']' 00:17:18.198 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:18.456 [2024-07-23 15:11:13.779297] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.456 [2024-07-23 15:11:13.779339] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.456 [2024-07-23 15:11:13.779441] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.456 [2024-07-23 15:11:13.779503] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.456 [2024-07-23 15:11:13.779518] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007880 name raid_bdev1, state offline 00:17:18.456 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.456 15:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:18.751 15:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:18.751 15:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:18.751 15:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:18.751 15:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:19.022 15:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.022 15:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:19.022 15:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.022 15:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:19.281 15:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:19.281 15:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:19.539 15:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:19.539 15:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:19.539 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:19.539 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:19.539 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.539 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.539 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.539 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.539 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.539 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.539 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.539 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:19.539 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:19.798 [2024-07-23 15:11:14.983582] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:19.798 [2024-07-23 15:11:14.985739] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:19.798 [2024-07-23 15:11:14.985811] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:19.798 [2024-07-23 15:11:14.985866] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:19.798 [2024-07-23 
15:11:14.985915] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:19.798 [2024-07-23 15:11:14.985942] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:19.798 [2024-07-23 15:11:14.985958] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.798 [2024-07-23 15:11:14.985972] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state configuring 00:17:19.798 request: 00:17:19.798 { 00:17:19.798 "name": "raid_bdev1", 00:17:19.798 "raid_level": "raid0", 00:17:19.798 "base_bdevs": [ 00:17:19.798 "malloc1", 00:17:19.798 "malloc2", 00:17:19.798 "malloc3" 00:17:19.798 ], 00:17:19.798 "strip_size_kb": 64, 00:17:19.798 "superblock": false, 00:17:19.798 "method": "bdev_raid_create", 00:17:19.798 "req_id": 1 00:17:19.798 } 00:17:19.798 Got JSON-RPC error response 00:17:19.798 response: 00:17:19.798 { 00:17:19.798 "code": -17, 00:17:19.798 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:19.798 } 00:17:19.798 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:19.798 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:19.798 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:19.798 15:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:19.798 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.798 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:19.798 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:19.798 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:19.798 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:20.057 [2024-07-23 15:11:15.347558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:20.057 [2024-07-23 15:11:15.347638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.057 [2024-07-23 15:11:15.347661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:17:20.057 [2024-07-23 15:11:15.347676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.057 [2024-07-23 15:11:15.350098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.057 [2024-07-23 15:11:15.350139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:20.057 [2024-07-23 15:11:15.350217] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:20.057 [2024-07-23 15:11:15.350258] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:20.057 pt1 00:17:20.057 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:20.057 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:20.057 15:11:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:20.057 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:20.057 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:20.057 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:20.057 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.057 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.057 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.057 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.057 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.057 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.317 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:20.317 "name": "raid_bdev1", 00:17:20.317 "uuid": "b7c7d6c6-01f1-4b40-a15e-adad202ed1a5", 00:17:20.317 "strip_size_kb": 64, 00:17:20.317 "state": "configuring", 00:17:20.317 "raid_level": "raid0", 00:17:20.317 "superblock": true, 00:17:20.317 "num_base_bdevs": 3, 00:17:20.317 "num_base_bdevs_discovered": 1, 00:17:20.317 "num_base_bdevs_operational": 3, 00:17:20.317 "base_bdevs_list": [ 00:17:20.317 { 00:17:20.317 "name": "pt1", 00:17:20.317 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.317 "is_configured": true, 00:17:20.317 "data_offset": 2048, 00:17:20.317 "data_size": 63488 00:17:20.317 }, 00:17:20.317 { 00:17:20.317 "name": null, 00:17:20.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.317 "is_configured": false, 00:17:20.317 "data_offset": 2048, 00:17:20.317 "data_size": 63488 00:17:20.317 }, 00:17:20.317 { 00:17:20.317 "name": null, 00:17:20.317 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:20.317 "is_configured": false, 00:17:20.317 "data_offset": 2048, 00:17:20.317 "data_size": 63488 00:17:20.317 } 00:17:20.317 ] 00:17:20.317 }' 00:17:20.317 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:20.317 15:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.575 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:17:20.575 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.575 [2024-07-23 15:11:15.975710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.575 [2024-07-23 15:11:15.975808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.575 [2024-07-23 15:11:15.975836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:17:20.575 [2024-07-23 15:11:15.975852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.575 [2024-07-23 15:11:15.976260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.575 [2024-07-23 15:11:15.976286] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:17:20.575 [2024-07-23 15:11:15.976358] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:20.575 [2024-07-23 15:11:15.976385] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.575 pt2 00:17:20.575 15:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:20.834 [2024-07-23 15:11:16.223859] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:20.834 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:20.834 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:20.834 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:20.834 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:20.834 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:20.834 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:20.834 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.834 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.834 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.834 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.834 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.834 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.093 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.093 "name": "raid_bdev1", 00:17:21.093 "uuid": "b7c7d6c6-01f1-4b40-a15e-adad202ed1a5", 00:17:21.093 "strip_size_kb": 64, 00:17:21.093 "state": "configuring", 00:17:21.093 "raid_level": "raid0", 00:17:21.093 "superblock": true, 00:17:21.093 "num_base_bdevs": 3, 00:17:21.093 "num_base_bdevs_discovered": 1, 00:17:21.093 "num_base_bdevs_operational": 3, 00:17:21.093 "base_bdevs_list": [ 00:17:21.093 { 00:17:21.093 "name": "pt1", 00:17:21.093 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.093 "is_configured": true, 00:17:21.093 "data_offset": 2048, 00:17:21.093 "data_size": 63488 00:17:21.093 }, 00:17:21.093 { 00:17:21.093 "name": null, 00:17:21.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.093 "is_configured": false, 00:17:21.093 "data_offset": 2048, 00:17:21.093 "data_size": 63488 00:17:21.093 }, 00:17:21.093 { 00:17:21.093 "name": null, 00:17:21.093 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:21.093 "is_configured": false, 00:17:21.093 "data_offset": 2048, 00:17:21.093 "data_size": 63488 00:17:21.093 } 00:17:21.093 ] 00:17:21.093 }' 00:17:21.093 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.093 15:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.352 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:21.352 15:11:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:21.353 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:21.611 [2024-07-23 15:11:16.843945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:21.611 [2024-07-23 15:11:16.844028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.611 [2024-07-23 15:11:16.844056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:17:21.611 [2024-07-23 15:11:16.844069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.611 [2024-07-23 15:11:16.844481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.611 [2024-07-23 15:11:16.844501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:21.611 [2024-07-23 15:11:16.844573] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:21.611 [2024-07-23 15:11:16.844595] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.611 pt2 00:17:21.611 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:21.611 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:21.611 15:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:21.869 [2024-07-23 15:11:17.099974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:21.869 [2024-07-23 15:11:17.100045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.869 [2024-07-23 15:11:17.100071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:17:21.869 [2024-07-23 15:11:17.100083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.870 [2024-07-23 15:11:17.100686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.870 [2024-07-23 15:11:17.100706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:21.870 [2024-07-23 15:11:17.100785] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:21.870 [2024-07-23 15:11:17.100825] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:21.870 [2024-07-23 15:11:17.100939] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:17:21.870 [2024-07-23 15:11:17.100949] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:21.870 [2024-07-23 15:11:17.101021] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:17:21.870 [2024-07-23 15:11:17.101296] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:17:21.870 [2024-07-23 15:11:17.101311] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008a80 00:17:21.870 [2024-07-23 15:11:17.101400] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.870 pt3 00:17:21.870 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( 
i++ )) 00:17:21.870 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:21.870 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:21.870 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:21.870 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:21.870 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:21.870 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:21.870 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:21.870 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.870 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.870 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.870 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.870 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.870 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.130 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.130 "name": "raid_bdev1", 00:17:22.130 "uuid": "b7c7d6c6-01f1-4b40-a15e-adad202ed1a5", 00:17:22.130 "strip_size_kb": 64, 00:17:22.130 "state": "online", 00:17:22.130 "raid_level": "raid0", 00:17:22.130 "superblock": true, 00:17:22.130 "num_base_bdevs": 3, 00:17:22.130 "num_base_bdevs_discovered": 3, 00:17:22.130 "num_base_bdevs_operational": 3, 00:17:22.130 "base_bdevs_list": [ 00:17:22.130 { 00:17:22.130 "name": "pt1", 00:17:22.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:22.130 "is_configured": true, 00:17:22.130 "data_offset": 2048, 00:17:22.130 "data_size": 63488 00:17:22.130 }, 00:17:22.130 { 00:17:22.130 "name": "pt2", 00:17:22.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.130 "is_configured": true, 00:17:22.130 "data_offset": 2048, 00:17:22.130 "data_size": 63488 00:17:22.130 }, 00:17:22.130 { 00:17:22.130 "name": "pt3", 00:17:22.130 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:22.130 "is_configured": true, 00:17:22.130 "data_offset": 2048, 00:17:22.130 "data_size": 63488 00:17:22.130 } 00:17:22.130 ] 00:17:22.130 }' 00:17:22.130 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.130 15:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.390 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:22.390 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:22.390 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:22.390 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:22.390 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:22.390 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
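[editor's note, not part of the captured log] The verify_raid_bdev_state calls traced above (bdev_raid.sh@116-@128) reduce to fetching the raid bdev's JSON over the RPC socket and asserting the expected fields. A minimal stand-alone sketch of the same check, built only from the RPC call and jq filter visible in the trace (the exact assertions inside the helper may differ), would be:

  raid_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [ "$(jq -r .state <<< "$raid_json")" = online ]        # "configuring" until all three passthru bdevs are claimed
  [ "$(jq -r .raid_level <<< "$raid_json")" = raid0 ]
  [ "$(jq -r .strip_size_kb <<< "$raid_json")" = 64 ]
  [ "$(jq -r .num_base_bdevs_discovered <<< "$raid_json")" = 3 ]

The blockcnt logged when the array came online is consistent with the per-bdev figures in these dumps: each passthru bdev is 65536 blocks, of which the first 2048 are reserved for the raid superblock (data_offset 2048, data_size 63488), so raid0 across three of them yields 3 * 63488 = 190464 blocks of 512 bytes.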
00:17:22.390 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:22.390 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:22.390 [2024-07-23 15:11:17.784423] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.390 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:22.390 "name": "raid_bdev1", 00:17:22.390 "aliases": [ 00:17:22.390 "b7c7d6c6-01f1-4b40-a15e-adad202ed1a5" 00:17:22.390 ], 00:17:22.390 "product_name": "Raid Volume", 00:17:22.390 "block_size": 512, 00:17:22.390 "num_blocks": 190464, 00:17:22.390 "uuid": "b7c7d6c6-01f1-4b40-a15e-adad202ed1a5", 00:17:22.390 "assigned_rate_limits": { 00:17:22.390 "rw_ios_per_sec": 0, 00:17:22.390 "rw_mbytes_per_sec": 0, 00:17:22.390 "r_mbytes_per_sec": 0, 00:17:22.390 "w_mbytes_per_sec": 0 00:17:22.390 }, 00:17:22.390 "claimed": false, 00:17:22.390 "zoned": false, 00:17:22.390 "supported_io_types": { 00:17:22.390 "read": true, 00:17:22.390 "write": true, 00:17:22.390 "unmap": true, 00:17:22.390 "flush": true, 00:17:22.390 "reset": true, 00:17:22.390 "nvme_admin": false, 00:17:22.390 "nvme_io": false, 00:17:22.390 "nvme_io_md": false, 00:17:22.390 "write_zeroes": true, 00:17:22.390 "zcopy": false, 00:17:22.390 "get_zone_info": false, 00:17:22.390 "zone_management": false, 00:17:22.390 "zone_append": false, 00:17:22.390 "compare": false, 00:17:22.390 "compare_and_write": false, 00:17:22.390 "abort": false, 00:17:22.390 "seek_hole": false, 00:17:22.390 "seek_data": false, 00:17:22.390 "copy": false, 00:17:22.390 "nvme_iov_md": false 00:17:22.390 }, 00:17:22.390 "memory_domains": [ 00:17:22.390 { 00:17:22.390 "dma_device_id": "system", 00:17:22.390 "dma_device_type": 1 00:17:22.390 }, 00:17:22.390 { 00:17:22.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.390 "dma_device_type": 2 00:17:22.390 }, 00:17:22.390 { 00:17:22.390 "dma_device_id": "system", 00:17:22.390 "dma_device_type": 1 00:17:22.390 }, 00:17:22.390 { 00:17:22.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.390 "dma_device_type": 2 00:17:22.390 }, 00:17:22.390 { 00:17:22.390 "dma_device_id": "system", 00:17:22.390 "dma_device_type": 1 00:17:22.390 }, 00:17:22.390 { 00:17:22.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.390 "dma_device_type": 2 00:17:22.390 } 00:17:22.390 ], 00:17:22.390 "driver_specific": { 00:17:22.390 "raid": { 00:17:22.390 "uuid": "b7c7d6c6-01f1-4b40-a15e-adad202ed1a5", 00:17:22.390 "strip_size_kb": 64, 00:17:22.390 "state": "online", 00:17:22.390 "raid_level": "raid0", 00:17:22.390 "superblock": true, 00:17:22.390 "num_base_bdevs": 3, 00:17:22.390 "num_base_bdevs_discovered": 3, 00:17:22.390 "num_base_bdevs_operational": 3, 00:17:22.390 "base_bdevs_list": [ 00:17:22.390 { 00:17:22.390 "name": "pt1", 00:17:22.390 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:22.390 "is_configured": true, 00:17:22.390 "data_offset": 2048, 00:17:22.390 "data_size": 63488 00:17:22.390 }, 00:17:22.390 { 00:17:22.390 "name": "pt2", 00:17:22.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.390 "is_configured": true, 00:17:22.390 "data_offset": 2048, 00:17:22.390 "data_size": 63488 00:17:22.390 }, 00:17:22.390 { 00:17:22.390 "name": "pt3", 00:17:22.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:22.390 "is_configured": true, 00:17:22.390 "data_offset": 2048, 00:17:22.390 "data_size": 63488 00:17:22.390 } 
00:17:22.390 ] 00:17:22.390 } 00:17:22.390 } 00:17:22.390 }' 00:17:22.390 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:22.390 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:22.390 pt2 00:17:22.390 pt3' 00:17:22.390 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:22.390 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:22.390 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:22.649 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:22.649 "name": "pt1", 00:17:22.649 "aliases": [ 00:17:22.649 "00000000-0000-0000-0000-000000000001" 00:17:22.649 ], 00:17:22.649 "product_name": "passthru", 00:17:22.649 "block_size": 512, 00:17:22.649 "num_blocks": 65536, 00:17:22.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:22.649 "assigned_rate_limits": { 00:17:22.649 "rw_ios_per_sec": 0, 00:17:22.649 "rw_mbytes_per_sec": 0, 00:17:22.649 "r_mbytes_per_sec": 0, 00:17:22.649 "w_mbytes_per_sec": 0 00:17:22.649 }, 00:17:22.649 "claimed": true, 00:17:22.649 "claim_type": "exclusive_write", 00:17:22.649 "zoned": false, 00:17:22.649 "supported_io_types": { 00:17:22.649 "read": true, 00:17:22.649 "write": true, 00:17:22.649 "unmap": true, 00:17:22.649 "flush": true, 00:17:22.649 "reset": true, 00:17:22.649 "nvme_admin": false, 00:17:22.649 "nvme_io": false, 00:17:22.649 "nvme_io_md": false, 00:17:22.649 "write_zeroes": true, 00:17:22.649 "zcopy": true, 00:17:22.649 "get_zone_info": false, 00:17:22.649 "zone_management": false, 00:17:22.649 "zone_append": false, 00:17:22.649 "compare": false, 00:17:22.649 "compare_and_write": false, 00:17:22.649 "abort": true, 00:17:22.649 "seek_hole": false, 00:17:22.649 "seek_data": false, 00:17:22.649 "copy": true, 00:17:22.649 "nvme_iov_md": false 00:17:22.649 }, 00:17:22.649 "memory_domains": [ 00:17:22.649 { 00:17:22.649 "dma_device_id": "system", 00:17:22.649 "dma_device_type": 1 00:17:22.649 }, 00:17:22.649 { 00:17:22.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.649 "dma_device_type": 2 00:17:22.649 } 00:17:22.649 ], 00:17:22.649 "driver_specific": { 00:17:22.649 "passthru": { 00:17:22.649 "name": "pt1", 00:17:22.649 "base_bdev_name": "malloc1" 00:17:22.649 } 00:17:22.649 } 00:17:22.649 }' 00:17:22.649 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.649 15:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.649 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:22.649 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.649 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.649 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:22.649 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.649 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.649 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:22.649 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:17:22.649 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.908 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:22.908 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:22.908 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:22.908 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:23.166 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:23.166 "name": "pt2", 00:17:23.166 "aliases": [ 00:17:23.166 "00000000-0000-0000-0000-000000000002" 00:17:23.166 ], 00:17:23.166 "product_name": "passthru", 00:17:23.166 "block_size": 512, 00:17:23.166 "num_blocks": 65536, 00:17:23.166 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:23.166 "assigned_rate_limits": { 00:17:23.166 "rw_ios_per_sec": 0, 00:17:23.166 "rw_mbytes_per_sec": 0, 00:17:23.166 "r_mbytes_per_sec": 0, 00:17:23.166 "w_mbytes_per_sec": 0 00:17:23.166 }, 00:17:23.166 "claimed": true, 00:17:23.166 "claim_type": "exclusive_write", 00:17:23.166 "zoned": false, 00:17:23.166 "supported_io_types": { 00:17:23.166 "read": true, 00:17:23.167 "write": true, 00:17:23.167 "unmap": true, 00:17:23.167 "flush": true, 00:17:23.167 "reset": true, 00:17:23.167 "nvme_admin": false, 00:17:23.167 "nvme_io": false, 00:17:23.167 "nvme_io_md": false, 00:17:23.167 "write_zeroes": true, 00:17:23.167 "zcopy": true, 00:17:23.167 "get_zone_info": false, 00:17:23.167 "zone_management": false, 00:17:23.167 "zone_append": false, 00:17:23.167 "compare": false, 00:17:23.167 "compare_and_write": false, 00:17:23.167 "abort": true, 00:17:23.167 "seek_hole": false, 00:17:23.167 "seek_data": false, 00:17:23.167 "copy": true, 00:17:23.167 "nvme_iov_md": false 00:17:23.167 }, 00:17:23.167 "memory_domains": [ 00:17:23.167 { 00:17:23.167 "dma_device_id": "system", 00:17:23.167 "dma_device_type": 1 00:17:23.167 }, 00:17:23.167 { 00:17:23.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.167 "dma_device_type": 2 00:17:23.167 } 00:17:23.167 ], 00:17:23.167 "driver_specific": { 00:17:23.167 "passthru": { 00:17:23.167 "name": "pt2", 00:17:23.167 "base_bdev_name": "malloc2" 00:17:23.167 } 00:17:23.167 } 00:17:23.167 }' 00:17:23.167 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:23.167 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:23.167 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:23.167 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.167 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.167 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:23.167 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.167 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.167 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:23.167 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.167 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.167 15:11:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:23.167 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:23.167 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:23.167 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:23.425 "name": "pt3", 00:17:23.425 "aliases": [ 00:17:23.425 "00000000-0000-0000-0000-000000000003" 00:17:23.425 ], 00:17:23.425 "product_name": "passthru", 00:17:23.425 "block_size": 512, 00:17:23.425 "num_blocks": 65536, 00:17:23.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:23.425 "assigned_rate_limits": { 00:17:23.425 "rw_ios_per_sec": 0, 00:17:23.425 "rw_mbytes_per_sec": 0, 00:17:23.425 "r_mbytes_per_sec": 0, 00:17:23.425 "w_mbytes_per_sec": 0 00:17:23.425 }, 00:17:23.425 "claimed": true, 00:17:23.425 "claim_type": "exclusive_write", 00:17:23.425 "zoned": false, 00:17:23.425 "supported_io_types": { 00:17:23.425 "read": true, 00:17:23.425 "write": true, 00:17:23.425 "unmap": true, 00:17:23.425 "flush": true, 00:17:23.425 "reset": true, 00:17:23.425 "nvme_admin": false, 00:17:23.425 "nvme_io": false, 00:17:23.425 "nvme_io_md": false, 00:17:23.425 "write_zeroes": true, 00:17:23.425 "zcopy": true, 00:17:23.425 "get_zone_info": false, 00:17:23.425 "zone_management": false, 00:17:23.425 "zone_append": false, 00:17:23.425 "compare": false, 00:17:23.425 "compare_and_write": false, 00:17:23.425 "abort": true, 00:17:23.425 "seek_hole": false, 00:17:23.425 "seek_data": false, 00:17:23.425 "copy": true, 00:17:23.425 "nvme_iov_md": false 00:17:23.425 }, 00:17:23.425 "memory_domains": [ 00:17:23.425 { 00:17:23.425 "dma_device_id": "system", 00:17:23.425 "dma_device_type": 1 00:17:23.425 }, 00:17:23.425 { 00:17:23.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.425 "dma_device_type": 2 00:17:23.425 } 00:17:23.425 ], 00:17:23.425 "driver_specific": { 00:17:23.425 "passthru": { 00:17:23.425 "name": "pt3", 00:17:23.425 "base_bdev_name": "malloc3" 00:17:23.425 } 00:17:23.425 } 00:17:23.425 }' 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:23.425 15:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:23.684 [2024-07-23 15:11:19.040728] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:23.684 15:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' b7c7d6c6-01f1-4b40-a15e-adad202ed1a5 '!=' b7c7d6c6-01f1-4b40-a15e-adad202ed1a5 ']' 00:17:23.684 15:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:17:23.684 15:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:23.684 15:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:23.684 15:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 93033 00:17:23.684 15:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 93033 ']' 00:17:23.684 15:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 93033 00:17:23.684 15:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:17:23.684 15:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.685 15:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93033 00:17:23.685 killing process with pid 93033 00:17:23.685 15:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:23.685 15:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:23.685 15:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93033' 00:17:23.685 15:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 93033 00:17:23.685 15:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 93033 00:17:23.685 [2024-07-23 15:11:19.102226] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:23.685 [2024-07-23 15:11:19.102317] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.685 [2024-07-23 15:11:19.102375] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.685 [2024-07-23 15:11:19.102386] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state offline 00:17:23.943 [2024-07-23 15:11:19.138335] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:24.202 15:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:24.203 00:17:24.203 real 0m10.243s 00:17:24.203 user 0m17.363s 00:17:24.203 sys 0m2.228s 00:17:24.203 15:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:24.203 15:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.203 ************************************ 00:17:24.203 END TEST raid_superblock_test 00:17:24.203 ************************************ 00:17:24.203 15:11:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:24.203 15:11:19 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:17:24.203 15:11:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:24.203 15:11:19 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.203 15:11:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:24.203 ************************************ 00:17:24.203 START TEST raid_read_error_test 00:17:24.203 ************************************ 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.49aZKR44rJ 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=93447 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 93447 /var/tmp/spdk-raid.sock 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L 
bdev_raid 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 93447 ']' 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.203 15:11:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.203 [2024-07-23 15:11:19.516663] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:17:24.203 [2024-07-23 15:11:19.516901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93447 ] 00:17:24.462 [2024-07-23 15:11:19.668919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.462 [2024-07-23 15:11:19.715968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.462 [2024-07-23 15:11:19.761726] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.029 15:11:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.029 15:11:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:25.029 15:11:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:25.029 15:11:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:25.356 BaseBdev1_malloc 00:17:25.356 15:11:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:25.643 true 00:17:25.643 15:11:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:25.643 [2024-07-23 15:11:20.945940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:25.643 [2024-07-23 15:11:20.946017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.643 [2024-07-23 15:11:20.946049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:17:25.643 [2024-07-23 15:11:20.946062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.643 [2024-07-23 15:11:20.948813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.643 [2024-07-23 15:11:20.948856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:25.643 BaseBdev1 00:17:25.643 15:11:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:25.643 15:11:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:25.903 BaseBdev2_malloc 00:17:25.903 15:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:26.161 true 00:17:26.161 15:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:26.161 [2024-07-23 15:11:21.547667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:26.161 [2024-07-23 15:11:21.547741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.161 [2024-07-23 15:11:21.547771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:17:26.161 [2024-07-23 15:11:21.547783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.161 [2024-07-23 15:11:21.550305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.161 [2024-07-23 15:11:21.550347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:26.161 BaseBdev2 00:17:26.161 15:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:26.161 15:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:26.419 BaseBdev3_malloc 00:17:26.419 15:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:26.678 true 00:17:26.678 15:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:26.678 [2024-07-23 15:11:22.088390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:26.678 [2024-07-23 15:11:22.088467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.678 [2024-07-23 15:11:22.088497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:17:26.678 [2024-07-23 15:11:22.088509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.678 [2024-07-23 15:11:22.091157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.678 [2024-07-23 15:11:22.091201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:26.678 BaseBdev3 00:17:26.936 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:26.936 [2024-07-23 15:11:22.268489] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.936 [2024-07-23 15:11:22.270719] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:26.936 [2024-07-23 15:11:22.270820] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:26.936 [2024-07-23 15:11:22.271024] bdev_raid.c:1720:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x516000008180 00:17:26.936 [2024-07-23 15:11:22.271053] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:26.936 [2024-07-23 15:11:22.271192] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:17:26.936 [2024-07-23 15:11:22.271540] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:17:26.936 [2024-07-23 15:11:22.271561] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:17:26.936 [2024-07-23 15:11:22.271683] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.937 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:26.937 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:26.937 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:26.937 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:26.937 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:26.937 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:26.937 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.937 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.937 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.937 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.937 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.937 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.195 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:27.195 "name": "raid_bdev1", 00:17:27.195 "uuid": "4b9915e5-b7ed-4666-86dc-f27dabb714c9", 00:17:27.195 "strip_size_kb": 64, 00:17:27.195 "state": "online", 00:17:27.195 "raid_level": "raid0", 00:17:27.195 "superblock": true, 00:17:27.195 "num_base_bdevs": 3, 00:17:27.195 "num_base_bdevs_discovered": 3, 00:17:27.195 "num_base_bdevs_operational": 3, 00:17:27.195 "base_bdevs_list": [ 00:17:27.195 { 00:17:27.195 "name": "BaseBdev1", 00:17:27.195 "uuid": "550424a4-368c-588a-9c97-eb2bc6ad23a6", 00:17:27.195 "is_configured": true, 00:17:27.195 "data_offset": 2048, 00:17:27.195 "data_size": 63488 00:17:27.195 }, 00:17:27.195 { 00:17:27.195 "name": "BaseBdev2", 00:17:27.195 "uuid": "bd64d3b7-0a9e-5bf6-ba57-096cf708dc2a", 00:17:27.195 "is_configured": true, 00:17:27.195 "data_offset": 2048, 00:17:27.195 "data_size": 63488 00:17:27.195 }, 00:17:27.195 { 00:17:27.195 "name": "BaseBdev3", 00:17:27.195 "uuid": "269bdb1b-6002-51b0-8b1c-2f5249d531ed", 00:17:27.195 "is_configured": true, 00:17:27.195 "data_offset": 2048, 00:17:27.195 "data_size": 63488 00:17:27.195 } 00:17:27.195 ] 00:17:27.195 }' 00:17:27.195 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:27.195 15:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 15:11:22 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:27.453 15:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:27.711 [2024-07-23 15:11:22.917046] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:17:28.646 15:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.646 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.904 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.904 "name": "raid_bdev1", 00:17:28.904 "uuid": "4b9915e5-b7ed-4666-86dc-f27dabb714c9", 00:17:28.904 "strip_size_kb": 64, 00:17:28.904 "state": "online", 00:17:28.904 "raid_level": "raid0", 00:17:28.904 "superblock": true, 00:17:28.904 "num_base_bdevs": 3, 00:17:28.904 "num_base_bdevs_discovered": 3, 00:17:28.904 "num_base_bdevs_operational": 3, 00:17:28.904 "base_bdevs_list": [ 00:17:28.904 { 00:17:28.904 "name": "BaseBdev1", 00:17:28.904 "uuid": "550424a4-368c-588a-9c97-eb2bc6ad23a6", 00:17:28.904 "is_configured": true, 00:17:28.904 "data_offset": 2048, 00:17:28.904 "data_size": 63488 00:17:28.904 }, 00:17:28.904 { 00:17:28.904 "name": "BaseBdev2", 00:17:28.904 "uuid": "bd64d3b7-0a9e-5bf6-ba57-096cf708dc2a", 00:17:28.904 "is_configured": true, 00:17:28.904 "data_offset": 2048, 00:17:28.904 "data_size": 63488 00:17:28.904 }, 00:17:28.904 { 00:17:28.904 "name": "BaseBdev3", 00:17:28.904 "uuid": "269bdb1b-6002-51b0-8b1c-2f5249d531ed", 00:17:28.904 "is_configured": true, 00:17:28.904 "data_offset": 2048, 00:17:28.904 "data_size": 63488 00:17:28.904 } 00:17:28.904 ] 00:17:28.904 }' 00:17:28.904 15:11:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.904 15:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.162 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:29.420 [2024-07-23 15:11:24.710559] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.420 [2024-07-23 15:11:24.710617] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.420 [2024-07-23 15:11:24.713043] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.420 [2024-07-23 15:11:24.713096] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.420 [2024-07-23 15:11:24.713132] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.420 [2024-07-23 15:11:24.713147] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:17:29.420 0 00:17:29.420 15:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 93447 00:17:29.420 15:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 93447 ']' 00:17:29.420 15:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 93447 00:17:29.420 15:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:29.420 15:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:29.420 15:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93447 00:17:29.420 15:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:29.420 15:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:29.420 killing process with pid 93447 00:17:29.420 15:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93447' 00:17:29.420 15:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 93447 00:17:29.420 [2024-07-23 15:11:24.768186] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:29.420 15:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 93447 00:17:29.420 [2024-07-23 15:11:24.794213] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:29.679 15:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:29.679 15:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.49aZKR44rJ 00:17:29.679 15:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:29.679 15:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.56 00:17:29.679 15:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:29.679 15:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:29.679 15:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:29.679 15:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.56 != \0\.\0\0 ]] 00:17:29.679 00:17:29.679 real 0m5.612s 00:17:29.679 user 0m8.403s 00:17:29.679 sys 0m1.011s 00:17:29.679 15:11:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:29.679 15:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.679 ************************************ 00:17:29.679 END TEST raid_read_error_test 00:17:29.679 ************************************ 00:17:29.679 15:11:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:29.679 15:11:25 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:17:29.679 15:11:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:29.679 15:11:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:29.679 15:11:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:29.938 ************************************ 00:17:29.938 START TEST raid_write_error_test 00:17:29.938 ************************************ 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 
64' 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.byIRs0Ze84 00:17:29.938 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=93610 00:17:29.939 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 93610 /var/tmp/spdk-raid.sock 00:17:29.939 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:29.939 15:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 93610 ']' 00:17:29.939 15:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:29.939 15:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:29.939 15:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:29.939 15:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.939 15:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.939 [2024-07-23 15:11:25.176949] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:17:29.939 [2024-07-23 15:11:25.177098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93610 ] 00:17:29.939 [2024-07-23 15:11:25.320250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.939 [2024-07-23 15:11:25.365611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.197 [2024-07-23 15:11:25.411684] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.197 15:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.197 15:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:30.197 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:30.197 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:30.456 BaseBdev1_malloc 00:17:30.456 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:30.456 true 00:17:30.715 15:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:30.715 [2024-07-23 15:11:26.047942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:30.715 [2024-07-23 15:11:26.048027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.715 [2024-07-23 
15:11:26.048061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:17:30.715 [2024-07-23 15:11:26.048073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.715 [2024-07-23 15:11:26.050640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.715 [2024-07-23 15:11:26.050689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:30.715 BaseBdev1 00:17:30.715 15:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:30.715 15:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:30.974 BaseBdev2_malloc 00:17:30.974 15:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:31.232 true 00:17:31.232 15:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:31.232 [2024-07-23 15:11:26.657836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:31.232 [2024-07-23 15:11:26.657915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.232 [2024-07-23 15:11:26.657947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:17:31.232 [2024-07-23 15:11:26.657960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.232 [2024-07-23 15:11:26.660446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.232 [2024-07-23 15:11:26.660490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:31.232 BaseBdev2 00:17:31.491 15:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:31.491 15:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:31.491 BaseBdev3_malloc 00:17:31.491 15:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:31.749 true 00:17:31.749 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:32.008 [2024-07-23 15:11:27.247248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:32.008 [2024-07-23 15:11:27.247323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.008 [2024-07-23 15:11:27.247353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:17:32.008 [2024-07-23 15:11:27.247365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.008 [2024-07-23 15:11:27.249828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.008 [2024-07-23 15:11:27.249871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:32.008 
BaseBdev3 00:17:32.008 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:32.008 [2024-07-23 15:11:27.415320] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:32.008 [2024-07-23 15:11:27.417751] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:32.008 [2024-07-23 15:11:27.417863] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:32.008 [2024-07-23 15:11:27.418067] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:17:32.008 [2024-07-23 15:11:27.418086] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:32.008 [2024-07-23 15:11:27.418246] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:17:32.008 [2024-07-23 15:11:27.418595] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:17:32.008 [2024-07-23 15:11:27.418617] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:17:32.008 [2024-07-23 15:11:27.418762] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.008 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:32.008 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:32.008 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:32.008 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:32.008 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:32.008 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:32.008 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.008 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.008 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.008 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.324 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.324 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.324 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.324 "name": "raid_bdev1", 00:17:32.324 "uuid": "7a428bbd-2a9b-4002-878c-ed699d07195f", 00:17:32.324 "strip_size_kb": 64, 00:17:32.324 "state": "online", 00:17:32.324 "raid_level": "raid0", 00:17:32.324 "superblock": true, 00:17:32.324 "num_base_bdevs": 3, 00:17:32.324 "num_base_bdevs_discovered": 3, 00:17:32.325 "num_base_bdevs_operational": 3, 00:17:32.325 "base_bdevs_list": [ 00:17:32.325 { 00:17:32.325 "name": "BaseBdev1", 00:17:32.325 "uuid": "a335eed6-daa4-5c0a-a311-93c18a11a6c5", 00:17:32.325 "is_configured": true, 00:17:32.325 "data_offset": 2048, 00:17:32.325 "data_size": 63488 00:17:32.325 }, 00:17:32.325 
{ 00:17:32.325 "name": "BaseBdev2", 00:17:32.325 "uuid": "06912218-1289-5f59-a0f4-4d4568e2cbaa", 00:17:32.325 "is_configured": true, 00:17:32.325 "data_offset": 2048, 00:17:32.325 "data_size": 63488 00:17:32.325 }, 00:17:32.325 { 00:17:32.325 "name": "BaseBdev3", 00:17:32.325 "uuid": "8511164a-c5ff-5c9a-b1b0-2e716ee7d5f4", 00:17:32.325 "is_configured": true, 00:17:32.325 "data_offset": 2048, 00:17:32.325 "data_size": 63488 00:17:32.325 } 00:17:32.325 ] 00:17:32.325 }' 00:17:32.325 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.325 15:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.589 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:32.589 15:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:32.848 [2024-07-23 15:11:28.067923] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:17:33.784 15:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.784 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.042 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.042 "name": "raid_bdev1", 00:17:34.042 "uuid": "7a428bbd-2a9b-4002-878c-ed699d07195f", 00:17:34.042 "strip_size_kb": 64, 00:17:34.042 "state": "online", 00:17:34.042 "raid_level": "raid0", 00:17:34.042 "superblock": true, 00:17:34.042 "num_base_bdevs": 3, 00:17:34.042 "num_base_bdevs_discovered": 3, 00:17:34.042 "num_base_bdevs_operational": 3, 00:17:34.042 "base_bdevs_list": [ 00:17:34.042 { 
00:17:34.042 "name": "BaseBdev1", 00:17:34.042 "uuid": "a335eed6-daa4-5c0a-a311-93c18a11a6c5", 00:17:34.042 "is_configured": true, 00:17:34.042 "data_offset": 2048, 00:17:34.042 "data_size": 63488 00:17:34.042 }, 00:17:34.042 { 00:17:34.042 "name": "BaseBdev2", 00:17:34.042 "uuid": "06912218-1289-5f59-a0f4-4d4568e2cbaa", 00:17:34.042 "is_configured": true, 00:17:34.042 "data_offset": 2048, 00:17:34.042 "data_size": 63488 00:17:34.042 }, 00:17:34.042 { 00:17:34.042 "name": "BaseBdev3", 00:17:34.042 "uuid": "8511164a-c5ff-5c9a-b1b0-2e716ee7d5f4", 00:17:34.042 "is_configured": true, 00:17:34.042 "data_offset": 2048, 00:17:34.042 "data_size": 63488 00:17:34.042 } 00:17:34.042 ] 00:17:34.042 }' 00:17:34.042 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.043 15:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.610 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:34.610 [2024-07-23 15:11:29.929371] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.610 [2024-07-23 15:11:29.929428] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.610 [2024-07-23 15:11:29.931975] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.610 [2024-07-23 15:11:29.932026] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.610 [2024-07-23 15:11:29.932061] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.610 [2024-07-23 15:11:29.932076] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:17:34.610 0 00:17:34.610 15:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 93610 00:17:34.610 15:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 93610 ']' 00:17:34.610 15:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 93610 00:17:34.610 15:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:34.610 15:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:34.610 15:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93610 00:17:34.610 15:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:34.610 15:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:34.610 killing process with pid 93610 00:17:34.610 15:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93610' 00:17:34.610 15:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 93610 00:17:34.610 [2024-07-23 15:11:29.983514] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.610 15:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 93610 00:17:34.610 [2024-07-23 15:11:30.009878] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:34.868 15:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.byIRs0Ze84 00:17:34.868 15:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 
-- # grep raid_bdev1 00:17:34.868 15:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:34.868 15:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.54 00:17:34.868 15:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:34.868 15:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:34.868 15:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:34.868 15:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.54 != \0\.\0\0 ]] 00:17:34.868 00:17:34.868 real 0m5.142s 00:17:34.868 user 0m7.934s 00:17:34.868 sys 0m0.993s 00:17:34.868 15:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:34.868 15:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.868 ************************************ 00:17:34.868 END TEST raid_write_error_test 00:17:34.868 ************************************ 00:17:35.126 15:11:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:35.126 15:11:30 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:35.126 15:11:30 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:17:35.126 15:11:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:35.126 15:11:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.126 15:11:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:35.126 ************************************ 00:17:35.126 START TEST raid_state_function_test 00:17:35.126 ************************************ 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 false 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=93764 00:17:35.126 Process raid pid: 93764 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 93764' 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 93764 /var/tmp/spdk-raid.sock 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 93764 ']' 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:35.126 15:11:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:35.127 15:11:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.127 15:11:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.127 [2024-07-23 15:11:30.373203] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
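[Editor's sketch, not part of the captured log] The raid_write_error_test trace above exercises a fixed RPC sequence against the bdevperf app listening on /var/tmp/spdk-raid.sock. The condensed shell sketch below restates that sequence for readability; the socket path, bdev names, sizes, and flags are copied from the trace itself, and the loop form is an assumed simplification of the per-bdev steps the trace shows one at a time.
    # RPC helper bound to the same UNIX socket the trace uses
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $RPC bdev_malloc_create 32 512 -b ${b}_malloc         # backing malloc bdev
      $RPC bdev_error_create ${b}_malloc                    # error-injection wrapper, exposed as EE_${b}_malloc
      $RPC bdev_passthru_create -b EE_${b}_malloc -p ${b}   # passthru bdev the raid is built on
    done
    $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure   # fail writes on one base bdev
    # I/O is then driven with examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests,
    # the array state is re-read with bdev_raid_get_bdevs all, and the test tears down with
    # bdev_raid_delete raid_bdev1 before killing the bdevperf process.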
00:17:35.127 [2024-07-23 15:11:30.373326] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.127 [2024-07-23 15:11:30.517086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.385 [2024-07-23 15:11:30.565249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.385 [2024-07-23 15:11:30.611115] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:35.951 [2024-07-23 15:11:31.321468] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:35.951 [2024-07-23 15:11:31.321535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:35.951 [2024-07-23 15:11:31.321547] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.951 [2024-07-23 15:11:31.321561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.951 [2024-07-23 15:11:31.321572] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:35.951 [2024-07-23 15:11:31.321585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.951 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.208 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.208 "name": "Existed_Raid", 00:17:36.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.208 
"strip_size_kb": 64, 00:17:36.208 "state": "configuring", 00:17:36.208 "raid_level": "concat", 00:17:36.208 "superblock": false, 00:17:36.208 "num_base_bdevs": 3, 00:17:36.208 "num_base_bdevs_discovered": 0, 00:17:36.208 "num_base_bdevs_operational": 3, 00:17:36.208 "base_bdevs_list": [ 00:17:36.208 { 00:17:36.208 "name": "BaseBdev1", 00:17:36.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.208 "is_configured": false, 00:17:36.208 "data_offset": 0, 00:17:36.208 "data_size": 0 00:17:36.208 }, 00:17:36.208 { 00:17:36.208 "name": "BaseBdev2", 00:17:36.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.208 "is_configured": false, 00:17:36.208 "data_offset": 0, 00:17:36.208 "data_size": 0 00:17:36.208 }, 00:17:36.208 { 00:17:36.208 "name": "BaseBdev3", 00:17:36.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.208 "is_configured": false, 00:17:36.208 "data_offset": 0, 00:17:36.208 "data_size": 0 00:17:36.208 } 00:17:36.208 ] 00:17:36.208 }' 00:17:36.208 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.208 15:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.774 15:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:36.774 [2024-07-23 15:11:32.157508] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:36.774 [2024-07-23 15:11:32.157563] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:17:36.774 15:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:37.032 [2024-07-23 15:11:32.337591] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:37.032 [2024-07-23 15:11:32.337652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:37.032 [2024-07-23 15:11:32.337663] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.032 [2024-07-23 15:11:32.337676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.032 [2024-07-23 15:11:32.337684] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:37.032 [2024-07-23 15:11:32.337696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:37.032 15:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:37.290 [2024-07-23 15:11:32.511469] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.290 BaseBdev1 00:17:37.290 15:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:37.290 15:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:37.290 15:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:37.290 15:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:37.290 15:11:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:37.290 15:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:37.290 15:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:37.290 15:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:37.548 [ 00:17:37.548 { 00:17:37.548 "name": "BaseBdev1", 00:17:37.548 "aliases": [ 00:17:37.548 "e9cc64be-c111-4620-8d1f-0c2efd80b7e0" 00:17:37.548 ], 00:17:37.548 "product_name": "Malloc disk", 00:17:37.548 "block_size": 512, 00:17:37.548 "num_blocks": 65536, 00:17:37.548 "uuid": "e9cc64be-c111-4620-8d1f-0c2efd80b7e0", 00:17:37.548 "assigned_rate_limits": { 00:17:37.548 "rw_ios_per_sec": 0, 00:17:37.548 "rw_mbytes_per_sec": 0, 00:17:37.548 "r_mbytes_per_sec": 0, 00:17:37.548 "w_mbytes_per_sec": 0 00:17:37.548 }, 00:17:37.548 "claimed": true, 00:17:37.548 "claim_type": "exclusive_write", 00:17:37.548 "zoned": false, 00:17:37.548 "supported_io_types": { 00:17:37.548 "read": true, 00:17:37.548 "write": true, 00:17:37.548 "unmap": true, 00:17:37.548 "flush": true, 00:17:37.548 "reset": true, 00:17:37.548 "nvme_admin": false, 00:17:37.548 "nvme_io": false, 00:17:37.548 "nvme_io_md": false, 00:17:37.548 "write_zeroes": true, 00:17:37.548 "zcopy": true, 00:17:37.548 "get_zone_info": false, 00:17:37.548 "zone_management": false, 00:17:37.548 "zone_append": false, 00:17:37.548 "compare": false, 00:17:37.548 "compare_and_write": false, 00:17:37.548 "abort": true, 00:17:37.548 "seek_hole": false, 00:17:37.548 "seek_data": false, 00:17:37.548 "copy": true, 00:17:37.548 "nvme_iov_md": false 00:17:37.548 }, 00:17:37.548 "memory_domains": [ 00:17:37.548 { 00:17:37.548 "dma_device_id": "system", 00:17:37.548 "dma_device_type": 1 00:17:37.548 }, 00:17:37.548 { 00:17:37.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.548 "dma_device_type": 2 00:17:37.548 } 00:17:37.548 ], 00:17:37.548 "driver_specific": {} 00:17:37.548 } 00:17:37.548 ] 00:17:37.548 15:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:37.548 15:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:37.548 15:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:37.548 15:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:37.548 15:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:37.548 15:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:37.548 15:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:37.548 15:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.548 15:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.548 15:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.548 15:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.548 15:11:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.548 15:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.806 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.806 "name": "Existed_Raid", 00:17:37.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.806 "strip_size_kb": 64, 00:17:37.806 "state": "configuring", 00:17:37.806 "raid_level": "concat", 00:17:37.806 "superblock": false, 00:17:37.806 "num_base_bdevs": 3, 00:17:37.806 "num_base_bdevs_discovered": 1, 00:17:37.806 "num_base_bdevs_operational": 3, 00:17:37.806 "base_bdevs_list": [ 00:17:37.806 { 00:17:37.806 "name": "BaseBdev1", 00:17:37.806 "uuid": "e9cc64be-c111-4620-8d1f-0c2efd80b7e0", 00:17:37.806 "is_configured": true, 00:17:37.806 "data_offset": 0, 00:17:37.806 "data_size": 65536 00:17:37.806 }, 00:17:37.806 { 00:17:37.806 "name": "BaseBdev2", 00:17:37.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.806 "is_configured": false, 00:17:37.806 "data_offset": 0, 00:17:37.806 "data_size": 0 00:17:37.806 }, 00:17:37.806 { 00:17:37.806 "name": "BaseBdev3", 00:17:37.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.806 "is_configured": false, 00:17:37.806 "data_offset": 0, 00:17:37.806 "data_size": 0 00:17:37.806 } 00:17:37.806 ] 00:17:37.806 }' 00:17:37.806 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.806 15:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.063 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:38.320 [2024-07-23 15:11:33.723864] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:38.320 [2024-07-23 15:11:33.723929] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:17:38.320 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:38.578 [2024-07-23 15:11:33.959975] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.578 [2024-07-23 15:11:33.962207] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.578 [2024-07-23 15:11:33.962254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.578 [2024-07-23 15:11:33.962265] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:38.578 [2024-07-23 15:11:33.962279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:38.578 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:38.578 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:38.578 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:38.578 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:38.578 15:11:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:38.578 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:38.578 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:38.578 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:38.578 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.578 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.578 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.578 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.578 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.578 15:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.835 15:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.835 "name": "Existed_Raid", 00:17:38.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.835 "strip_size_kb": 64, 00:17:38.835 "state": "configuring", 00:17:38.835 "raid_level": "concat", 00:17:38.835 "superblock": false, 00:17:38.835 "num_base_bdevs": 3, 00:17:38.835 "num_base_bdevs_discovered": 1, 00:17:38.835 "num_base_bdevs_operational": 3, 00:17:38.835 "base_bdevs_list": [ 00:17:38.835 { 00:17:38.835 "name": "BaseBdev1", 00:17:38.835 "uuid": "e9cc64be-c111-4620-8d1f-0c2efd80b7e0", 00:17:38.835 "is_configured": true, 00:17:38.835 "data_offset": 0, 00:17:38.835 "data_size": 65536 00:17:38.835 }, 00:17:38.835 { 00:17:38.835 "name": "BaseBdev2", 00:17:38.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.835 "is_configured": false, 00:17:38.835 "data_offset": 0, 00:17:38.835 "data_size": 0 00:17:38.835 }, 00:17:38.835 { 00:17:38.835 "name": "BaseBdev3", 00:17:38.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.835 "is_configured": false, 00:17:38.835 "data_offset": 0, 00:17:38.835 "data_size": 0 00:17:38.835 } 00:17:38.835 ] 00:17:38.835 }' 00:17:38.835 15:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.835 15:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.416 15:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:39.416 [2024-07-23 15:11:34.817436] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.416 BaseBdev2 00:17:39.416 15:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:39.416 15:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:39.416 15:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:39.416 15:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:39.416 15:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:39.416 15:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:17:39.416 15:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:39.674 15:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:39.932 [ 00:17:39.932 { 00:17:39.932 "name": "BaseBdev2", 00:17:39.932 "aliases": [ 00:17:39.932 "7d00b050-f3ce-4f2a-bfba-d16c0eca3efa" 00:17:39.932 ], 00:17:39.932 "product_name": "Malloc disk", 00:17:39.932 "block_size": 512, 00:17:39.932 "num_blocks": 65536, 00:17:39.932 "uuid": "7d00b050-f3ce-4f2a-bfba-d16c0eca3efa", 00:17:39.932 "assigned_rate_limits": { 00:17:39.932 "rw_ios_per_sec": 0, 00:17:39.932 "rw_mbytes_per_sec": 0, 00:17:39.932 "r_mbytes_per_sec": 0, 00:17:39.932 "w_mbytes_per_sec": 0 00:17:39.932 }, 00:17:39.932 "claimed": true, 00:17:39.932 "claim_type": "exclusive_write", 00:17:39.932 "zoned": false, 00:17:39.932 "supported_io_types": { 00:17:39.932 "read": true, 00:17:39.932 "write": true, 00:17:39.932 "unmap": true, 00:17:39.932 "flush": true, 00:17:39.932 "reset": true, 00:17:39.932 "nvme_admin": false, 00:17:39.932 "nvme_io": false, 00:17:39.932 "nvme_io_md": false, 00:17:39.932 "write_zeroes": true, 00:17:39.932 "zcopy": true, 00:17:39.932 "get_zone_info": false, 00:17:39.932 "zone_management": false, 00:17:39.932 "zone_append": false, 00:17:39.932 "compare": false, 00:17:39.932 "compare_and_write": false, 00:17:39.932 "abort": true, 00:17:39.932 "seek_hole": false, 00:17:39.932 "seek_data": false, 00:17:39.932 "copy": true, 00:17:39.932 "nvme_iov_md": false 00:17:39.932 }, 00:17:39.932 "memory_domains": [ 00:17:39.932 { 00:17:39.932 "dma_device_id": "system", 00:17:39.932 "dma_device_type": 1 00:17:39.932 }, 00:17:39.932 { 00:17:39.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.932 "dma_device_type": 2 00:17:39.932 } 00:17:39.932 ], 00:17:39.932 "driver_specific": {} 00:17:39.932 } 00:17:39.932 ] 00:17:39.932 15:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:39.932 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:39.932 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:39.932 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:39.932 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:39.932 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:39.932 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:39.932 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:39.932 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:39.932 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:39.932 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:39.932 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:39.932 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:39.932 
15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.932 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.191 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.191 "name": "Existed_Raid", 00:17:40.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.191 "strip_size_kb": 64, 00:17:40.191 "state": "configuring", 00:17:40.191 "raid_level": "concat", 00:17:40.191 "superblock": false, 00:17:40.191 "num_base_bdevs": 3, 00:17:40.191 "num_base_bdevs_discovered": 2, 00:17:40.191 "num_base_bdevs_operational": 3, 00:17:40.191 "base_bdevs_list": [ 00:17:40.191 { 00:17:40.191 "name": "BaseBdev1", 00:17:40.191 "uuid": "e9cc64be-c111-4620-8d1f-0c2efd80b7e0", 00:17:40.191 "is_configured": true, 00:17:40.191 "data_offset": 0, 00:17:40.191 "data_size": 65536 00:17:40.191 }, 00:17:40.191 { 00:17:40.191 "name": "BaseBdev2", 00:17:40.191 "uuid": "7d00b050-f3ce-4f2a-bfba-d16c0eca3efa", 00:17:40.191 "is_configured": true, 00:17:40.191 "data_offset": 0, 00:17:40.191 "data_size": 65536 00:17:40.191 }, 00:17:40.191 { 00:17:40.191 "name": "BaseBdev3", 00:17:40.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.191 "is_configured": false, 00:17:40.191 "data_offset": 0, 00:17:40.191 "data_size": 0 00:17:40.191 } 00:17:40.191 ] 00:17:40.191 }' 00:17:40.191 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.191 15:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.450 15:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:40.708 [2024-07-23 15:11:36.021287] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:40.708 [2024-07-23 15:11:36.021342] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:17:40.708 [2024-07-23 15:11:36.021356] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:40.708 [2024-07-23 15:11:36.021474] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:17:40.708 [2024-07-23 15:11:36.021864] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:17:40.708 [2024-07-23 15:11:36.021888] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:17:40.708 [2024-07-23 15:11:36.022135] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.708 BaseBdev3 00:17:40.708 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:40.708 15:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:40.708 15:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:40.708 15:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:40.708 15:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:40.708 15:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:40.708 15:11:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:40.967 15:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:41.226 [ 00:17:41.226 { 00:17:41.226 "name": "BaseBdev3", 00:17:41.226 "aliases": [ 00:17:41.226 "10b795ee-1fee-4cf2-ab64-e4486183c4c6" 00:17:41.226 ], 00:17:41.226 "product_name": "Malloc disk", 00:17:41.226 "block_size": 512, 00:17:41.226 "num_blocks": 65536, 00:17:41.226 "uuid": "10b795ee-1fee-4cf2-ab64-e4486183c4c6", 00:17:41.226 "assigned_rate_limits": { 00:17:41.226 "rw_ios_per_sec": 0, 00:17:41.226 "rw_mbytes_per_sec": 0, 00:17:41.226 "r_mbytes_per_sec": 0, 00:17:41.226 "w_mbytes_per_sec": 0 00:17:41.226 }, 00:17:41.226 "claimed": true, 00:17:41.226 "claim_type": "exclusive_write", 00:17:41.226 "zoned": false, 00:17:41.226 "supported_io_types": { 00:17:41.226 "read": true, 00:17:41.226 "write": true, 00:17:41.226 "unmap": true, 00:17:41.226 "flush": true, 00:17:41.226 "reset": true, 00:17:41.226 "nvme_admin": false, 00:17:41.226 "nvme_io": false, 00:17:41.226 "nvme_io_md": false, 00:17:41.226 "write_zeroes": true, 00:17:41.226 "zcopy": true, 00:17:41.226 "get_zone_info": false, 00:17:41.226 "zone_management": false, 00:17:41.226 "zone_append": false, 00:17:41.226 "compare": false, 00:17:41.226 "compare_and_write": false, 00:17:41.226 "abort": true, 00:17:41.226 "seek_hole": false, 00:17:41.226 "seek_data": false, 00:17:41.226 "copy": true, 00:17:41.226 "nvme_iov_md": false 00:17:41.226 }, 00:17:41.226 "memory_domains": [ 00:17:41.226 { 00:17:41.226 "dma_device_id": "system", 00:17:41.226 "dma_device_type": 1 00:17:41.226 }, 00:17:41.226 { 00:17:41.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.226 "dma_device_type": 2 00:17:41.226 } 00:17:41.226 ], 00:17:41.226 "driver_specific": {} 00:17:41.226 } 00:17:41.226 ] 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.226 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.485 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.485 "name": "Existed_Raid", 00:17:41.485 "uuid": "8b15fc68-8b35-4b0b-8f52-d2aee8453691", 00:17:41.485 "strip_size_kb": 64, 00:17:41.486 "state": "online", 00:17:41.486 "raid_level": "concat", 00:17:41.486 "superblock": false, 00:17:41.486 "num_base_bdevs": 3, 00:17:41.486 "num_base_bdevs_discovered": 3, 00:17:41.486 "num_base_bdevs_operational": 3, 00:17:41.486 "base_bdevs_list": [ 00:17:41.486 { 00:17:41.486 "name": "BaseBdev1", 00:17:41.486 "uuid": "e9cc64be-c111-4620-8d1f-0c2efd80b7e0", 00:17:41.486 "is_configured": true, 00:17:41.486 "data_offset": 0, 00:17:41.486 "data_size": 65536 00:17:41.486 }, 00:17:41.486 { 00:17:41.486 "name": "BaseBdev2", 00:17:41.486 "uuid": "7d00b050-f3ce-4f2a-bfba-d16c0eca3efa", 00:17:41.486 "is_configured": true, 00:17:41.486 "data_offset": 0, 00:17:41.486 "data_size": 65536 00:17:41.486 }, 00:17:41.486 { 00:17:41.486 "name": "BaseBdev3", 00:17:41.486 "uuid": "10b795ee-1fee-4cf2-ab64-e4486183c4c6", 00:17:41.486 "is_configured": true, 00:17:41.486 "data_offset": 0, 00:17:41.486 "data_size": 65536 00:17:41.486 } 00:17:41.486 ] 00:17:41.486 }' 00:17:41.486 15:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.486 15:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.745 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:41.745 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:41.745 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:41.745 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:41.745 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:41.745 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:41.745 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:41.745 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:42.003 [2024-07-23 15:11:37.197981] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.003 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:42.003 "name": "Existed_Raid", 00:17:42.003 "aliases": [ 00:17:42.003 "8b15fc68-8b35-4b0b-8f52-d2aee8453691" 00:17:42.003 ], 00:17:42.003 "product_name": "Raid Volume", 00:17:42.003 "block_size": 512, 00:17:42.003 "num_blocks": 196608, 00:17:42.003 "uuid": "8b15fc68-8b35-4b0b-8f52-d2aee8453691", 00:17:42.003 "assigned_rate_limits": { 00:17:42.003 "rw_ios_per_sec": 0, 00:17:42.003 "rw_mbytes_per_sec": 0, 00:17:42.003 "r_mbytes_per_sec": 0, 00:17:42.003 "w_mbytes_per_sec": 0 00:17:42.004 }, 00:17:42.004 "claimed": false, 00:17:42.004 "zoned": false, 00:17:42.004 "supported_io_types": { 00:17:42.004 "read": true, 00:17:42.004 "write": true, 00:17:42.004 "unmap": true, 00:17:42.004 "flush": true, 
00:17:42.004 "reset": true, 00:17:42.004 "nvme_admin": false, 00:17:42.004 "nvme_io": false, 00:17:42.004 "nvme_io_md": false, 00:17:42.004 "write_zeroes": true, 00:17:42.004 "zcopy": false, 00:17:42.004 "get_zone_info": false, 00:17:42.004 "zone_management": false, 00:17:42.004 "zone_append": false, 00:17:42.004 "compare": false, 00:17:42.004 "compare_and_write": false, 00:17:42.004 "abort": false, 00:17:42.004 "seek_hole": false, 00:17:42.004 "seek_data": false, 00:17:42.004 "copy": false, 00:17:42.004 "nvme_iov_md": false 00:17:42.004 }, 00:17:42.004 "memory_domains": [ 00:17:42.004 { 00:17:42.004 "dma_device_id": "system", 00:17:42.004 "dma_device_type": 1 00:17:42.004 }, 00:17:42.004 { 00:17:42.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.004 "dma_device_type": 2 00:17:42.004 }, 00:17:42.004 { 00:17:42.004 "dma_device_id": "system", 00:17:42.004 "dma_device_type": 1 00:17:42.004 }, 00:17:42.004 { 00:17:42.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.004 "dma_device_type": 2 00:17:42.004 }, 00:17:42.004 { 00:17:42.004 "dma_device_id": "system", 00:17:42.004 "dma_device_type": 1 00:17:42.004 }, 00:17:42.004 { 00:17:42.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.004 "dma_device_type": 2 00:17:42.004 } 00:17:42.004 ], 00:17:42.004 "driver_specific": { 00:17:42.004 "raid": { 00:17:42.004 "uuid": "8b15fc68-8b35-4b0b-8f52-d2aee8453691", 00:17:42.004 "strip_size_kb": 64, 00:17:42.004 "state": "online", 00:17:42.004 "raid_level": "concat", 00:17:42.004 "superblock": false, 00:17:42.004 "num_base_bdevs": 3, 00:17:42.004 "num_base_bdevs_discovered": 3, 00:17:42.004 "num_base_bdevs_operational": 3, 00:17:42.004 "base_bdevs_list": [ 00:17:42.004 { 00:17:42.004 "name": "BaseBdev1", 00:17:42.004 "uuid": "e9cc64be-c111-4620-8d1f-0c2efd80b7e0", 00:17:42.004 "is_configured": true, 00:17:42.004 "data_offset": 0, 00:17:42.004 "data_size": 65536 00:17:42.004 }, 00:17:42.004 { 00:17:42.004 "name": "BaseBdev2", 00:17:42.004 "uuid": "7d00b050-f3ce-4f2a-bfba-d16c0eca3efa", 00:17:42.004 "is_configured": true, 00:17:42.004 "data_offset": 0, 00:17:42.004 "data_size": 65536 00:17:42.004 }, 00:17:42.004 { 00:17:42.004 "name": "BaseBdev3", 00:17:42.004 "uuid": "10b795ee-1fee-4cf2-ab64-e4486183c4c6", 00:17:42.004 "is_configured": true, 00:17:42.004 "data_offset": 0, 00:17:42.004 "data_size": 65536 00:17:42.004 } 00:17:42.004 ] 00:17:42.004 } 00:17:42.004 } 00:17:42.004 }' 00:17:42.004 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:42.004 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:42.004 BaseBdev2 00:17:42.004 BaseBdev3' 00:17:42.004 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:42.004 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:42.004 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:42.004 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:42.004 "name": "BaseBdev1", 00:17:42.004 "aliases": [ 00:17:42.004 "e9cc64be-c111-4620-8d1f-0c2efd80b7e0" 00:17:42.004 ], 00:17:42.004 "product_name": "Malloc disk", 00:17:42.004 "block_size": 512, 00:17:42.004 "num_blocks": 65536, 00:17:42.004 "uuid": "e9cc64be-c111-4620-8d1f-0c2efd80b7e0", 
00:17:42.004 "assigned_rate_limits": { 00:17:42.004 "rw_ios_per_sec": 0, 00:17:42.004 "rw_mbytes_per_sec": 0, 00:17:42.004 "r_mbytes_per_sec": 0, 00:17:42.004 "w_mbytes_per_sec": 0 00:17:42.004 }, 00:17:42.004 "claimed": true, 00:17:42.004 "claim_type": "exclusive_write", 00:17:42.004 "zoned": false, 00:17:42.004 "supported_io_types": { 00:17:42.004 "read": true, 00:17:42.004 "write": true, 00:17:42.004 "unmap": true, 00:17:42.004 "flush": true, 00:17:42.004 "reset": true, 00:17:42.004 "nvme_admin": false, 00:17:42.004 "nvme_io": false, 00:17:42.004 "nvme_io_md": false, 00:17:42.004 "write_zeroes": true, 00:17:42.004 "zcopy": true, 00:17:42.004 "get_zone_info": false, 00:17:42.004 "zone_management": false, 00:17:42.004 "zone_append": false, 00:17:42.004 "compare": false, 00:17:42.004 "compare_and_write": false, 00:17:42.004 "abort": true, 00:17:42.004 "seek_hole": false, 00:17:42.004 "seek_data": false, 00:17:42.004 "copy": true, 00:17:42.004 "nvme_iov_md": false 00:17:42.004 }, 00:17:42.004 "memory_domains": [ 00:17:42.004 { 00:17:42.004 "dma_device_id": "system", 00:17:42.004 "dma_device_type": 1 00:17:42.004 }, 00:17:42.004 { 00:17:42.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.004 "dma_device_type": 2 00:17:42.004 } 00:17:42.004 ], 00:17:42.004 "driver_specific": {} 00:17:42.004 }' 00:17:42.004 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.004 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.004 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:42.004 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.263 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.263 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:42.263 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.263 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.263 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:42.263 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.263 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.263 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:42.263 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:42.263 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:42.263 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:42.263 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:42.263 "name": "BaseBdev2", 00:17:42.263 "aliases": [ 00:17:42.263 "7d00b050-f3ce-4f2a-bfba-d16c0eca3efa" 00:17:42.263 ], 00:17:42.263 "product_name": "Malloc disk", 00:17:42.263 "block_size": 512, 00:17:42.263 "num_blocks": 65536, 00:17:42.263 "uuid": "7d00b050-f3ce-4f2a-bfba-d16c0eca3efa", 00:17:42.263 "assigned_rate_limits": { 00:17:42.263 "rw_ios_per_sec": 0, 00:17:42.263 "rw_mbytes_per_sec": 0, 00:17:42.263 "r_mbytes_per_sec": 0, 00:17:42.263 "w_mbytes_per_sec": 0 00:17:42.263 }, 
00:17:42.263 "claimed": true, 00:17:42.263 "claim_type": "exclusive_write", 00:17:42.263 "zoned": false, 00:17:42.263 "supported_io_types": { 00:17:42.263 "read": true, 00:17:42.263 "write": true, 00:17:42.263 "unmap": true, 00:17:42.263 "flush": true, 00:17:42.263 "reset": true, 00:17:42.263 "nvme_admin": false, 00:17:42.263 "nvme_io": false, 00:17:42.263 "nvme_io_md": false, 00:17:42.263 "write_zeroes": true, 00:17:42.263 "zcopy": true, 00:17:42.263 "get_zone_info": false, 00:17:42.263 "zone_management": false, 00:17:42.263 "zone_append": false, 00:17:42.263 "compare": false, 00:17:42.263 "compare_and_write": false, 00:17:42.263 "abort": true, 00:17:42.263 "seek_hole": false, 00:17:42.263 "seek_data": false, 00:17:42.263 "copy": true, 00:17:42.263 "nvme_iov_md": false 00:17:42.263 }, 00:17:42.263 "memory_domains": [ 00:17:42.263 { 00:17:42.263 "dma_device_id": "system", 00:17:42.263 "dma_device_type": 1 00:17:42.263 }, 00:17:42.263 { 00:17:42.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.263 "dma_device_type": 2 00:17:42.263 } 00:17:42.263 ], 00:17:42.263 "driver_specific": {} 00:17:42.263 }' 00:17:42.263 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.263 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.521 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:42.521 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.521 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.521 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:42.521 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.521 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.521 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:42.521 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.521 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.521 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:42.521 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:42.521 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:42.521 15:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:42.780 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:42.780 "name": "BaseBdev3", 00:17:42.780 "aliases": [ 00:17:42.780 "10b795ee-1fee-4cf2-ab64-e4486183c4c6" 00:17:42.780 ], 00:17:42.780 "product_name": "Malloc disk", 00:17:42.780 "block_size": 512, 00:17:42.780 "num_blocks": 65536, 00:17:42.780 "uuid": "10b795ee-1fee-4cf2-ab64-e4486183c4c6", 00:17:42.780 "assigned_rate_limits": { 00:17:42.780 "rw_ios_per_sec": 0, 00:17:42.780 "rw_mbytes_per_sec": 0, 00:17:42.780 "r_mbytes_per_sec": 0, 00:17:42.780 "w_mbytes_per_sec": 0 00:17:42.780 }, 00:17:42.780 "claimed": true, 00:17:42.780 "claim_type": "exclusive_write", 00:17:42.780 "zoned": false, 00:17:42.780 "supported_io_types": { 00:17:42.780 "read": true, 00:17:42.780 "write": true, 
00:17:42.780 "unmap": true, 00:17:42.780 "flush": true, 00:17:42.780 "reset": true, 00:17:42.780 "nvme_admin": false, 00:17:42.780 "nvme_io": false, 00:17:42.780 "nvme_io_md": false, 00:17:42.780 "write_zeroes": true, 00:17:42.780 "zcopy": true, 00:17:42.780 "get_zone_info": false, 00:17:42.780 "zone_management": false, 00:17:42.780 "zone_append": false, 00:17:42.780 "compare": false, 00:17:42.780 "compare_and_write": false, 00:17:42.780 "abort": true, 00:17:42.780 "seek_hole": false, 00:17:42.780 "seek_data": false, 00:17:42.780 "copy": true, 00:17:42.780 "nvme_iov_md": false 00:17:42.780 }, 00:17:42.780 "memory_domains": [ 00:17:42.780 { 00:17:42.780 "dma_device_id": "system", 00:17:42.780 "dma_device_type": 1 00:17:42.780 }, 00:17:42.780 { 00:17:42.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.780 "dma_device_type": 2 00:17:42.780 } 00:17:42.780 ], 00:17:42.780 "driver_specific": {} 00:17:42.780 }' 00:17:42.780 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.780 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.780 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:42.780 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.780 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.780 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:42.780 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.780 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.780 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:42.781 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.781 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.781 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:42.781 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:43.039 [2024-07-23 15:11:38.378015] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:43.039 [2024-07-23 15:11:38.378052] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.039 [2024-07-23 15:11:38.378116] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.039 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.299 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.299 "name": "Existed_Raid", 00:17:43.299 "uuid": "8b15fc68-8b35-4b0b-8f52-d2aee8453691", 00:17:43.299 "strip_size_kb": 64, 00:17:43.299 "state": "offline", 00:17:43.299 "raid_level": "concat", 00:17:43.299 "superblock": false, 00:17:43.299 "num_base_bdevs": 3, 00:17:43.299 "num_base_bdevs_discovered": 2, 00:17:43.299 "num_base_bdevs_operational": 2, 00:17:43.299 "base_bdevs_list": [ 00:17:43.299 { 00:17:43.299 "name": null, 00:17:43.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.299 "is_configured": false, 00:17:43.299 "data_offset": 0, 00:17:43.299 "data_size": 65536 00:17:43.299 }, 00:17:43.299 { 00:17:43.299 "name": "BaseBdev2", 00:17:43.299 "uuid": "7d00b050-f3ce-4f2a-bfba-d16c0eca3efa", 00:17:43.299 "is_configured": true, 00:17:43.299 "data_offset": 0, 00:17:43.299 "data_size": 65536 00:17:43.299 }, 00:17:43.299 { 00:17:43.299 "name": "BaseBdev3", 00:17:43.299 "uuid": "10b795ee-1fee-4cf2-ab64-e4486183c4c6", 00:17:43.299 "is_configured": true, 00:17:43.299 "data_offset": 0, 00:17:43.299 "data_size": 65536 00:17:43.299 } 00:17:43.299 ] 00:17:43.299 }' 00:17:43.299 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.299 15:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.558 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:43.558 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:43.558 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:43.558 15:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.125 15:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:44.125 15:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:44.125 15:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:44.125 [2024-07-23 15:11:39.482942] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:44.125 15:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:44.125 15:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:44.125 15:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.125 15:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:44.385 15:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:44.385 15:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:44.385 15:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:44.644 [2024-07-23 15:11:39.915544] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:44.644 [2024-07-23 15:11:39.915607] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:17:44.644 15:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:44.644 15:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:44.644 15:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.644 15:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:44.904 15:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:44.904 15:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:44.904 15:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:44.904 15:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:44.904 15:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:44.904 15:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:44.904 BaseBdev2 00:17:44.904 15:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:44.904 15:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:44.904 15:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:44.904 15:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:44.904 15:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:44.904 15:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:44.904 15:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.164 15:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
00:17:45.449 [ 00:17:45.449 { 00:17:45.449 "name": "BaseBdev2", 00:17:45.449 "aliases": [ 00:17:45.449 "77f3c983-a081-4ebb-9134-c1bd18f8bb91" 00:17:45.449 ], 00:17:45.449 "product_name": "Malloc disk", 00:17:45.449 "block_size": 512, 00:17:45.449 "num_blocks": 65536, 00:17:45.449 "uuid": "77f3c983-a081-4ebb-9134-c1bd18f8bb91", 00:17:45.449 "assigned_rate_limits": { 00:17:45.449 "rw_ios_per_sec": 0, 00:17:45.449 "rw_mbytes_per_sec": 0, 00:17:45.449 "r_mbytes_per_sec": 0, 00:17:45.449 "w_mbytes_per_sec": 0 00:17:45.449 }, 00:17:45.449 "claimed": false, 00:17:45.449 "zoned": false, 00:17:45.449 "supported_io_types": { 00:17:45.449 "read": true, 00:17:45.449 "write": true, 00:17:45.449 "unmap": true, 00:17:45.449 "flush": true, 00:17:45.449 "reset": true, 00:17:45.449 "nvme_admin": false, 00:17:45.449 "nvme_io": false, 00:17:45.449 "nvme_io_md": false, 00:17:45.449 "write_zeroes": true, 00:17:45.449 "zcopy": true, 00:17:45.449 "get_zone_info": false, 00:17:45.449 "zone_management": false, 00:17:45.449 "zone_append": false, 00:17:45.449 "compare": false, 00:17:45.449 "compare_and_write": false, 00:17:45.449 "abort": true, 00:17:45.449 "seek_hole": false, 00:17:45.449 "seek_data": false, 00:17:45.449 "copy": true, 00:17:45.449 "nvme_iov_md": false 00:17:45.449 }, 00:17:45.449 "memory_domains": [ 00:17:45.449 { 00:17:45.449 "dma_device_id": "system", 00:17:45.449 "dma_device_type": 1 00:17:45.449 }, 00:17:45.449 { 00:17:45.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.449 "dma_device_type": 2 00:17:45.449 } 00:17:45.449 ], 00:17:45.449 "driver_specific": {} 00:17:45.449 } 00:17:45.449 ] 00:17:45.449 15:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:45.449 15:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:45.449 15:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:45.449 15:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:45.449 BaseBdev3 00:17:45.449 15:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:45.449 15:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:45.449 15:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:45.449 15:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:45.449 15:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:45.449 15:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:45.449 15:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.736 15:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:45.995 [ 00:17:45.995 { 00:17:45.995 "name": "BaseBdev3", 00:17:45.995 "aliases": [ 00:17:45.995 "7ebab2a5-2769-4add-9f45-e4b78d4cc0f3" 00:17:45.995 ], 00:17:45.995 "product_name": "Malloc disk", 00:17:45.995 "block_size": 512, 00:17:45.995 "num_blocks": 65536, 00:17:45.995 "uuid": "7ebab2a5-2769-4add-9f45-e4b78d4cc0f3", 00:17:45.995 
"assigned_rate_limits": { 00:17:45.995 "rw_ios_per_sec": 0, 00:17:45.995 "rw_mbytes_per_sec": 0, 00:17:45.995 "r_mbytes_per_sec": 0, 00:17:45.995 "w_mbytes_per_sec": 0 00:17:45.995 }, 00:17:45.995 "claimed": false, 00:17:45.995 "zoned": false, 00:17:45.995 "supported_io_types": { 00:17:45.995 "read": true, 00:17:45.995 "write": true, 00:17:45.995 "unmap": true, 00:17:45.995 "flush": true, 00:17:45.995 "reset": true, 00:17:45.995 "nvme_admin": false, 00:17:45.995 "nvme_io": false, 00:17:45.995 "nvme_io_md": false, 00:17:45.995 "write_zeroes": true, 00:17:45.995 "zcopy": true, 00:17:45.995 "get_zone_info": false, 00:17:45.995 "zone_management": false, 00:17:45.995 "zone_append": false, 00:17:45.995 "compare": false, 00:17:45.995 "compare_and_write": false, 00:17:45.995 "abort": true, 00:17:45.995 "seek_hole": false, 00:17:45.995 "seek_data": false, 00:17:45.995 "copy": true, 00:17:45.995 "nvme_iov_md": false 00:17:45.995 }, 00:17:45.995 "memory_domains": [ 00:17:45.995 { 00:17:45.995 "dma_device_id": "system", 00:17:45.995 "dma_device_type": 1 00:17:45.995 }, 00:17:45.995 { 00:17:45.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.995 "dma_device_type": 2 00:17:45.995 } 00:17:45.995 ], 00:17:45.995 "driver_specific": {} 00:17:45.995 } 00:17:45.995 ] 00:17:45.995 15:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:45.995 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:45.995 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:45.995 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:45.995 [2024-07-23 15:11:41.347870] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.996 [2024-07-23 15:11:41.347945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.996 [2024-07-23 15:11:41.347985] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.996 [2024-07-23 15:11:41.350100] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:45.996 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:45.996 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:45.996 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:45.996 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:45.996 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:45.996 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:45.996 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:45.996 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:45.996 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:45.996 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:45.996 15:11:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.996 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.254 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.254 "name": "Existed_Raid", 00:17:46.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.254 "strip_size_kb": 64, 00:17:46.254 "state": "configuring", 00:17:46.254 "raid_level": "concat", 00:17:46.254 "superblock": false, 00:17:46.254 "num_base_bdevs": 3, 00:17:46.254 "num_base_bdevs_discovered": 2, 00:17:46.254 "num_base_bdevs_operational": 3, 00:17:46.254 "base_bdevs_list": [ 00:17:46.254 { 00:17:46.254 "name": "BaseBdev1", 00:17:46.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.254 "is_configured": false, 00:17:46.254 "data_offset": 0, 00:17:46.254 "data_size": 0 00:17:46.254 }, 00:17:46.254 { 00:17:46.254 "name": "BaseBdev2", 00:17:46.254 "uuid": "77f3c983-a081-4ebb-9134-c1bd18f8bb91", 00:17:46.254 "is_configured": true, 00:17:46.254 "data_offset": 0, 00:17:46.254 "data_size": 65536 00:17:46.254 }, 00:17:46.254 { 00:17:46.254 "name": "BaseBdev3", 00:17:46.254 "uuid": "7ebab2a5-2769-4add-9f45-e4b78d4cc0f3", 00:17:46.254 "is_configured": true, 00:17:46.255 "data_offset": 0, 00:17:46.255 "data_size": 65536 00:17:46.255 } 00:17:46.255 ] 00:17:46.255 }' 00:17:46.255 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.255 15:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.512 15:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:46.772 [2024-07-23 15:11:42.172060] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:46.772 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:46.772 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:46.772 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:46.772 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:46.772 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:46.772 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:46.772 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.772 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.772 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.772 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.772 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.772 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.340 15:11:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.340 "name": "Existed_Raid", 00:17:47.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.340 "strip_size_kb": 64, 00:17:47.340 "state": "configuring", 00:17:47.340 "raid_level": "concat", 00:17:47.340 "superblock": false, 00:17:47.340 "num_base_bdevs": 3, 00:17:47.340 "num_base_bdevs_discovered": 1, 00:17:47.340 "num_base_bdevs_operational": 3, 00:17:47.340 "base_bdevs_list": [ 00:17:47.340 { 00:17:47.340 "name": "BaseBdev1", 00:17:47.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.340 "is_configured": false, 00:17:47.340 "data_offset": 0, 00:17:47.340 "data_size": 0 00:17:47.340 }, 00:17:47.340 { 00:17:47.340 "name": null, 00:17:47.340 "uuid": "77f3c983-a081-4ebb-9134-c1bd18f8bb91", 00:17:47.340 "is_configured": false, 00:17:47.340 "data_offset": 0, 00:17:47.340 "data_size": 65536 00:17:47.341 }, 00:17:47.341 { 00:17:47.341 "name": "BaseBdev3", 00:17:47.341 "uuid": "7ebab2a5-2769-4add-9f45-e4b78d4cc0f3", 00:17:47.341 "is_configured": true, 00:17:47.341 "data_offset": 0, 00:17:47.341 "data_size": 65536 00:17:47.341 } 00:17:47.341 ] 00:17:47.341 }' 00:17:47.341 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.341 15:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.341 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.341 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:47.599 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:47.599 15:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:47.858 [2024-07-23 15:11:43.067736] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.858 BaseBdev1 00:17:47.858 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:47.858 15:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:47.858 15:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:47.858 15:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:47.858 15:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:47.858 15:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:47.858 15:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:47.858 15:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:48.115 [ 00:17:48.115 { 00:17:48.115 "name": "BaseBdev1", 00:17:48.115 "aliases": [ 00:17:48.115 "280c67bd-8276-4e0d-9b21-8ff9b2040577" 00:17:48.115 ], 00:17:48.115 "product_name": "Malloc disk", 00:17:48.115 "block_size": 512, 00:17:48.115 "num_blocks": 65536, 00:17:48.115 "uuid": "280c67bd-8276-4e0d-9b21-8ff9b2040577", 00:17:48.115 "assigned_rate_limits": { 00:17:48.115 
"rw_ios_per_sec": 0, 00:17:48.115 "rw_mbytes_per_sec": 0, 00:17:48.115 "r_mbytes_per_sec": 0, 00:17:48.115 "w_mbytes_per_sec": 0 00:17:48.115 }, 00:17:48.115 "claimed": true, 00:17:48.115 "claim_type": "exclusive_write", 00:17:48.115 "zoned": false, 00:17:48.115 "supported_io_types": { 00:17:48.115 "read": true, 00:17:48.115 "write": true, 00:17:48.115 "unmap": true, 00:17:48.115 "flush": true, 00:17:48.115 "reset": true, 00:17:48.115 "nvme_admin": false, 00:17:48.115 "nvme_io": false, 00:17:48.115 "nvme_io_md": false, 00:17:48.115 "write_zeroes": true, 00:17:48.115 "zcopy": true, 00:17:48.115 "get_zone_info": false, 00:17:48.115 "zone_management": false, 00:17:48.115 "zone_append": false, 00:17:48.115 "compare": false, 00:17:48.115 "compare_and_write": false, 00:17:48.115 "abort": true, 00:17:48.115 "seek_hole": false, 00:17:48.115 "seek_data": false, 00:17:48.115 "copy": true, 00:17:48.115 "nvme_iov_md": false 00:17:48.115 }, 00:17:48.115 "memory_domains": [ 00:17:48.115 { 00:17:48.115 "dma_device_id": "system", 00:17:48.115 "dma_device_type": 1 00:17:48.115 }, 00:17:48.115 { 00:17:48.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.115 "dma_device_type": 2 00:17:48.115 } 00:17:48.115 ], 00:17:48.115 "driver_specific": {} 00:17:48.115 } 00:17:48.115 ] 00:17:48.115 15:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:48.115 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:48.115 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:48.115 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:48.115 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:48.115 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:48.115 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:48.115 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.115 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.115 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.115 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.115 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.115 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.373 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.373 "name": "Existed_Raid", 00:17:48.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.373 "strip_size_kb": 64, 00:17:48.373 "state": "configuring", 00:17:48.373 "raid_level": "concat", 00:17:48.373 "superblock": false, 00:17:48.373 "num_base_bdevs": 3, 00:17:48.373 "num_base_bdevs_discovered": 2, 00:17:48.373 "num_base_bdevs_operational": 3, 00:17:48.373 "base_bdevs_list": [ 00:17:48.373 { 00:17:48.373 "name": "BaseBdev1", 00:17:48.373 "uuid": "280c67bd-8276-4e0d-9b21-8ff9b2040577", 00:17:48.373 "is_configured": true, 00:17:48.373 "data_offset": 0, 00:17:48.373 
"data_size": 65536 00:17:48.373 }, 00:17:48.373 { 00:17:48.373 "name": null, 00:17:48.373 "uuid": "77f3c983-a081-4ebb-9134-c1bd18f8bb91", 00:17:48.373 "is_configured": false, 00:17:48.373 "data_offset": 0, 00:17:48.373 "data_size": 65536 00:17:48.373 }, 00:17:48.373 { 00:17:48.373 "name": "BaseBdev3", 00:17:48.373 "uuid": "7ebab2a5-2769-4add-9f45-e4b78d4cc0f3", 00:17:48.373 "is_configured": true, 00:17:48.373 "data_offset": 0, 00:17:48.373 "data_size": 65536 00:17:48.373 } 00:17:48.373 ] 00:17:48.373 }' 00:17:48.373 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.373 15:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.631 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:48.631 15:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.889 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:48.889 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:49.147 [2024-07-23 15:11:44.364167] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.147 "name": "Existed_Raid", 00:17:49.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.147 "strip_size_kb": 64, 00:17:49.147 "state": "configuring", 00:17:49.147 "raid_level": "concat", 00:17:49.147 "superblock": false, 00:17:49.147 "num_base_bdevs": 3, 00:17:49.147 "num_base_bdevs_discovered": 1, 00:17:49.147 "num_base_bdevs_operational": 3, 00:17:49.147 "base_bdevs_list": [ 00:17:49.147 { 00:17:49.147 "name": "BaseBdev1", 00:17:49.147 "uuid": "280c67bd-8276-4e0d-9b21-8ff9b2040577", 00:17:49.147 "is_configured": 
true, 00:17:49.147 "data_offset": 0, 00:17:49.147 "data_size": 65536 00:17:49.147 }, 00:17:49.147 { 00:17:49.147 "name": null, 00:17:49.147 "uuid": "77f3c983-a081-4ebb-9134-c1bd18f8bb91", 00:17:49.147 "is_configured": false, 00:17:49.147 "data_offset": 0, 00:17:49.147 "data_size": 65536 00:17:49.147 }, 00:17:49.147 { 00:17:49.147 "name": null, 00:17:49.147 "uuid": "7ebab2a5-2769-4add-9f45-e4b78d4cc0f3", 00:17:49.147 "is_configured": false, 00:17:49.147 "data_offset": 0, 00:17:49.147 "data_size": 65536 00:17:49.147 } 00:17:49.147 ] 00:17:49.147 }' 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.147 15:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.712 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.713 15:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:49.713 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:49.713 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:49.971 [2024-07-23 15:11:45.328427] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:49.971 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:49.971 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:49.971 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:49.971 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:49.971 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:49.971 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:49.971 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.971 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.971 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.971 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.971 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.971 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.229 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.229 "name": "Existed_Raid", 00:17:50.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.229 "strip_size_kb": 64, 00:17:50.229 "state": "configuring", 00:17:50.229 "raid_level": "concat", 00:17:50.229 "superblock": false, 00:17:50.229 "num_base_bdevs": 3, 00:17:50.229 "num_base_bdevs_discovered": 2, 00:17:50.229 "num_base_bdevs_operational": 3, 00:17:50.229 "base_bdevs_list": [ 00:17:50.229 { 00:17:50.229 "name": "BaseBdev1", 00:17:50.229 
"uuid": "280c67bd-8276-4e0d-9b21-8ff9b2040577", 00:17:50.229 "is_configured": true, 00:17:50.229 "data_offset": 0, 00:17:50.229 "data_size": 65536 00:17:50.229 }, 00:17:50.229 { 00:17:50.229 "name": null, 00:17:50.229 "uuid": "77f3c983-a081-4ebb-9134-c1bd18f8bb91", 00:17:50.229 "is_configured": false, 00:17:50.229 "data_offset": 0, 00:17:50.229 "data_size": 65536 00:17:50.229 }, 00:17:50.229 { 00:17:50.229 "name": "BaseBdev3", 00:17:50.229 "uuid": "7ebab2a5-2769-4add-9f45-e4b78d4cc0f3", 00:17:50.229 "is_configured": true, 00:17:50.229 "data_offset": 0, 00:17:50.229 "data_size": 65536 00:17:50.229 } 00:17:50.229 ] 00:17:50.229 }' 00:17:50.229 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.229 15:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.488 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.488 15:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:50.748 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:50.748 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:51.007 [2024-07-23 15:11:46.216633] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:51.007 "name": "Existed_Raid", 00:17:51.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.007 "strip_size_kb": 64, 00:17:51.007 "state": "configuring", 00:17:51.007 "raid_level": "concat", 00:17:51.007 "superblock": false, 00:17:51.007 "num_base_bdevs": 3, 00:17:51.007 "num_base_bdevs_discovered": 1, 00:17:51.007 "num_base_bdevs_operational": 3, 00:17:51.007 "base_bdevs_list": [ 00:17:51.007 { 
00:17:51.007 "name": null, 00:17:51.007 "uuid": "280c67bd-8276-4e0d-9b21-8ff9b2040577", 00:17:51.007 "is_configured": false, 00:17:51.007 "data_offset": 0, 00:17:51.007 "data_size": 65536 00:17:51.007 }, 00:17:51.007 { 00:17:51.007 "name": null, 00:17:51.007 "uuid": "77f3c983-a081-4ebb-9134-c1bd18f8bb91", 00:17:51.007 "is_configured": false, 00:17:51.007 "data_offset": 0, 00:17:51.007 "data_size": 65536 00:17:51.007 }, 00:17:51.007 { 00:17:51.007 "name": "BaseBdev3", 00:17:51.007 "uuid": "7ebab2a5-2769-4add-9f45-e4b78d4cc0f3", 00:17:51.007 "is_configured": true, 00:17:51.007 "data_offset": 0, 00:17:51.007 "data_size": 65536 00:17:51.007 } 00:17:51.007 ] 00:17:51.007 }' 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:51.007 15:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.575 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.575 15:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:51.834 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:51.834 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:52.104 [2024-07-23 15:11:47.285395] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:52.104 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:52.104 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:52.104 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:52.104 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:52.104 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:52.104 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:52.104 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.104 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.104 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.104 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.104 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.104 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.389 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.389 "name": "Existed_Raid", 00:17:52.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.389 "strip_size_kb": 64, 00:17:52.389 "state": "configuring", 00:17:52.389 "raid_level": "concat", 00:17:52.389 "superblock": false, 00:17:52.389 "num_base_bdevs": 3, 00:17:52.389 "num_base_bdevs_discovered": 2, 00:17:52.389 
"num_base_bdevs_operational": 3, 00:17:52.389 "base_bdevs_list": [ 00:17:52.389 { 00:17:52.389 "name": null, 00:17:52.389 "uuid": "280c67bd-8276-4e0d-9b21-8ff9b2040577", 00:17:52.389 "is_configured": false, 00:17:52.389 "data_offset": 0, 00:17:52.389 "data_size": 65536 00:17:52.389 }, 00:17:52.389 { 00:17:52.389 "name": "BaseBdev2", 00:17:52.389 "uuid": "77f3c983-a081-4ebb-9134-c1bd18f8bb91", 00:17:52.389 "is_configured": true, 00:17:52.389 "data_offset": 0, 00:17:52.389 "data_size": 65536 00:17:52.389 }, 00:17:52.389 { 00:17:52.389 "name": "BaseBdev3", 00:17:52.389 "uuid": "7ebab2a5-2769-4add-9f45-e4b78d4cc0f3", 00:17:52.389 "is_configured": true, 00:17:52.389 "data_offset": 0, 00:17:52.389 "data_size": 65536 00:17:52.389 } 00:17:52.389 ] 00:17:52.389 }' 00:17:52.389 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.389 15:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.649 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:52.649 15:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.907 15:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:52.907 15:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.907 15:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:53.166 15:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 280c67bd-8276-4e0d-9b21-8ff9b2040577 00:17:53.426 [2024-07-23 15:11:48.629195] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:53.427 [2024-07-23 15:11:48.629246] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007880 00:17:53.427 [2024-07-23 15:11:48.629258] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:53.427 [2024-07-23 15:11:48.629344] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002460 00:17:53.427 [2024-07-23 15:11:48.629630] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007880 00:17:53.427 [2024-07-23 15:11:48.629642] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007880 00:17:53.427 [2024-07-23 15:11:48.629859] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.427 NewBaseBdev 00:17:53.427 15:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:53.427 15:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:53.427 15:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:53.427 15:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:53.427 15:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:53.427 15:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:53.427 
15:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:53.427 15:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:53.686 [ 00:17:53.686 { 00:17:53.686 "name": "NewBaseBdev", 00:17:53.686 "aliases": [ 00:17:53.686 "280c67bd-8276-4e0d-9b21-8ff9b2040577" 00:17:53.686 ], 00:17:53.686 "product_name": "Malloc disk", 00:17:53.686 "block_size": 512, 00:17:53.686 "num_blocks": 65536, 00:17:53.686 "uuid": "280c67bd-8276-4e0d-9b21-8ff9b2040577", 00:17:53.686 "assigned_rate_limits": { 00:17:53.686 "rw_ios_per_sec": 0, 00:17:53.686 "rw_mbytes_per_sec": 0, 00:17:53.686 "r_mbytes_per_sec": 0, 00:17:53.686 "w_mbytes_per_sec": 0 00:17:53.686 }, 00:17:53.686 "claimed": true, 00:17:53.686 "claim_type": "exclusive_write", 00:17:53.686 "zoned": false, 00:17:53.686 "supported_io_types": { 00:17:53.686 "read": true, 00:17:53.686 "write": true, 00:17:53.686 "unmap": true, 00:17:53.686 "flush": true, 00:17:53.686 "reset": true, 00:17:53.686 "nvme_admin": false, 00:17:53.686 "nvme_io": false, 00:17:53.686 "nvme_io_md": false, 00:17:53.686 "write_zeroes": true, 00:17:53.686 "zcopy": true, 00:17:53.686 "get_zone_info": false, 00:17:53.686 "zone_management": false, 00:17:53.686 "zone_append": false, 00:17:53.686 "compare": false, 00:17:53.686 "compare_and_write": false, 00:17:53.686 "abort": true, 00:17:53.686 "seek_hole": false, 00:17:53.686 "seek_data": false, 00:17:53.686 "copy": true, 00:17:53.686 "nvme_iov_md": false 00:17:53.686 }, 00:17:53.686 "memory_domains": [ 00:17:53.686 { 00:17:53.686 "dma_device_id": "system", 00:17:53.686 "dma_device_type": 1 00:17:53.686 }, 00:17:53.686 { 00:17:53.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.686 "dma_device_type": 2 00:17:53.686 } 00:17:53.686 ], 00:17:53.686 "driver_specific": {} 00:17:53.686 } 00:17:53.686 ] 00:17:53.686 15:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:53.686 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:53.686 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.686 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:53.686 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:53.686 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:53.686 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:53.686 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.686 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.686 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.686 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.686 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.686 15:11:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.945 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.945 "name": "Existed_Raid", 00:17:53.945 "uuid": "72b67b80-eac1-40b0-8f4d-2e8789e3b57a", 00:17:53.945 "strip_size_kb": 64, 00:17:53.945 "state": "online", 00:17:53.945 "raid_level": "concat", 00:17:53.945 "superblock": false, 00:17:53.945 "num_base_bdevs": 3, 00:17:53.945 "num_base_bdevs_discovered": 3, 00:17:53.945 "num_base_bdevs_operational": 3, 00:17:53.945 "base_bdevs_list": [ 00:17:53.945 { 00:17:53.945 "name": "NewBaseBdev", 00:17:53.945 "uuid": "280c67bd-8276-4e0d-9b21-8ff9b2040577", 00:17:53.945 "is_configured": true, 00:17:53.945 "data_offset": 0, 00:17:53.945 "data_size": 65536 00:17:53.945 }, 00:17:53.945 { 00:17:53.945 "name": "BaseBdev2", 00:17:53.945 "uuid": "77f3c983-a081-4ebb-9134-c1bd18f8bb91", 00:17:53.945 "is_configured": true, 00:17:53.945 "data_offset": 0, 00:17:53.945 "data_size": 65536 00:17:53.945 }, 00:17:53.945 { 00:17:53.945 "name": "BaseBdev3", 00:17:53.945 "uuid": "7ebab2a5-2769-4add-9f45-e4b78d4cc0f3", 00:17:53.945 "is_configured": true, 00:17:53.945 "data_offset": 0, 00:17:53.945 "data_size": 65536 00:17:53.945 } 00:17:53.945 ] 00:17:53.945 }' 00:17:53.945 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.945 15:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.204 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:54.204 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:54.204 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:54.204 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:54.204 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:54.204 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:54.204 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:54.204 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:54.464 [2024-07-23 15:11:49.769835] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.464 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:54.464 "name": "Existed_Raid", 00:17:54.464 "aliases": [ 00:17:54.464 "72b67b80-eac1-40b0-8f4d-2e8789e3b57a" 00:17:54.464 ], 00:17:54.464 "product_name": "Raid Volume", 00:17:54.464 "block_size": 512, 00:17:54.464 "num_blocks": 196608, 00:17:54.464 "uuid": "72b67b80-eac1-40b0-8f4d-2e8789e3b57a", 00:17:54.464 "assigned_rate_limits": { 00:17:54.464 "rw_ios_per_sec": 0, 00:17:54.464 "rw_mbytes_per_sec": 0, 00:17:54.464 "r_mbytes_per_sec": 0, 00:17:54.464 "w_mbytes_per_sec": 0 00:17:54.464 }, 00:17:54.464 "claimed": false, 00:17:54.464 "zoned": false, 00:17:54.464 "supported_io_types": { 00:17:54.464 "read": true, 00:17:54.464 "write": true, 00:17:54.464 "unmap": true, 00:17:54.464 "flush": true, 00:17:54.464 "reset": true, 00:17:54.464 "nvme_admin": false, 00:17:54.464 "nvme_io": false, 00:17:54.464 "nvme_io_md": false, 00:17:54.464 "write_zeroes": true, 00:17:54.464 
"zcopy": false, 00:17:54.464 "get_zone_info": false, 00:17:54.464 "zone_management": false, 00:17:54.464 "zone_append": false, 00:17:54.464 "compare": false, 00:17:54.464 "compare_and_write": false, 00:17:54.464 "abort": false, 00:17:54.464 "seek_hole": false, 00:17:54.464 "seek_data": false, 00:17:54.464 "copy": false, 00:17:54.464 "nvme_iov_md": false 00:17:54.464 }, 00:17:54.464 "memory_domains": [ 00:17:54.464 { 00:17:54.464 "dma_device_id": "system", 00:17:54.464 "dma_device_type": 1 00:17:54.464 }, 00:17:54.464 { 00:17:54.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.464 "dma_device_type": 2 00:17:54.464 }, 00:17:54.464 { 00:17:54.464 "dma_device_id": "system", 00:17:54.464 "dma_device_type": 1 00:17:54.464 }, 00:17:54.464 { 00:17:54.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.464 "dma_device_type": 2 00:17:54.464 }, 00:17:54.464 { 00:17:54.464 "dma_device_id": "system", 00:17:54.464 "dma_device_type": 1 00:17:54.464 }, 00:17:54.464 { 00:17:54.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.464 "dma_device_type": 2 00:17:54.464 } 00:17:54.464 ], 00:17:54.464 "driver_specific": { 00:17:54.464 "raid": { 00:17:54.464 "uuid": "72b67b80-eac1-40b0-8f4d-2e8789e3b57a", 00:17:54.464 "strip_size_kb": 64, 00:17:54.464 "state": "online", 00:17:54.464 "raid_level": "concat", 00:17:54.464 "superblock": false, 00:17:54.464 "num_base_bdevs": 3, 00:17:54.464 "num_base_bdevs_discovered": 3, 00:17:54.464 "num_base_bdevs_operational": 3, 00:17:54.464 "base_bdevs_list": [ 00:17:54.464 { 00:17:54.464 "name": "NewBaseBdev", 00:17:54.464 "uuid": "280c67bd-8276-4e0d-9b21-8ff9b2040577", 00:17:54.464 "is_configured": true, 00:17:54.464 "data_offset": 0, 00:17:54.464 "data_size": 65536 00:17:54.464 }, 00:17:54.464 { 00:17:54.464 "name": "BaseBdev2", 00:17:54.464 "uuid": "77f3c983-a081-4ebb-9134-c1bd18f8bb91", 00:17:54.464 "is_configured": true, 00:17:54.464 "data_offset": 0, 00:17:54.464 "data_size": 65536 00:17:54.464 }, 00:17:54.464 { 00:17:54.464 "name": "BaseBdev3", 00:17:54.464 "uuid": "7ebab2a5-2769-4add-9f45-e4b78d4cc0f3", 00:17:54.464 "is_configured": true, 00:17:54.464 "data_offset": 0, 00:17:54.464 "data_size": 65536 00:17:54.464 } 00:17:54.464 ] 00:17:54.464 } 00:17:54.464 } 00:17:54.464 }' 00:17:54.464 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:54.464 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:54.464 BaseBdev2 00:17:54.464 BaseBdev3' 00:17:54.464 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:54.464 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:54.464 15:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:54.722 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:54.722 "name": "NewBaseBdev", 00:17:54.722 "aliases": [ 00:17:54.722 "280c67bd-8276-4e0d-9b21-8ff9b2040577" 00:17:54.722 ], 00:17:54.722 "product_name": "Malloc disk", 00:17:54.722 "block_size": 512, 00:17:54.722 "num_blocks": 65536, 00:17:54.722 "uuid": "280c67bd-8276-4e0d-9b21-8ff9b2040577", 00:17:54.722 "assigned_rate_limits": { 00:17:54.722 "rw_ios_per_sec": 0, 00:17:54.722 "rw_mbytes_per_sec": 0, 00:17:54.722 "r_mbytes_per_sec": 0, 00:17:54.722 
"w_mbytes_per_sec": 0 00:17:54.722 }, 00:17:54.722 "claimed": true, 00:17:54.722 "claim_type": "exclusive_write", 00:17:54.722 "zoned": false, 00:17:54.722 "supported_io_types": { 00:17:54.722 "read": true, 00:17:54.722 "write": true, 00:17:54.722 "unmap": true, 00:17:54.722 "flush": true, 00:17:54.722 "reset": true, 00:17:54.722 "nvme_admin": false, 00:17:54.722 "nvme_io": false, 00:17:54.722 "nvme_io_md": false, 00:17:54.722 "write_zeroes": true, 00:17:54.722 "zcopy": true, 00:17:54.722 "get_zone_info": false, 00:17:54.722 "zone_management": false, 00:17:54.722 "zone_append": false, 00:17:54.722 "compare": false, 00:17:54.722 "compare_and_write": false, 00:17:54.722 "abort": true, 00:17:54.722 "seek_hole": false, 00:17:54.722 "seek_data": false, 00:17:54.722 "copy": true, 00:17:54.722 "nvme_iov_md": false 00:17:54.722 }, 00:17:54.722 "memory_domains": [ 00:17:54.722 { 00:17:54.722 "dma_device_id": "system", 00:17:54.722 "dma_device_type": 1 00:17:54.722 }, 00:17:54.722 { 00:17:54.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.722 "dma_device_type": 2 00:17:54.722 } 00:17:54.722 ], 00:17:54.722 "driver_specific": {} 00:17:54.722 }' 00:17:54.722 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.722 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.722 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:54.722 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.722 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.722 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:54.722 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.723 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.723 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:54.723 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.723 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.981 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:54.981 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:54.981 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:54.981 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:54.981 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:54.981 "name": "BaseBdev2", 00:17:54.981 "aliases": [ 00:17:54.981 "77f3c983-a081-4ebb-9134-c1bd18f8bb91" 00:17:54.981 ], 00:17:54.981 "product_name": "Malloc disk", 00:17:54.981 "block_size": 512, 00:17:54.981 "num_blocks": 65536, 00:17:54.981 "uuid": "77f3c983-a081-4ebb-9134-c1bd18f8bb91", 00:17:54.981 "assigned_rate_limits": { 00:17:54.981 "rw_ios_per_sec": 0, 00:17:54.981 "rw_mbytes_per_sec": 0, 00:17:54.981 "r_mbytes_per_sec": 0, 00:17:54.981 "w_mbytes_per_sec": 0 00:17:54.981 }, 00:17:54.981 "claimed": true, 00:17:54.981 "claim_type": "exclusive_write", 00:17:54.981 "zoned": false, 00:17:54.981 "supported_io_types": { 00:17:54.981 "read": 
true, 00:17:54.981 "write": true, 00:17:54.981 "unmap": true, 00:17:54.981 "flush": true, 00:17:54.981 "reset": true, 00:17:54.981 "nvme_admin": false, 00:17:54.981 "nvme_io": false, 00:17:54.981 "nvme_io_md": false, 00:17:54.981 "write_zeroes": true, 00:17:54.981 "zcopy": true, 00:17:54.981 "get_zone_info": false, 00:17:54.981 "zone_management": false, 00:17:54.981 "zone_append": false, 00:17:54.981 "compare": false, 00:17:54.981 "compare_and_write": false, 00:17:54.981 "abort": true, 00:17:54.981 "seek_hole": false, 00:17:54.981 "seek_data": false, 00:17:54.981 "copy": true, 00:17:54.981 "nvme_iov_md": false 00:17:54.981 }, 00:17:54.981 "memory_domains": [ 00:17:54.981 { 00:17:54.981 "dma_device_id": "system", 00:17:54.981 "dma_device_type": 1 00:17:54.981 }, 00:17:54.981 { 00:17:54.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.981 "dma_device_type": 2 00:17:54.981 } 00:17:54.981 ], 00:17:54.981 "driver_specific": {} 00:17:54.981 }' 00:17:54.981 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:55.241 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:55.241 "name": "BaseBdev3", 00:17:55.241 "aliases": [ 00:17:55.241 "7ebab2a5-2769-4add-9f45-e4b78d4cc0f3" 00:17:55.241 ], 00:17:55.241 "product_name": "Malloc disk", 00:17:55.241 "block_size": 512, 00:17:55.241 "num_blocks": 65536, 00:17:55.241 "uuid": "7ebab2a5-2769-4add-9f45-e4b78d4cc0f3", 00:17:55.241 "assigned_rate_limits": { 00:17:55.241 "rw_ios_per_sec": 0, 00:17:55.241 "rw_mbytes_per_sec": 0, 00:17:55.242 "r_mbytes_per_sec": 0, 00:17:55.242 "w_mbytes_per_sec": 0 00:17:55.242 }, 00:17:55.242 "claimed": true, 00:17:55.242 "claim_type": "exclusive_write", 00:17:55.242 "zoned": false, 00:17:55.242 "supported_io_types": { 00:17:55.242 "read": true, 00:17:55.242 "write": true, 00:17:55.242 "unmap": true, 00:17:55.242 "flush": true, 00:17:55.242 "reset": true, 00:17:55.242 "nvme_admin": false, 00:17:55.242 "nvme_io": false, 00:17:55.242 
"nvme_io_md": false, 00:17:55.242 "write_zeroes": true, 00:17:55.242 "zcopy": true, 00:17:55.242 "get_zone_info": false, 00:17:55.242 "zone_management": false, 00:17:55.242 "zone_append": false, 00:17:55.242 "compare": false, 00:17:55.242 "compare_and_write": false, 00:17:55.242 "abort": true, 00:17:55.242 "seek_hole": false, 00:17:55.242 "seek_data": false, 00:17:55.242 "copy": true, 00:17:55.242 "nvme_iov_md": false 00:17:55.242 }, 00:17:55.242 "memory_domains": [ 00:17:55.242 { 00:17:55.242 "dma_device_id": "system", 00:17:55.242 "dma_device_type": 1 00:17:55.242 }, 00:17:55.242 { 00:17:55.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.242 "dma_device_type": 2 00:17:55.242 } 00:17:55.242 ], 00:17:55.242 "driver_specific": {} 00:17:55.242 }' 00:17:55.500 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:55.500 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:55.500 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:55.500 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:55.500 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:55.500 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:55.500 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.500 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.500 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:55.500 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:55.500 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:55.500 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:55.500 15:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:55.760 [2024-07-23 15:11:50.985837] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.760 [2024-07-23 15:11:50.985877] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.760 [2024-07-23 15:11:50.985961] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.760 [2024-07-23 15:11:50.986018] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.760 [2024-07-23 15:11:50.986034] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007880 name Existed_Raid, state offline 00:17:55.760 15:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 93764 00:17:55.760 15:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 93764 ']' 00:17:55.760 15:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 93764 00:17:55.760 15:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:17:55.760 15:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.760 15:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93764 00:17:55.760 15:11:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:55.760 15:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:55.760 killing process with pid 93764 00:17:55.760 15:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93764' 00:17:55.760 15:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 93764 00:17:55.760 [2024-07-23 15:11:51.040256] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:55.760 15:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 93764 00:17:55.760 [2024-07-23 15:11:51.076075] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:56.018 ************************************ 00:17:56.018 END TEST raid_state_function_test 00:17:56.018 ************************************ 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:56.018 00:17:56.018 real 0m20.998s 00:17:56.018 user 0m36.702s 00:17:56.018 sys 0m4.533s 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.018 15:11:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:56.018 15:11:51 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:17:56.018 15:11:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:56.018 15:11:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:56.018 15:11:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.018 ************************************ 00:17:56.018 START TEST raid_state_function_test_sb 00:17:56.018 ************************************ 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:56.018 15:11:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:56.018 Process raid pid: 94623 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=94623 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 94623' 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 94623 /var/tmp/spdk-raid.sock 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 94623 ']' 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.018 15:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.018 [2024-07-23 15:11:51.445887] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:17:56.018 [2024-07-23 15:11:51.446071] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.277 [2024-07-23 15:11:51.598651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.277 [2024-07-23 15:11:51.645813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.277 [2024-07-23 15:11:51.691409] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:57.215 [2024-07-23 15:11:52.441577] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:57.215 [2024-07-23 15:11:52.441868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:57.215 [2024-07-23 15:11:52.441895] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.215 [2024-07-23 15:11:52.441910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.215 [2024-07-23 15:11:52.441924] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:57.215 [2024-07-23 15:11:52.441937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.215 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.474 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.474 "name": "Existed_Raid", 00:17:57.474 "uuid": 
"09edfaa8-6ce7-4c23-81f3-c6a794401547", 00:17:57.474 "strip_size_kb": 64, 00:17:57.474 "state": "configuring", 00:17:57.474 "raid_level": "concat", 00:17:57.474 "superblock": true, 00:17:57.474 "num_base_bdevs": 3, 00:17:57.474 "num_base_bdevs_discovered": 0, 00:17:57.474 "num_base_bdevs_operational": 3, 00:17:57.474 "base_bdevs_list": [ 00:17:57.474 { 00:17:57.474 "name": "BaseBdev1", 00:17:57.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.474 "is_configured": false, 00:17:57.474 "data_offset": 0, 00:17:57.474 "data_size": 0 00:17:57.474 }, 00:17:57.474 { 00:17:57.474 "name": "BaseBdev2", 00:17:57.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.474 "is_configured": false, 00:17:57.474 "data_offset": 0, 00:17:57.474 "data_size": 0 00:17:57.474 }, 00:17:57.474 { 00:17:57.474 "name": "BaseBdev3", 00:17:57.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.474 "is_configured": false, 00:17:57.474 "data_offset": 0, 00:17:57.474 "data_size": 0 00:17:57.474 } 00:17:57.474 ] 00:17:57.474 }' 00:17:57.474 15:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.474 15:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.733 15:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:57.991 [2024-07-23 15:11:53.201586] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:57.991 [2024-07-23 15:11:53.201849] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:17:57.991 15:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:58.249 [2024-07-23 15:11:53.453700] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:58.249 [2024-07-23 15:11:53.453964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:58.249 [2024-07-23 15:11:53.453987] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:58.249 [2024-07-23 15:11:53.454002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:58.249 [2024-07-23 15:11:53.454010] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:58.249 [2024-07-23 15:11:53.454022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:58.249 15:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:58.249 [2024-07-23 15:11:53.639513] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:58.249 BaseBdev1 00:17:58.249 15:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:58.249 15:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:58.249 15:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:58.249 15:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:17:58.249 15:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:58.249 15:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:58.249 15:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:58.507 15:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:58.766 [ 00:17:58.766 { 00:17:58.766 "name": "BaseBdev1", 00:17:58.766 "aliases": [ 00:17:58.766 "cd828755-7e09-4047-b41d-aeffc2b0b463" 00:17:58.766 ], 00:17:58.766 "product_name": "Malloc disk", 00:17:58.766 "block_size": 512, 00:17:58.766 "num_blocks": 65536, 00:17:58.766 "uuid": "cd828755-7e09-4047-b41d-aeffc2b0b463", 00:17:58.766 "assigned_rate_limits": { 00:17:58.766 "rw_ios_per_sec": 0, 00:17:58.766 "rw_mbytes_per_sec": 0, 00:17:58.766 "r_mbytes_per_sec": 0, 00:17:58.766 "w_mbytes_per_sec": 0 00:17:58.766 }, 00:17:58.766 "claimed": true, 00:17:58.766 "claim_type": "exclusive_write", 00:17:58.766 "zoned": false, 00:17:58.766 "supported_io_types": { 00:17:58.766 "read": true, 00:17:58.766 "write": true, 00:17:58.766 "unmap": true, 00:17:58.766 "flush": true, 00:17:58.766 "reset": true, 00:17:58.766 "nvme_admin": false, 00:17:58.766 "nvme_io": false, 00:17:58.766 "nvme_io_md": false, 00:17:58.766 "write_zeroes": true, 00:17:58.766 "zcopy": true, 00:17:58.766 "get_zone_info": false, 00:17:58.766 "zone_management": false, 00:17:58.766 "zone_append": false, 00:17:58.766 "compare": false, 00:17:58.766 "compare_and_write": false, 00:17:58.766 "abort": true, 00:17:58.766 "seek_hole": false, 00:17:58.766 "seek_data": false, 00:17:58.766 "copy": true, 00:17:58.766 "nvme_iov_md": false 00:17:58.766 }, 00:17:58.766 "memory_domains": [ 00:17:58.766 { 00:17:58.766 "dma_device_id": "system", 00:17:58.766 "dma_device_type": 1 00:17:58.766 }, 00:17:58.766 { 00:17:58.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.766 "dma_device_type": 2 00:17:58.766 } 00:17:58.766 ], 00:17:58.766 "driver_specific": {} 00:17:58.766 } 00:17:58.766 ] 00:17:58.766 15:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:58.766 15:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:58.766 15:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:58.766 15:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:58.766 15:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:58.766 15:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:58.766 15:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:58.766 15:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.766 15:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.766 15:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.766 15:11:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.766 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.766 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.032 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.032 "name": "Existed_Raid", 00:17:59.032 "uuid": "7e58efd2-36a0-4aec-9aaf-588909deb24e", 00:17:59.032 "strip_size_kb": 64, 00:17:59.032 "state": "configuring", 00:17:59.032 "raid_level": "concat", 00:17:59.032 "superblock": true, 00:17:59.032 "num_base_bdevs": 3, 00:17:59.032 "num_base_bdevs_discovered": 1, 00:17:59.032 "num_base_bdevs_operational": 3, 00:17:59.032 "base_bdevs_list": [ 00:17:59.032 { 00:17:59.032 "name": "BaseBdev1", 00:17:59.032 "uuid": "cd828755-7e09-4047-b41d-aeffc2b0b463", 00:17:59.032 "is_configured": true, 00:17:59.032 "data_offset": 2048, 00:17:59.032 "data_size": 63488 00:17:59.032 }, 00:17:59.032 { 00:17:59.032 "name": "BaseBdev2", 00:17:59.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.032 "is_configured": false, 00:17:59.032 "data_offset": 0, 00:17:59.032 "data_size": 0 00:17:59.032 }, 00:17:59.032 { 00:17:59.032 "name": "BaseBdev3", 00:17:59.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.032 "is_configured": false, 00:17:59.032 "data_offset": 0, 00:17:59.032 "data_size": 0 00:17:59.032 } 00:17:59.032 ] 00:17:59.032 }' 00:17:59.032 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.032 15:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.332 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:59.332 [2024-07-23 15:11:54.755865] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:59.332 [2024-07-23 15:11:54.756137] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:17:59.591 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:59.591 [2024-07-23 15:11:54.939971] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:59.591 [2024-07-23 15:11:54.942306] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:59.591 [2024-07-23 15:11:54.942465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:59.591 [2024-07-23 15:11:54.942487] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:59.591 [2024-07-23 15:11:54.942501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:59.591 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:59.591 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:59.591 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:59.591 15:11:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:59.591 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:59.591 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:59.591 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:59.591 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:59.591 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:59.591 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:59.591 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:59.591 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:59.591 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.591 15:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.850 15:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.850 "name": "Existed_Raid", 00:17:59.850 "uuid": "3bad9e3b-f8ac-4eb4-94a2-ae028ae88ddf", 00:17:59.850 "strip_size_kb": 64, 00:17:59.850 "state": "configuring", 00:17:59.850 "raid_level": "concat", 00:17:59.850 "superblock": true, 00:17:59.850 "num_base_bdevs": 3, 00:17:59.850 "num_base_bdevs_discovered": 1, 00:17:59.850 "num_base_bdevs_operational": 3, 00:17:59.850 "base_bdevs_list": [ 00:17:59.850 { 00:17:59.850 "name": "BaseBdev1", 00:17:59.850 "uuid": "cd828755-7e09-4047-b41d-aeffc2b0b463", 00:17:59.850 "is_configured": true, 00:17:59.850 "data_offset": 2048, 00:17:59.850 "data_size": 63488 00:17:59.850 }, 00:17:59.850 { 00:17:59.850 "name": "BaseBdev2", 00:17:59.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.850 "is_configured": false, 00:17:59.850 "data_offset": 0, 00:17:59.850 "data_size": 0 00:17:59.850 }, 00:17:59.850 { 00:17:59.850 "name": "BaseBdev3", 00:17:59.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.850 "is_configured": false, 00:17:59.850 "data_offset": 0, 00:17:59.850 "data_size": 0 00:17:59.850 } 00:17:59.850 ] 00:17:59.850 }' 00:17:59.850 15:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.850 15:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.419 15:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:00.419 [2024-07-23 15:11:55.742488] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:00.419 BaseBdev2 00:18:00.419 15:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:00.419 15:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:00.419 15:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:00.419 15:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- 
# local i 00:18:00.419 15:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:00.419 15:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:00.419 15:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:00.679 15:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:00.679 [ 00:18:00.679 { 00:18:00.679 "name": "BaseBdev2", 00:18:00.679 "aliases": [ 00:18:00.679 "ac5e927b-0f67-4cbf-b21b-14a76370f204" 00:18:00.679 ], 00:18:00.679 "product_name": "Malloc disk", 00:18:00.679 "block_size": 512, 00:18:00.679 "num_blocks": 65536, 00:18:00.679 "uuid": "ac5e927b-0f67-4cbf-b21b-14a76370f204", 00:18:00.679 "assigned_rate_limits": { 00:18:00.679 "rw_ios_per_sec": 0, 00:18:00.679 "rw_mbytes_per_sec": 0, 00:18:00.679 "r_mbytes_per_sec": 0, 00:18:00.679 "w_mbytes_per_sec": 0 00:18:00.679 }, 00:18:00.679 "claimed": true, 00:18:00.679 "claim_type": "exclusive_write", 00:18:00.679 "zoned": false, 00:18:00.679 "supported_io_types": { 00:18:00.679 "read": true, 00:18:00.679 "write": true, 00:18:00.679 "unmap": true, 00:18:00.679 "flush": true, 00:18:00.679 "reset": true, 00:18:00.679 "nvme_admin": false, 00:18:00.679 "nvme_io": false, 00:18:00.679 "nvme_io_md": false, 00:18:00.679 "write_zeroes": true, 00:18:00.679 "zcopy": true, 00:18:00.679 "get_zone_info": false, 00:18:00.679 "zone_management": false, 00:18:00.679 "zone_append": false, 00:18:00.679 "compare": false, 00:18:00.679 "compare_and_write": false, 00:18:00.680 "abort": true, 00:18:00.680 "seek_hole": false, 00:18:00.680 "seek_data": false, 00:18:00.680 "copy": true, 00:18:00.680 "nvme_iov_md": false 00:18:00.680 }, 00:18:00.680 "memory_domains": [ 00:18:00.680 { 00:18:00.680 "dma_device_id": "system", 00:18:00.680 "dma_device_type": 1 00:18:00.680 }, 00:18:00.680 { 00:18:00.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.680 "dma_device_type": 2 00:18:00.680 } 00:18:00.680 ], 00:18:00.680 "driver_specific": {} 00:18:00.680 } 00:18:00.680 ] 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.938 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.938 "name": "Existed_Raid", 00:18:00.938 "uuid": "3bad9e3b-f8ac-4eb4-94a2-ae028ae88ddf", 00:18:00.938 "strip_size_kb": 64, 00:18:00.938 "state": "configuring", 00:18:00.938 "raid_level": "concat", 00:18:00.938 "superblock": true, 00:18:00.938 "num_base_bdevs": 3, 00:18:00.938 "num_base_bdevs_discovered": 2, 00:18:00.938 "num_base_bdevs_operational": 3, 00:18:00.938 "base_bdevs_list": [ 00:18:00.938 { 00:18:00.938 "name": "BaseBdev1", 00:18:00.938 "uuid": "cd828755-7e09-4047-b41d-aeffc2b0b463", 00:18:00.938 "is_configured": true, 00:18:00.939 "data_offset": 2048, 00:18:00.939 "data_size": 63488 00:18:00.939 }, 00:18:00.939 { 00:18:00.939 "name": "BaseBdev2", 00:18:00.939 "uuid": "ac5e927b-0f67-4cbf-b21b-14a76370f204", 00:18:00.939 "is_configured": true, 00:18:00.939 "data_offset": 2048, 00:18:00.939 "data_size": 63488 00:18:00.939 }, 00:18:00.939 { 00:18:00.939 "name": "BaseBdev3", 00:18:00.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.939 "is_configured": false, 00:18:00.939 "data_offset": 0, 00:18:00.939 "data_size": 0 00:18:00.939 } 00:18:00.939 ] 00:18:00.939 }' 00:18:00.939 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.939 15:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.197 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:01.455 [2024-07-23 15:11:56.738366] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.455 [2024-07-23 15:11:56.738578] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:18:01.455 [2024-07-23 15:11:56.738598] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:01.455 [2024-07-23 15:11:56.738715] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:18:01.455 [2024-07-23 15:11:56.739066] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:18:01.455 [2024-07-23 15:11:56.739081] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:18:01.455 BaseBdev3 00:18:01.455 [2024-07-23 15:11:56.739197] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.455 15:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:01.455 15:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:01.455 15:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:01.455 15:11:56 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@899 -- # local i 00:18:01.455 15:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:01.455 15:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:01.455 15:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:01.714 15:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:01.714 [ 00:18:01.714 { 00:18:01.714 "name": "BaseBdev3", 00:18:01.714 "aliases": [ 00:18:01.714 "fab3a0fc-9202-4a57-b5d3-f85fad77de8d" 00:18:01.714 ], 00:18:01.714 "product_name": "Malloc disk", 00:18:01.714 "block_size": 512, 00:18:01.714 "num_blocks": 65536, 00:18:01.714 "uuid": "fab3a0fc-9202-4a57-b5d3-f85fad77de8d", 00:18:01.714 "assigned_rate_limits": { 00:18:01.714 "rw_ios_per_sec": 0, 00:18:01.714 "rw_mbytes_per_sec": 0, 00:18:01.714 "r_mbytes_per_sec": 0, 00:18:01.714 "w_mbytes_per_sec": 0 00:18:01.714 }, 00:18:01.714 "claimed": true, 00:18:01.714 "claim_type": "exclusive_write", 00:18:01.714 "zoned": false, 00:18:01.714 "supported_io_types": { 00:18:01.714 "read": true, 00:18:01.714 "write": true, 00:18:01.714 "unmap": true, 00:18:01.714 "flush": true, 00:18:01.714 "reset": true, 00:18:01.714 "nvme_admin": false, 00:18:01.714 "nvme_io": false, 00:18:01.714 "nvme_io_md": false, 00:18:01.714 "write_zeroes": true, 00:18:01.714 "zcopy": true, 00:18:01.714 "get_zone_info": false, 00:18:01.714 "zone_management": false, 00:18:01.714 "zone_append": false, 00:18:01.714 "compare": false, 00:18:01.714 "compare_and_write": false, 00:18:01.714 "abort": true, 00:18:01.714 "seek_hole": false, 00:18:01.714 "seek_data": false, 00:18:01.714 "copy": true, 00:18:01.714 "nvme_iov_md": false 00:18:01.714 }, 00:18:01.714 "memory_domains": [ 00:18:01.714 { 00:18:01.714 "dma_device_id": "system", 00:18:01.714 "dma_device_type": 1 00:18:01.714 }, 00:18:01.714 { 00:18:01.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.714 "dma_device_type": 2 00:18:01.714 } 00:18:01.714 ], 00:18:01.714 "driver_specific": {} 00:18:01.714 } 00:18:01.714 ] 00:18:01.714 15:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:01.714 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:01.714 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:01.714 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:01.714 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:01.714 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:01.714 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:01.714 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:01.714 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:01.714 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.714 15:11:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.714 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.714 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.714 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.714 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.972 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.972 "name": "Existed_Raid", 00:18:01.973 "uuid": "3bad9e3b-f8ac-4eb4-94a2-ae028ae88ddf", 00:18:01.973 "strip_size_kb": 64, 00:18:01.973 "state": "online", 00:18:01.973 "raid_level": "concat", 00:18:01.973 "superblock": true, 00:18:01.973 "num_base_bdevs": 3, 00:18:01.973 "num_base_bdevs_discovered": 3, 00:18:01.973 "num_base_bdevs_operational": 3, 00:18:01.973 "base_bdevs_list": [ 00:18:01.973 { 00:18:01.973 "name": "BaseBdev1", 00:18:01.973 "uuid": "cd828755-7e09-4047-b41d-aeffc2b0b463", 00:18:01.973 "is_configured": true, 00:18:01.973 "data_offset": 2048, 00:18:01.973 "data_size": 63488 00:18:01.973 }, 00:18:01.973 { 00:18:01.973 "name": "BaseBdev2", 00:18:01.973 "uuid": "ac5e927b-0f67-4cbf-b21b-14a76370f204", 00:18:01.973 "is_configured": true, 00:18:01.973 "data_offset": 2048, 00:18:01.973 "data_size": 63488 00:18:01.973 }, 00:18:01.973 { 00:18:01.973 "name": "BaseBdev3", 00:18:01.973 "uuid": "fab3a0fc-9202-4a57-b5d3-f85fad77de8d", 00:18:01.973 "is_configured": true, 00:18:01.973 "data_offset": 2048, 00:18:01.973 "data_size": 63488 00:18:01.973 } 00:18:01.973 ] 00:18:01.973 }' 00:18:01.973 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.973 15:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.231 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:02.231 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:02.231 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:02.231 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:02.231 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:02.231 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:02.231 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:02.231 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:02.490 [2024-07-23 15:11:57.823007] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.490 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:02.490 "name": "Existed_Raid", 00:18:02.490 "aliases": [ 00:18:02.490 "3bad9e3b-f8ac-4eb4-94a2-ae028ae88ddf" 00:18:02.490 ], 00:18:02.490 "product_name": "Raid Volume", 00:18:02.490 "block_size": 512, 00:18:02.490 "num_blocks": 190464, 00:18:02.490 "uuid": 
"3bad9e3b-f8ac-4eb4-94a2-ae028ae88ddf", 00:18:02.490 "assigned_rate_limits": { 00:18:02.490 "rw_ios_per_sec": 0, 00:18:02.490 "rw_mbytes_per_sec": 0, 00:18:02.490 "r_mbytes_per_sec": 0, 00:18:02.490 "w_mbytes_per_sec": 0 00:18:02.490 }, 00:18:02.490 "claimed": false, 00:18:02.490 "zoned": false, 00:18:02.490 "supported_io_types": { 00:18:02.490 "read": true, 00:18:02.490 "write": true, 00:18:02.490 "unmap": true, 00:18:02.490 "flush": true, 00:18:02.490 "reset": true, 00:18:02.490 "nvme_admin": false, 00:18:02.490 "nvme_io": false, 00:18:02.490 "nvme_io_md": false, 00:18:02.490 "write_zeroes": true, 00:18:02.490 "zcopy": false, 00:18:02.490 "get_zone_info": false, 00:18:02.490 "zone_management": false, 00:18:02.490 "zone_append": false, 00:18:02.490 "compare": false, 00:18:02.490 "compare_and_write": false, 00:18:02.490 "abort": false, 00:18:02.490 "seek_hole": false, 00:18:02.490 "seek_data": false, 00:18:02.490 "copy": false, 00:18:02.490 "nvme_iov_md": false 00:18:02.490 }, 00:18:02.490 "memory_domains": [ 00:18:02.490 { 00:18:02.490 "dma_device_id": "system", 00:18:02.490 "dma_device_type": 1 00:18:02.490 }, 00:18:02.490 { 00:18:02.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.490 "dma_device_type": 2 00:18:02.490 }, 00:18:02.490 { 00:18:02.490 "dma_device_id": "system", 00:18:02.490 "dma_device_type": 1 00:18:02.490 }, 00:18:02.490 { 00:18:02.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.490 "dma_device_type": 2 00:18:02.490 }, 00:18:02.490 { 00:18:02.490 "dma_device_id": "system", 00:18:02.490 "dma_device_type": 1 00:18:02.490 }, 00:18:02.490 { 00:18:02.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.490 "dma_device_type": 2 00:18:02.490 } 00:18:02.490 ], 00:18:02.490 "driver_specific": { 00:18:02.490 "raid": { 00:18:02.490 "uuid": "3bad9e3b-f8ac-4eb4-94a2-ae028ae88ddf", 00:18:02.490 "strip_size_kb": 64, 00:18:02.490 "state": "online", 00:18:02.490 "raid_level": "concat", 00:18:02.490 "superblock": true, 00:18:02.490 "num_base_bdevs": 3, 00:18:02.490 "num_base_bdevs_discovered": 3, 00:18:02.490 "num_base_bdevs_operational": 3, 00:18:02.490 "base_bdevs_list": [ 00:18:02.490 { 00:18:02.490 "name": "BaseBdev1", 00:18:02.490 "uuid": "cd828755-7e09-4047-b41d-aeffc2b0b463", 00:18:02.490 "is_configured": true, 00:18:02.490 "data_offset": 2048, 00:18:02.490 "data_size": 63488 00:18:02.490 }, 00:18:02.490 { 00:18:02.490 "name": "BaseBdev2", 00:18:02.490 "uuid": "ac5e927b-0f67-4cbf-b21b-14a76370f204", 00:18:02.490 "is_configured": true, 00:18:02.490 "data_offset": 2048, 00:18:02.490 "data_size": 63488 00:18:02.490 }, 00:18:02.490 { 00:18:02.490 "name": "BaseBdev3", 00:18:02.490 "uuid": "fab3a0fc-9202-4a57-b5d3-f85fad77de8d", 00:18:02.490 "is_configured": true, 00:18:02.490 "data_offset": 2048, 00:18:02.491 "data_size": 63488 00:18:02.491 } 00:18:02.491 ] 00:18:02.491 } 00:18:02.491 } 00:18:02.491 }' 00:18:02.491 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:02.491 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:02.491 BaseBdev2 00:18:02.491 BaseBdev3' 00:18:02.491 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:02.491 15:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:02.491 15:11:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:02.750 "name": "BaseBdev1", 00:18:02.750 "aliases": [ 00:18:02.750 "cd828755-7e09-4047-b41d-aeffc2b0b463" 00:18:02.750 ], 00:18:02.750 "product_name": "Malloc disk", 00:18:02.750 "block_size": 512, 00:18:02.750 "num_blocks": 65536, 00:18:02.750 "uuid": "cd828755-7e09-4047-b41d-aeffc2b0b463", 00:18:02.750 "assigned_rate_limits": { 00:18:02.750 "rw_ios_per_sec": 0, 00:18:02.750 "rw_mbytes_per_sec": 0, 00:18:02.750 "r_mbytes_per_sec": 0, 00:18:02.750 "w_mbytes_per_sec": 0 00:18:02.750 }, 00:18:02.750 "claimed": true, 00:18:02.750 "claim_type": "exclusive_write", 00:18:02.750 "zoned": false, 00:18:02.750 "supported_io_types": { 00:18:02.750 "read": true, 00:18:02.750 "write": true, 00:18:02.750 "unmap": true, 00:18:02.750 "flush": true, 00:18:02.750 "reset": true, 00:18:02.750 "nvme_admin": false, 00:18:02.750 "nvme_io": false, 00:18:02.750 "nvme_io_md": false, 00:18:02.750 "write_zeroes": true, 00:18:02.750 "zcopy": true, 00:18:02.750 "get_zone_info": false, 00:18:02.750 "zone_management": false, 00:18:02.750 "zone_append": false, 00:18:02.750 "compare": false, 00:18:02.750 "compare_and_write": false, 00:18:02.750 "abort": true, 00:18:02.750 "seek_hole": false, 00:18:02.750 "seek_data": false, 00:18:02.750 "copy": true, 00:18:02.750 "nvme_iov_md": false 00:18:02.750 }, 00:18:02.750 "memory_domains": [ 00:18:02.750 { 00:18:02.750 "dma_device_id": "system", 00:18:02.750 "dma_device_type": 1 00:18:02.750 }, 00:18:02.750 { 00:18:02.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.750 "dma_device_type": 2 00:18:02.750 } 00:18:02.750 ], 00:18:02.750 "driver_specific": {} 00:18:02.750 }' 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:02.750 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:03.010 15:11:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:03.010 "name": "BaseBdev2", 00:18:03.010 "aliases": [ 00:18:03.010 "ac5e927b-0f67-4cbf-b21b-14a76370f204" 00:18:03.010 ], 00:18:03.010 "product_name": "Malloc disk", 00:18:03.010 "block_size": 512, 00:18:03.010 "num_blocks": 65536, 00:18:03.010 "uuid": "ac5e927b-0f67-4cbf-b21b-14a76370f204", 00:18:03.010 "assigned_rate_limits": { 00:18:03.010 "rw_ios_per_sec": 0, 00:18:03.010 "rw_mbytes_per_sec": 0, 00:18:03.010 "r_mbytes_per_sec": 0, 00:18:03.010 "w_mbytes_per_sec": 0 00:18:03.010 }, 00:18:03.010 "claimed": true, 00:18:03.010 "claim_type": "exclusive_write", 00:18:03.010 "zoned": false, 00:18:03.010 "supported_io_types": { 00:18:03.010 "read": true, 00:18:03.010 "write": true, 00:18:03.010 "unmap": true, 00:18:03.010 "flush": true, 00:18:03.010 "reset": true, 00:18:03.010 "nvme_admin": false, 00:18:03.010 "nvme_io": false, 00:18:03.010 "nvme_io_md": false, 00:18:03.010 "write_zeroes": true, 00:18:03.010 "zcopy": true, 00:18:03.010 "get_zone_info": false, 00:18:03.010 "zone_management": false, 00:18:03.010 "zone_append": false, 00:18:03.010 "compare": false, 00:18:03.010 "compare_and_write": false, 00:18:03.010 "abort": true, 00:18:03.010 "seek_hole": false, 00:18:03.010 "seek_data": false, 00:18:03.010 "copy": true, 00:18:03.010 "nvme_iov_md": false 00:18:03.010 }, 00:18:03.010 "memory_domains": [ 00:18:03.010 { 00:18:03.010 "dma_device_id": "system", 00:18:03.010 "dma_device_type": 1 00:18:03.010 }, 00:18:03.010 { 00:18:03.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.010 "dma_device_type": 2 00:18:03.010 } 00:18:03.010 ], 00:18:03.010 "driver_specific": {} 00:18:03.010 }' 00:18:03.010 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:03.010 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:03.010 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:03.010 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:03.010 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:03.010 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:03.010 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.010 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.269 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:03.269 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.269 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.269 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:03.269 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:03.269 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:03.269 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:03.269 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:03.269 "name": "BaseBdev3", 00:18:03.269 "aliases": [ 00:18:03.269 
"fab3a0fc-9202-4a57-b5d3-f85fad77de8d" 00:18:03.269 ], 00:18:03.269 "product_name": "Malloc disk", 00:18:03.269 "block_size": 512, 00:18:03.269 "num_blocks": 65536, 00:18:03.269 "uuid": "fab3a0fc-9202-4a57-b5d3-f85fad77de8d", 00:18:03.269 "assigned_rate_limits": { 00:18:03.269 "rw_ios_per_sec": 0, 00:18:03.269 "rw_mbytes_per_sec": 0, 00:18:03.269 "r_mbytes_per_sec": 0, 00:18:03.269 "w_mbytes_per_sec": 0 00:18:03.269 }, 00:18:03.269 "claimed": true, 00:18:03.269 "claim_type": "exclusive_write", 00:18:03.269 "zoned": false, 00:18:03.269 "supported_io_types": { 00:18:03.269 "read": true, 00:18:03.269 "write": true, 00:18:03.269 "unmap": true, 00:18:03.269 "flush": true, 00:18:03.269 "reset": true, 00:18:03.269 "nvme_admin": false, 00:18:03.269 "nvme_io": false, 00:18:03.269 "nvme_io_md": false, 00:18:03.269 "write_zeroes": true, 00:18:03.269 "zcopy": true, 00:18:03.269 "get_zone_info": false, 00:18:03.269 "zone_management": false, 00:18:03.269 "zone_append": false, 00:18:03.269 "compare": false, 00:18:03.269 "compare_and_write": false, 00:18:03.269 "abort": true, 00:18:03.269 "seek_hole": false, 00:18:03.269 "seek_data": false, 00:18:03.269 "copy": true, 00:18:03.270 "nvme_iov_md": false 00:18:03.270 }, 00:18:03.270 "memory_domains": [ 00:18:03.270 { 00:18:03.270 "dma_device_id": "system", 00:18:03.270 "dma_device_type": 1 00:18:03.270 }, 00:18:03.270 { 00:18:03.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.270 "dma_device_type": 2 00:18:03.270 } 00:18:03.270 ], 00:18:03.270 "driver_specific": {} 00:18:03.270 }' 00:18:03.270 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:03.270 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:03.270 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:03.270 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:03.270 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:03.270 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:03.270 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:03.529 [2024-07-23 15:11:58.895018] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.529 [2024-07-23 15:11:58.895055] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.529 [2024-07-23 15:11:58.895116] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 
-- # has_redundancy concat 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.529 15:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.788 15:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.788 "name": "Existed_Raid", 00:18:03.788 "uuid": "3bad9e3b-f8ac-4eb4-94a2-ae028ae88ddf", 00:18:03.788 "strip_size_kb": 64, 00:18:03.788 "state": "offline", 00:18:03.788 "raid_level": "concat", 00:18:03.788 "superblock": true, 00:18:03.788 "num_base_bdevs": 3, 00:18:03.788 "num_base_bdevs_discovered": 2, 00:18:03.788 "num_base_bdevs_operational": 2, 00:18:03.788 "base_bdevs_list": [ 00:18:03.788 { 00:18:03.788 "name": null, 00:18:03.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.788 "is_configured": false, 00:18:03.788 "data_offset": 2048, 00:18:03.788 "data_size": 63488 00:18:03.788 }, 00:18:03.788 { 00:18:03.788 "name": "BaseBdev2", 00:18:03.788 "uuid": "ac5e927b-0f67-4cbf-b21b-14a76370f204", 00:18:03.788 "is_configured": true, 00:18:03.788 "data_offset": 2048, 00:18:03.788 "data_size": 63488 00:18:03.788 }, 00:18:03.788 { 00:18:03.788 "name": "BaseBdev3", 00:18:03.788 "uuid": "fab3a0fc-9202-4a57-b5d3-f85fad77de8d", 00:18:03.788 "is_configured": true, 00:18:03.788 "data_offset": 2048, 00:18:03.788 "data_size": 63488 00:18:03.788 } 00:18:03.788 ] 00:18:03.788 }' 00:18:03.788 15:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.788 15:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.046 15:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:04.046 15:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:04.046 15:11:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.046 15:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:04.305 15:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:04.305 15:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:04.305 15:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:04.565 [2024-07-23 15:11:59.787901] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:04.565 15:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:04.565 15:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:04.565 15:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.565 15:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:04.823 15:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:04.823 15:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:04.823 15:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:04.823 [2024-07-23 15:12:00.244646] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:04.823 [2024-07-23 15:12:00.244711] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:18:05.082 15:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:05.082 15:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:05.082 15:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.082 15:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:05.339 15:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:05.339 15:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:05.339 15:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:05.339 15:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:05.339 15:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:05.339 15:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:05.339 BaseBdev2 00:18:05.339 15:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:05.340 15:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:05.340 15:12:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:05.340 15:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:05.340 15:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:05.340 15:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:05.340 15:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:05.597 15:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:05.861 [ 00:18:05.861 { 00:18:05.861 "name": "BaseBdev2", 00:18:05.861 "aliases": [ 00:18:05.861 "898a24b9-7d18-4b3c-b4c1-b99cc8af9828" 00:18:05.861 ], 00:18:05.861 "product_name": "Malloc disk", 00:18:05.861 "block_size": 512, 00:18:05.861 "num_blocks": 65536, 00:18:05.861 "uuid": "898a24b9-7d18-4b3c-b4c1-b99cc8af9828", 00:18:05.861 "assigned_rate_limits": { 00:18:05.861 "rw_ios_per_sec": 0, 00:18:05.861 "rw_mbytes_per_sec": 0, 00:18:05.861 "r_mbytes_per_sec": 0, 00:18:05.861 "w_mbytes_per_sec": 0 00:18:05.861 }, 00:18:05.861 "claimed": false, 00:18:05.861 "zoned": false, 00:18:05.861 "supported_io_types": { 00:18:05.861 "read": true, 00:18:05.861 "write": true, 00:18:05.861 "unmap": true, 00:18:05.861 "flush": true, 00:18:05.861 "reset": true, 00:18:05.861 "nvme_admin": false, 00:18:05.861 "nvme_io": false, 00:18:05.861 "nvme_io_md": false, 00:18:05.861 "write_zeroes": true, 00:18:05.861 "zcopy": true, 00:18:05.861 "get_zone_info": false, 00:18:05.861 "zone_management": false, 00:18:05.861 "zone_append": false, 00:18:05.861 "compare": false, 00:18:05.861 "compare_and_write": false, 00:18:05.861 "abort": true, 00:18:05.861 "seek_hole": false, 00:18:05.861 "seek_data": false, 00:18:05.861 "copy": true, 00:18:05.861 "nvme_iov_md": false 00:18:05.861 }, 00:18:05.861 "memory_domains": [ 00:18:05.861 { 00:18:05.861 "dma_device_id": "system", 00:18:05.861 "dma_device_type": 1 00:18:05.861 }, 00:18:05.861 { 00:18:05.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.861 "dma_device_type": 2 00:18:05.861 } 00:18:05.861 ], 00:18:05.861 "driver_specific": {} 00:18:05.861 } 00:18:05.861 ] 00:18:05.861 15:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:05.861 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:05.861 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:05.861 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:06.119 BaseBdev3 00:18:06.119 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:06.119 15:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:06.119 15:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:06.119 15:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:06.119 15:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:06.119 15:12:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:06.119 15:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.119 15:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:06.377 [ 00:18:06.377 { 00:18:06.377 "name": "BaseBdev3", 00:18:06.377 "aliases": [ 00:18:06.377 "40bd52b4-6b7f-47b1-995a-42d3a808c844" 00:18:06.377 ], 00:18:06.377 "product_name": "Malloc disk", 00:18:06.377 "block_size": 512, 00:18:06.377 "num_blocks": 65536, 00:18:06.377 "uuid": "40bd52b4-6b7f-47b1-995a-42d3a808c844", 00:18:06.377 "assigned_rate_limits": { 00:18:06.377 "rw_ios_per_sec": 0, 00:18:06.377 "rw_mbytes_per_sec": 0, 00:18:06.377 "r_mbytes_per_sec": 0, 00:18:06.377 "w_mbytes_per_sec": 0 00:18:06.377 }, 00:18:06.377 "claimed": false, 00:18:06.378 "zoned": false, 00:18:06.378 "supported_io_types": { 00:18:06.378 "read": true, 00:18:06.378 "write": true, 00:18:06.378 "unmap": true, 00:18:06.378 "flush": true, 00:18:06.378 "reset": true, 00:18:06.378 "nvme_admin": false, 00:18:06.378 "nvme_io": false, 00:18:06.378 "nvme_io_md": false, 00:18:06.378 "write_zeroes": true, 00:18:06.378 "zcopy": true, 00:18:06.378 "get_zone_info": false, 00:18:06.378 "zone_management": false, 00:18:06.378 "zone_append": false, 00:18:06.378 "compare": false, 00:18:06.378 "compare_and_write": false, 00:18:06.378 "abort": true, 00:18:06.378 "seek_hole": false, 00:18:06.378 "seek_data": false, 00:18:06.378 "copy": true, 00:18:06.378 "nvme_iov_md": false 00:18:06.378 }, 00:18:06.378 "memory_domains": [ 00:18:06.378 { 00:18:06.378 "dma_device_id": "system", 00:18:06.378 "dma_device_type": 1 00:18:06.378 }, 00:18:06.378 { 00:18:06.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.378 "dma_device_type": 2 00:18:06.378 } 00:18:06.378 ], 00:18:06.378 "driver_specific": {} 00:18:06.378 } 00:18:06.378 ] 00:18:06.378 15:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:06.378 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:06.378 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:06.378 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:06.636 [2024-07-23 15:12:01.845128] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:06.636 [2024-07-23 15:12:01.845188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:06.636 [2024-07-23 15:12:01.845228] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.636 [2024-07-23 15:12:01.847467] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:06.636 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:06.636 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:06.636 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:18:06.636 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:06.636 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:06.636 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:06.636 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:06.636 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:06.636 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:06.636 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:06.636 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.636 15:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.636 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.636 "name": "Existed_Raid", 00:18:06.636 "uuid": "62afb5b3-ff1c-43c0-82d9-28cdaee96732", 00:18:06.636 "strip_size_kb": 64, 00:18:06.636 "state": "configuring", 00:18:06.636 "raid_level": "concat", 00:18:06.636 "superblock": true, 00:18:06.636 "num_base_bdevs": 3, 00:18:06.636 "num_base_bdevs_discovered": 2, 00:18:06.636 "num_base_bdevs_operational": 3, 00:18:06.636 "base_bdevs_list": [ 00:18:06.636 { 00:18:06.636 "name": "BaseBdev1", 00:18:06.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.636 "is_configured": false, 00:18:06.636 "data_offset": 0, 00:18:06.636 "data_size": 0 00:18:06.636 }, 00:18:06.636 { 00:18:06.636 "name": "BaseBdev2", 00:18:06.636 "uuid": "898a24b9-7d18-4b3c-b4c1-b99cc8af9828", 00:18:06.636 "is_configured": true, 00:18:06.636 "data_offset": 2048, 00:18:06.636 "data_size": 63488 00:18:06.636 }, 00:18:06.636 { 00:18:06.636 "name": "BaseBdev3", 00:18:06.636 "uuid": "40bd52b4-6b7f-47b1-995a-42d3a808c844", 00:18:06.636 "is_configured": true, 00:18:06.636 "data_offset": 2048, 00:18:06.636 "data_size": 63488 00:18:06.636 } 00:18:06.636 ] 00:18:06.636 }' 00:18:06.636 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.636 15:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.896 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:07.155 [2024-07-23 15:12:02.541258] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:07.155 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:07.155 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:07.155 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:07.155 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:07.155 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:07.155 15:12:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:07.155 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.155 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.155 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.155 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.155 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.155 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.413 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.413 "name": "Existed_Raid", 00:18:07.413 "uuid": "62afb5b3-ff1c-43c0-82d9-28cdaee96732", 00:18:07.413 "strip_size_kb": 64, 00:18:07.413 "state": "configuring", 00:18:07.413 "raid_level": "concat", 00:18:07.413 "superblock": true, 00:18:07.413 "num_base_bdevs": 3, 00:18:07.413 "num_base_bdevs_discovered": 1, 00:18:07.413 "num_base_bdevs_operational": 3, 00:18:07.413 "base_bdevs_list": [ 00:18:07.413 { 00:18:07.413 "name": "BaseBdev1", 00:18:07.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.413 "is_configured": false, 00:18:07.413 "data_offset": 0, 00:18:07.413 "data_size": 0 00:18:07.413 }, 00:18:07.413 { 00:18:07.413 "name": null, 00:18:07.413 "uuid": "898a24b9-7d18-4b3c-b4c1-b99cc8af9828", 00:18:07.413 "is_configured": false, 00:18:07.413 "data_offset": 2048, 00:18:07.413 "data_size": 63488 00:18:07.413 }, 00:18:07.413 { 00:18:07.413 "name": "BaseBdev3", 00:18:07.413 "uuid": "40bd52b4-6b7f-47b1-995a-42d3a808c844", 00:18:07.413 "is_configured": true, 00:18:07.413 "data_offset": 2048, 00:18:07.413 "data_size": 63488 00:18:07.413 } 00:18:07.413 ] 00:18:07.413 }' 00:18:07.413 15:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.413 15:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.671 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.671 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:07.929 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:07.929 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:08.187 [2024-07-23 15:12:03.500966] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.187 BaseBdev1 00:18:08.187 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:08.187 15:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:08.187 15:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:08.187 15:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:08.187 15:12:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:08.187 15:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:08.187 15:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:08.445 15:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:08.703 [ 00:18:08.703 { 00:18:08.703 "name": "BaseBdev1", 00:18:08.703 "aliases": [ 00:18:08.703 "d4e5f92c-ee1f-47c1-8655-25fc3b437812" 00:18:08.703 ], 00:18:08.703 "product_name": "Malloc disk", 00:18:08.703 "block_size": 512, 00:18:08.703 "num_blocks": 65536, 00:18:08.703 "uuid": "d4e5f92c-ee1f-47c1-8655-25fc3b437812", 00:18:08.703 "assigned_rate_limits": { 00:18:08.703 "rw_ios_per_sec": 0, 00:18:08.703 "rw_mbytes_per_sec": 0, 00:18:08.703 "r_mbytes_per_sec": 0, 00:18:08.703 "w_mbytes_per_sec": 0 00:18:08.703 }, 00:18:08.703 "claimed": true, 00:18:08.703 "claim_type": "exclusive_write", 00:18:08.703 "zoned": false, 00:18:08.703 "supported_io_types": { 00:18:08.703 "read": true, 00:18:08.703 "write": true, 00:18:08.703 "unmap": true, 00:18:08.703 "flush": true, 00:18:08.703 "reset": true, 00:18:08.703 "nvme_admin": false, 00:18:08.703 "nvme_io": false, 00:18:08.703 "nvme_io_md": false, 00:18:08.703 "write_zeroes": true, 00:18:08.703 "zcopy": true, 00:18:08.703 "get_zone_info": false, 00:18:08.703 "zone_management": false, 00:18:08.703 "zone_append": false, 00:18:08.703 "compare": false, 00:18:08.703 "compare_and_write": false, 00:18:08.703 "abort": true, 00:18:08.703 "seek_hole": false, 00:18:08.703 "seek_data": false, 00:18:08.703 "copy": true, 00:18:08.703 "nvme_iov_md": false 00:18:08.703 }, 00:18:08.703 "memory_domains": [ 00:18:08.703 { 00:18:08.703 "dma_device_id": "system", 00:18:08.703 "dma_device_type": 1 00:18:08.703 }, 00:18:08.703 { 00:18:08.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.703 "dma_device_type": 2 00:18:08.703 } 00:18:08.703 ], 00:18:08.703 "driver_specific": {} 00:18:08.703 } 00:18:08.703 ] 00:18:08.703 15:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:08.703 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:08.703 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:08.703 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:08.703 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:08.703 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:08.703 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:08.703 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.703 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.703 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.703 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:18:08.703 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.703 15:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.703 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.703 "name": "Existed_Raid", 00:18:08.703 "uuid": "62afb5b3-ff1c-43c0-82d9-28cdaee96732", 00:18:08.703 "strip_size_kb": 64, 00:18:08.703 "state": "configuring", 00:18:08.703 "raid_level": "concat", 00:18:08.703 "superblock": true, 00:18:08.703 "num_base_bdevs": 3, 00:18:08.703 "num_base_bdevs_discovered": 2, 00:18:08.703 "num_base_bdevs_operational": 3, 00:18:08.703 "base_bdevs_list": [ 00:18:08.703 { 00:18:08.703 "name": "BaseBdev1", 00:18:08.703 "uuid": "d4e5f92c-ee1f-47c1-8655-25fc3b437812", 00:18:08.703 "is_configured": true, 00:18:08.703 "data_offset": 2048, 00:18:08.703 "data_size": 63488 00:18:08.703 }, 00:18:08.703 { 00:18:08.703 "name": null, 00:18:08.703 "uuid": "898a24b9-7d18-4b3c-b4c1-b99cc8af9828", 00:18:08.703 "is_configured": false, 00:18:08.703 "data_offset": 2048, 00:18:08.703 "data_size": 63488 00:18:08.703 }, 00:18:08.703 { 00:18:08.703 "name": "BaseBdev3", 00:18:08.703 "uuid": "40bd52b4-6b7f-47b1-995a-42d3a808c844", 00:18:08.703 "is_configured": true, 00:18:08.703 "data_offset": 2048, 00:18:08.703 "data_size": 63488 00:18:08.703 } 00:18:08.703 ] 00:18:08.703 }' 00:18:08.703 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.703 15:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.962 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.962 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:09.221 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:09.221 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:09.480 [2024-07-23 15:12:04.881400] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:09.480 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:09.480 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:09.480 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:09.480 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:09.480 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:09.480 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:09.480 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.480 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.480 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:18:09.480 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.480 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.480 15:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.738 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.738 "name": "Existed_Raid", 00:18:09.738 "uuid": "62afb5b3-ff1c-43c0-82d9-28cdaee96732", 00:18:09.738 "strip_size_kb": 64, 00:18:09.739 "state": "configuring", 00:18:09.739 "raid_level": "concat", 00:18:09.739 "superblock": true, 00:18:09.739 "num_base_bdevs": 3, 00:18:09.739 "num_base_bdevs_discovered": 1, 00:18:09.739 "num_base_bdevs_operational": 3, 00:18:09.739 "base_bdevs_list": [ 00:18:09.739 { 00:18:09.739 "name": "BaseBdev1", 00:18:09.739 "uuid": "d4e5f92c-ee1f-47c1-8655-25fc3b437812", 00:18:09.739 "is_configured": true, 00:18:09.739 "data_offset": 2048, 00:18:09.739 "data_size": 63488 00:18:09.739 }, 00:18:09.739 { 00:18:09.739 "name": null, 00:18:09.739 "uuid": "898a24b9-7d18-4b3c-b4c1-b99cc8af9828", 00:18:09.739 "is_configured": false, 00:18:09.739 "data_offset": 2048, 00:18:09.739 "data_size": 63488 00:18:09.739 }, 00:18:09.739 { 00:18:09.739 "name": null, 00:18:09.739 "uuid": "40bd52b4-6b7f-47b1-995a-42d3a808c844", 00:18:09.739 "is_configured": false, 00:18:09.739 "data_offset": 2048, 00:18:09.739 "data_size": 63488 00:18:09.739 } 00:18:09.739 ] 00:18:09.739 }' 00:18:09.739 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.739 15:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.997 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.997 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:10.256 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:10.256 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:10.515 [2024-07-23 15:12:05.741596] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.515 "name": "Existed_Raid", 00:18:10.515 "uuid": "62afb5b3-ff1c-43c0-82d9-28cdaee96732", 00:18:10.515 "strip_size_kb": 64, 00:18:10.515 "state": "configuring", 00:18:10.515 "raid_level": "concat", 00:18:10.515 "superblock": true, 00:18:10.515 "num_base_bdevs": 3, 00:18:10.515 "num_base_bdevs_discovered": 2, 00:18:10.515 "num_base_bdevs_operational": 3, 00:18:10.515 "base_bdevs_list": [ 00:18:10.515 { 00:18:10.515 "name": "BaseBdev1", 00:18:10.515 "uuid": "d4e5f92c-ee1f-47c1-8655-25fc3b437812", 00:18:10.515 "is_configured": true, 00:18:10.515 "data_offset": 2048, 00:18:10.515 "data_size": 63488 00:18:10.515 }, 00:18:10.515 { 00:18:10.515 "name": null, 00:18:10.515 "uuid": "898a24b9-7d18-4b3c-b4c1-b99cc8af9828", 00:18:10.515 "is_configured": false, 00:18:10.515 "data_offset": 2048, 00:18:10.515 "data_size": 63488 00:18:10.515 }, 00:18:10.515 { 00:18:10.515 "name": "BaseBdev3", 00:18:10.515 "uuid": "40bd52b4-6b7f-47b1-995a-42d3a808c844", 00:18:10.515 "is_configured": true, 00:18:10.515 "data_offset": 2048, 00:18:10.515 "data_size": 63488 00:18:10.515 } 00:18:10.515 ] 00:18:10.515 }' 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.515 15:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.082 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.082 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:11.082 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:11.082 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:11.341 [2024-07-23 15:12:06.633854] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:11.341 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:11.341 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:11.341 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:11.341 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:11.341 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:11.341 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:11.341 15:12:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.341 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.341 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.341 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.341 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.341 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.600 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.600 "name": "Existed_Raid", 00:18:11.600 "uuid": "62afb5b3-ff1c-43c0-82d9-28cdaee96732", 00:18:11.600 "strip_size_kb": 64, 00:18:11.600 "state": "configuring", 00:18:11.600 "raid_level": "concat", 00:18:11.600 "superblock": true, 00:18:11.600 "num_base_bdevs": 3, 00:18:11.600 "num_base_bdevs_discovered": 1, 00:18:11.600 "num_base_bdevs_operational": 3, 00:18:11.600 "base_bdevs_list": [ 00:18:11.600 { 00:18:11.600 "name": null, 00:18:11.600 "uuid": "d4e5f92c-ee1f-47c1-8655-25fc3b437812", 00:18:11.600 "is_configured": false, 00:18:11.600 "data_offset": 2048, 00:18:11.600 "data_size": 63488 00:18:11.600 }, 00:18:11.600 { 00:18:11.600 "name": null, 00:18:11.600 "uuid": "898a24b9-7d18-4b3c-b4c1-b99cc8af9828", 00:18:11.600 "is_configured": false, 00:18:11.600 "data_offset": 2048, 00:18:11.600 "data_size": 63488 00:18:11.600 }, 00:18:11.600 { 00:18:11.600 "name": "BaseBdev3", 00:18:11.600 "uuid": "40bd52b4-6b7f-47b1-995a-42d3a808c844", 00:18:11.600 "is_configured": true, 00:18:11.600 "data_offset": 2048, 00:18:11.600 "data_size": 63488 00:18:11.600 } 00:18:11.600 ] 00:18:11.600 }' 00:18:11.600 15:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.600 15:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.859 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.859 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:12.119 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:12.119 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:12.399 [2024-07-23 15:12:07.706745] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:12.399 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:12.399 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:12.399 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:12.399 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:12.399 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:12.399 15:12:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:12.399 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.399 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.399 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.399 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.399 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.399 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.658 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.658 "name": "Existed_Raid", 00:18:12.658 "uuid": "62afb5b3-ff1c-43c0-82d9-28cdaee96732", 00:18:12.658 "strip_size_kb": 64, 00:18:12.658 "state": "configuring", 00:18:12.658 "raid_level": "concat", 00:18:12.658 "superblock": true, 00:18:12.658 "num_base_bdevs": 3, 00:18:12.658 "num_base_bdevs_discovered": 2, 00:18:12.658 "num_base_bdevs_operational": 3, 00:18:12.658 "base_bdevs_list": [ 00:18:12.658 { 00:18:12.658 "name": null, 00:18:12.658 "uuid": "d4e5f92c-ee1f-47c1-8655-25fc3b437812", 00:18:12.658 "is_configured": false, 00:18:12.658 "data_offset": 2048, 00:18:12.658 "data_size": 63488 00:18:12.658 }, 00:18:12.658 { 00:18:12.658 "name": "BaseBdev2", 00:18:12.658 "uuid": "898a24b9-7d18-4b3c-b4c1-b99cc8af9828", 00:18:12.658 "is_configured": true, 00:18:12.658 "data_offset": 2048, 00:18:12.658 "data_size": 63488 00:18:12.658 }, 00:18:12.658 { 00:18:12.658 "name": "BaseBdev3", 00:18:12.658 "uuid": "40bd52b4-6b7f-47b1-995a-42d3a808c844", 00:18:12.658 "is_configured": true, 00:18:12.658 "data_offset": 2048, 00:18:12.658 "data_size": 63488 00:18:12.658 } 00:18:12.658 ] 00:18:12.658 }' 00:18:12.658 15:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.658 15:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.917 15:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:12.917 15:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.176 15:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:13.176 15:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:13.176 15:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.435 15:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d4e5f92c-ee1f-47c1-8655-25fc3b437812 00:18:13.435 [2024-07-23 15:12:08.842510] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:13.435 NewBaseBdev 00:18:13.435 [2024-07-23 15:12:08.842907] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 
0x516000007880 00:18:13.435 [2024-07-23 15:12:08.842934] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:13.435 [2024-07-23 15:12:08.843025] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002460 00:18:13.435 [2024-07-23 15:12:08.843321] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007880 00:18:13.435 [2024-07-23 15:12:08.843333] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007880 00:18:13.435 [2024-07-23 15:12:08.843430] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.435 15:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:13.435 15:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:18:13.435 15:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:13.435 15:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:13.435 15:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:13.435 15:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:13.435 15:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:13.694 15:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:13.953 [ 00:18:13.953 { 00:18:13.953 "name": "NewBaseBdev", 00:18:13.953 "aliases": [ 00:18:13.953 "d4e5f92c-ee1f-47c1-8655-25fc3b437812" 00:18:13.953 ], 00:18:13.953 "product_name": "Malloc disk", 00:18:13.953 "block_size": 512, 00:18:13.953 "num_blocks": 65536, 00:18:13.953 "uuid": "d4e5f92c-ee1f-47c1-8655-25fc3b437812", 00:18:13.953 "assigned_rate_limits": { 00:18:13.953 "rw_ios_per_sec": 0, 00:18:13.953 "rw_mbytes_per_sec": 0, 00:18:13.953 "r_mbytes_per_sec": 0, 00:18:13.953 "w_mbytes_per_sec": 0 00:18:13.953 }, 00:18:13.953 "claimed": true, 00:18:13.953 "claim_type": "exclusive_write", 00:18:13.953 "zoned": false, 00:18:13.953 "supported_io_types": { 00:18:13.953 "read": true, 00:18:13.953 "write": true, 00:18:13.953 "unmap": true, 00:18:13.953 "flush": true, 00:18:13.953 "reset": true, 00:18:13.953 "nvme_admin": false, 00:18:13.953 "nvme_io": false, 00:18:13.953 "nvme_io_md": false, 00:18:13.953 "write_zeroes": true, 00:18:13.953 "zcopy": true, 00:18:13.953 "get_zone_info": false, 00:18:13.953 "zone_management": false, 00:18:13.953 "zone_append": false, 00:18:13.953 "compare": false, 00:18:13.953 "compare_and_write": false, 00:18:13.953 "abort": true, 00:18:13.953 "seek_hole": false, 00:18:13.953 "seek_data": false, 00:18:13.953 "copy": true, 00:18:13.953 "nvme_iov_md": false 00:18:13.953 }, 00:18:13.953 "memory_domains": [ 00:18:13.953 { 00:18:13.953 "dma_device_id": "system", 00:18:13.953 "dma_device_type": 1 00:18:13.953 }, 00:18:13.953 { 00:18:13.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.953 "dma_device_type": 2 00:18:13.953 } 00:18:13.953 ], 00:18:13.953 "driver_specific": {} 00:18:13.953 } 00:18:13.953 ] 00:18:13.953 15:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:13.953 15:12:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:13.953 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:13.953 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:13.953 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:13.953 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:13.953 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:13.953 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:13.953 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:13.953 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:13.953 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:13.953 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.953 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.212 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.212 "name": "Existed_Raid", 00:18:14.212 "uuid": "62afb5b3-ff1c-43c0-82d9-28cdaee96732", 00:18:14.212 "strip_size_kb": 64, 00:18:14.212 "state": "online", 00:18:14.212 "raid_level": "concat", 00:18:14.212 "superblock": true, 00:18:14.212 "num_base_bdevs": 3, 00:18:14.212 "num_base_bdevs_discovered": 3, 00:18:14.212 "num_base_bdevs_operational": 3, 00:18:14.212 "base_bdevs_list": [ 00:18:14.212 { 00:18:14.212 "name": "NewBaseBdev", 00:18:14.212 "uuid": "d4e5f92c-ee1f-47c1-8655-25fc3b437812", 00:18:14.212 "is_configured": true, 00:18:14.212 "data_offset": 2048, 00:18:14.212 "data_size": 63488 00:18:14.212 }, 00:18:14.212 { 00:18:14.212 "name": "BaseBdev2", 00:18:14.212 "uuid": "898a24b9-7d18-4b3c-b4c1-b99cc8af9828", 00:18:14.212 "is_configured": true, 00:18:14.212 "data_offset": 2048, 00:18:14.212 "data_size": 63488 00:18:14.212 }, 00:18:14.212 { 00:18:14.212 "name": "BaseBdev3", 00:18:14.212 "uuid": "40bd52b4-6b7f-47b1-995a-42d3a808c844", 00:18:14.212 "is_configured": true, 00:18:14.212 "data_offset": 2048, 00:18:14.212 "data_size": 63488 00:18:14.212 } 00:18:14.212 ] 00:18:14.212 }' 00:18:14.212 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.212 15:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.472 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:14.472 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:14.472 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:14.472 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:14.472 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:14.472 15:12:09 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@198 -- # local name 00:18:14.472 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:14.472 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:14.731 [2024-07-23 15:12:09.983184] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.731 15:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:14.731 "name": "Existed_Raid", 00:18:14.731 "aliases": [ 00:18:14.731 "62afb5b3-ff1c-43c0-82d9-28cdaee96732" 00:18:14.731 ], 00:18:14.731 "product_name": "Raid Volume", 00:18:14.731 "block_size": 512, 00:18:14.731 "num_blocks": 190464, 00:18:14.731 "uuid": "62afb5b3-ff1c-43c0-82d9-28cdaee96732", 00:18:14.731 "assigned_rate_limits": { 00:18:14.731 "rw_ios_per_sec": 0, 00:18:14.731 "rw_mbytes_per_sec": 0, 00:18:14.731 "r_mbytes_per_sec": 0, 00:18:14.731 "w_mbytes_per_sec": 0 00:18:14.731 }, 00:18:14.731 "claimed": false, 00:18:14.731 "zoned": false, 00:18:14.731 "supported_io_types": { 00:18:14.731 "read": true, 00:18:14.731 "write": true, 00:18:14.731 "unmap": true, 00:18:14.731 "flush": true, 00:18:14.731 "reset": true, 00:18:14.731 "nvme_admin": false, 00:18:14.731 "nvme_io": false, 00:18:14.731 "nvme_io_md": false, 00:18:14.731 "write_zeroes": true, 00:18:14.731 "zcopy": false, 00:18:14.731 "get_zone_info": false, 00:18:14.731 "zone_management": false, 00:18:14.731 "zone_append": false, 00:18:14.731 "compare": false, 00:18:14.731 "compare_and_write": false, 00:18:14.731 "abort": false, 00:18:14.731 "seek_hole": false, 00:18:14.731 "seek_data": false, 00:18:14.731 "copy": false, 00:18:14.731 "nvme_iov_md": false 00:18:14.731 }, 00:18:14.731 "memory_domains": [ 00:18:14.731 { 00:18:14.731 "dma_device_id": "system", 00:18:14.731 "dma_device_type": 1 00:18:14.731 }, 00:18:14.731 { 00:18:14.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.731 "dma_device_type": 2 00:18:14.731 }, 00:18:14.731 { 00:18:14.731 "dma_device_id": "system", 00:18:14.731 "dma_device_type": 1 00:18:14.731 }, 00:18:14.731 { 00:18:14.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.731 "dma_device_type": 2 00:18:14.731 }, 00:18:14.731 { 00:18:14.731 "dma_device_id": "system", 00:18:14.731 "dma_device_type": 1 00:18:14.731 }, 00:18:14.731 { 00:18:14.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.731 "dma_device_type": 2 00:18:14.731 } 00:18:14.731 ], 00:18:14.731 "driver_specific": { 00:18:14.731 "raid": { 00:18:14.731 "uuid": "62afb5b3-ff1c-43c0-82d9-28cdaee96732", 00:18:14.731 "strip_size_kb": 64, 00:18:14.731 "state": "online", 00:18:14.731 "raid_level": "concat", 00:18:14.731 "superblock": true, 00:18:14.731 "num_base_bdevs": 3, 00:18:14.731 "num_base_bdevs_discovered": 3, 00:18:14.731 "num_base_bdevs_operational": 3, 00:18:14.731 "base_bdevs_list": [ 00:18:14.731 { 00:18:14.731 "name": "NewBaseBdev", 00:18:14.731 "uuid": "d4e5f92c-ee1f-47c1-8655-25fc3b437812", 00:18:14.731 "is_configured": true, 00:18:14.731 "data_offset": 2048, 00:18:14.731 "data_size": 63488 00:18:14.731 }, 00:18:14.731 { 00:18:14.731 "name": "BaseBdev2", 00:18:14.731 "uuid": "898a24b9-7d18-4b3c-b4c1-b99cc8af9828", 00:18:14.731 "is_configured": true, 00:18:14.731 "data_offset": 2048, 00:18:14.731 "data_size": 63488 00:18:14.731 }, 00:18:14.731 { 00:18:14.731 "name": "BaseBdev3", 00:18:14.731 "uuid": "40bd52b4-6b7f-47b1-995a-42d3a808c844", 00:18:14.731 "is_configured": 
true, 00:18:14.731 "data_offset": 2048, 00:18:14.731 "data_size": 63488 00:18:14.731 } 00:18:14.731 ] 00:18:14.731 } 00:18:14.731 } 00:18:14.731 }' 00:18:14.731 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.731 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:14.731 BaseBdev2 00:18:14.731 BaseBdev3' 00:18:14.731 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:14.731 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:14.731 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:14.991 "name": "NewBaseBdev", 00:18:14.991 "aliases": [ 00:18:14.991 "d4e5f92c-ee1f-47c1-8655-25fc3b437812" 00:18:14.991 ], 00:18:14.991 "product_name": "Malloc disk", 00:18:14.991 "block_size": 512, 00:18:14.991 "num_blocks": 65536, 00:18:14.991 "uuid": "d4e5f92c-ee1f-47c1-8655-25fc3b437812", 00:18:14.991 "assigned_rate_limits": { 00:18:14.991 "rw_ios_per_sec": 0, 00:18:14.991 "rw_mbytes_per_sec": 0, 00:18:14.991 "r_mbytes_per_sec": 0, 00:18:14.991 "w_mbytes_per_sec": 0 00:18:14.991 }, 00:18:14.991 "claimed": true, 00:18:14.991 "claim_type": "exclusive_write", 00:18:14.991 "zoned": false, 00:18:14.991 "supported_io_types": { 00:18:14.991 "read": true, 00:18:14.991 "write": true, 00:18:14.991 "unmap": true, 00:18:14.991 "flush": true, 00:18:14.991 "reset": true, 00:18:14.991 "nvme_admin": false, 00:18:14.991 "nvme_io": false, 00:18:14.991 "nvme_io_md": false, 00:18:14.991 "write_zeroes": true, 00:18:14.991 "zcopy": true, 00:18:14.991 "get_zone_info": false, 00:18:14.991 "zone_management": false, 00:18:14.991 "zone_append": false, 00:18:14.991 "compare": false, 00:18:14.991 "compare_and_write": false, 00:18:14.991 "abort": true, 00:18:14.991 "seek_hole": false, 00:18:14.991 "seek_data": false, 00:18:14.991 "copy": true, 00:18:14.991 "nvme_iov_md": false 00:18:14.991 }, 00:18:14.991 "memory_domains": [ 00:18:14.991 { 00:18:14.991 "dma_device_id": "system", 00:18:14.991 "dma_device_type": 1 00:18:14.991 }, 00:18:14.991 { 00:18:14.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.991 "dma_device_type": 2 00:18:14.991 } 00:18:14.991 ], 00:18:14.991 "driver_specific": {} 00:18:14.991 }' 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:14.991 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:15.251 "name": "BaseBdev2", 00:18:15.251 "aliases": [ 00:18:15.251 "898a24b9-7d18-4b3c-b4c1-b99cc8af9828" 00:18:15.251 ], 00:18:15.251 "product_name": "Malloc disk", 00:18:15.251 "block_size": 512, 00:18:15.251 "num_blocks": 65536, 00:18:15.251 "uuid": "898a24b9-7d18-4b3c-b4c1-b99cc8af9828", 00:18:15.251 "assigned_rate_limits": { 00:18:15.251 "rw_ios_per_sec": 0, 00:18:15.251 "rw_mbytes_per_sec": 0, 00:18:15.251 "r_mbytes_per_sec": 0, 00:18:15.251 "w_mbytes_per_sec": 0 00:18:15.251 }, 00:18:15.251 "claimed": true, 00:18:15.251 "claim_type": "exclusive_write", 00:18:15.251 "zoned": false, 00:18:15.251 "supported_io_types": { 00:18:15.251 "read": true, 00:18:15.251 "write": true, 00:18:15.251 "unmap": true, 00:18:15.251 "flush": true, 00:18:15.251 "reset": true, 00:18:15.251 "nvme_admin": false, 00:18:15.251 "nvme_io": false, 00:18:15.251 "nvme_io_md": false, 00:18:15.251 "write_zeroes": true, 00:18:15.251 "zcopy": true, 00:18:15.251 "get_zone_info": false, 00:18:15.251 "zone_management": false, 00:18:15.251 "zone_append": false, 00:18:15.251 "compare": false, 00:18:15.251 "compare_and_write": false, 00:18:15.251 "abort": true, 00:18:15.251 "seek_hole": false, 00:18:15.251 "seek_data": false, 00:18:15.251 "copy": true, 00:18:15.251 "nvme_iov_md": false 00:18:15.251 }, 00:18:15.251 "memory_domains": [ 00:18:15.251 { 00:18:15.251 "dma_device_id": "system", 00:18:15.251 "dma_device_type": 1 00:18:15.251 }, 00:18:15.251 { 00:18:15.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.251 "dma_device_type": 2 00:18:15.251 } 00:18:15.251 ], 00:18:15.251 "driver_specific": {} 00:18:15.251 }' 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:15.251 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:15.511 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:15.511 "name": "BaseBdev3", 00:18:15.511 "aliases": [ 00:18:15.511 "40bd52b4-6b7f-47b1-995a-42d3a808c844" 00:18:15.511 ], 00:18:15.511 "product_name": "Malloc disk", 00:18:15.511 "block_size": 512, 00:18:15.511 "num_blocks": 65536, 00:18:15.511 "uuid": "40bd52b4-6b7f-47b1-995a-42d3a808c844", 00:18:15.511 "assigned_rate_limits": { 00:18:15.511 "rw_ios_per_sec": 0, 00:18:15.511 "rw_mbytes_per_sec": 0, 00:18:15.511 "r_mbytes_per_sec": 0, 00:18:15.511 "w_mbytes_per_sec": 0 00:18:15.511 }, 00:18:15.511 "claimed": true, 00:18:15.511 "claim_type": "exclusive_write", 00:18:15.511 "zoned": false, 00:18:15.511 "supported_io_types": { 00:18:15.511 "read": true, 00:18:15.511 "write": true, 00:18:15.511 "unmap": true, 00:18:15.511 "flush": true, 00:18:15.511 "reset": true, 00:18:15.511 "nvme_admin": false, 00:18:15.511 "nvme_io": false, 00:18:15.511 "nvme_io_md": false, 00:18:15.511 "write_zeroes": true, 00:18:15.511 "zcopy": true, 00:18:15.511 "get_zone_info": false, 00:18:15.511 "zone_management": false, 00:18:15.511 "zone_append": false, 00:18:15.511 "compare": false, 00:18:15.511 "compare_and_write": false, 00:18:15.511 "abort": true, 00:18:15.511 "seek_hole": false, 00:18:15.511 "seek_data": false, 00:18:15.511 "copy": true, 00:18:15.511 "nvme_iov_md": false 00:18:15.511 }, 00:18:15.511 "memory_domains": [ 00:18:15.511 { 00:18:15.511 "dma_device_id": "system", 00:18:15.511 "dma_device_type": 1 00:18:15.511 }, 00:18:15.511 { 00:18:15.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.511 "dma_device_type": 2 00:18:15.511 } 00:18:15.511 ], 00:18:15.511 "driver_specific": {} 00:18:15.511 }' 00:18:15.511 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.511 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.511 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:15.511 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.511 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.511 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:15.511 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.770 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.770 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:15.770 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.770 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.770 15:12:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:15.770 15:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:16.030 [2024-07-23 15:12:11.231196] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.030 [2024-07-23 15:12:11.231237] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.030 [2024-07-23 15:12:11.231321] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.030 [2024-07-23 15:12:11.231380] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.030 [2024-07-23 15:12:11.231410] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007880 name Existed_Raid, state offline 00:18:16.030 15:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 94623 00:18:16.030 15:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 94623 ']' 00:18:16.030 15:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 94623 00:18:16.030 15:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:18:16.030 15:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.030 15:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94623 00:18:16.030 killing process with pid 94623 00:18:16.030 15:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:16.030 15:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:16.030 15:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94623' 00:18:16.030 15:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 94623 00:18:16.030 [2024-07-23 15:12:11.289692] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:16.030 15:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 94623 00:18:16.030 [2024-07-23 15:12:11.325235] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:16.290 15:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:16.290 ************************************ 00:18:16.290 END TEST raid_state_function_test_sb 00:18:16.290 ************************************ 00:18:16.290 00:18:16.290 real 0m20.210s 00:18:16.290 user 0m35.134s 00:18:16.290 sys 0m4.449s 00:18:16.290 15:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:16.290 15:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.290 15:12:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:16.290 15:12:11 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:18:16.290 15:12:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:16.290 15:12:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:16.290 15:12:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:16.290 
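[editor's sketch, not part of the test output] The state and property checks traced above in raid_state_function_test_sb reduce to the RPC/jq pattern below. This is a minimal reconstruction from the trace, reusing the socket path, bdev names and expected values visible in the log lines; it is not an excerpt of bdev_raid.sh itself.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# verify_raid_bdev_state: fetch the raid bdev record and compare its fields
raid_info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r .state <<<"$raid_info") == online ]]
[[ $(jq -r .raid_level <<<"$raid_info") == concat ]]
[[ $(jq -r .strip_size_kb <<<"$raid_info") == 64 ]]
[[ $(jq -r .num_base_bdevs_discovered <<<"$raid_info") == 3 ]]
# verify_raid_bdev_properties: for each configured base bdev, block_size must be
# 512 and the metadata/DIF fields must be unset, as checked at bdev_raid.sh@205-208
for name in NewBaseBdev BaseBdev2 BaseBdev3; do
    info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(jq .block_size <<<"$info") == 512 ]]
    [[ $(jq .md_size <<<"$info") == null ]]
    [[ $(jq .md_interleave <<<"$info") == null ]]
    [[ $(jq .dif_type <<<"$info") == null ]]
done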
************************************ 00:18:16.290 START TEST raid_superblock_test 00:18:16.290 ************************************ 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=95461 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 95461 /var/tmp/spdk-raid.sock 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 95461 ']' 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:16.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.290 15:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.290 [2024-07-23 15:12:11.718534] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:18:16.290 [2024-07-23 15:12:11.718708] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95461 ] 00:18:16.549 [2024-07-23 15:12:11.871833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.549 [2024-07-23 15:12:11.918188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.549 [2024-07-23 15:12:11.963950] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.485 15:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.485 15:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:18:17.485 15:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:17.485 15:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:17.485 15:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:17.485 15:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:17.485 15:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:17.485 15:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:17.485 15:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:17.485 15:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:17.485 15:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:17.485 malloc1 00:18:17.485 15:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:17.743 [2024-07-23 15:12:13.064046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:17.743 [2024-07-23 15:12:13.064132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.743 [2024-07-23 15:12:13.064159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:18:17.743 [2024-07-23 15:12:13.064182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.743 [2024-07-23 15:12:13.066743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.743 [2024-07-23 15:12:13.066805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:17.743 pt1 00:18:17.743 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:17.743 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:17.743 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:17.743 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:17.743 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:17.743 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:18:17.743 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:17.743 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:17.743 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:18.001 malloc2 00:18:18.001 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:18.001 [2024-07-23 15:12:13.429709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:18.001 [2024-07-23 15:12:13.429804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.001 [2024-07-23 15:12:13.429829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:18:18.001 [2024-07-23 15:12:13.429847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.001 [2024-07-23 15:12:13.432473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.001 [2024-07-23 15:12:13.432521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:18.260 pt2 00:18:18.260 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:18.260 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:18.260 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:18:18.260 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:18:18.260 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:18.260 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:18.260 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:18.260 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:18.260 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:18.519 malloc3 00:18:18.519 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:18.519 [2024-07-23 15:12:13.871937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:18.519 [2024-07-23 15:12:13.872020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.519 [2024-07-23 15:12:13.872046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:18:18.519 [2024-07-23 15:12:13.872061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.519 [2024-07-23 15:12:13.874508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.519 [2024-07-23 15:12:13.874558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:18.519 pt3 00:18:18.519 
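[editor's sketch, not part of the test output] The base-bdev setup just traced (bdev_raid.sh@415-425 looping over i=1..3) condenses to the loop below; it is reconstructed from the RPC calls shown in the log, with the same sizes, names and fixed UUIDs, and is only illustrative.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3; do
    # 32 MB backing bdev with 512-byte blocks (65536 blocks, matching the dumps above)
    $RPC bdev_malloc_create 32 512 -b "malloc$i"
    # passthru bdev pt$i layered on malloc$i, using the test's fixed UUID pattern
    $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done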
15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:18.519 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:18.519 15:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:18.778 [2024-07-23 15:12:14.040034] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:18.778 [2024-07-23 15:12:14.042247] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:18.778 [2024-07-23 15:12:14.042315] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:18.778 [2024-07-23 15:12:14.042498] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007880 00:18:18.778 [2024-07-23 15:12:14.042511] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:18.778 [2024-07-23 15:12:14.042635] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:18:18.778 [2024-07-23 15:12:14.042986] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007880 00:18:18.778 [2024-07-23 15:12:14.043012] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007880 00:18:18.778 [2024-07-23 15:12:14.043149] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.778 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:18.778 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:18.778 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:18.778 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:18.778 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:18.778 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:18.778 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.778 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.778 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.778 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.778 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.778 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.062 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:19.062 "name": "raid_bdev1", 00:18:19.062 "uuid": "3aa7ef81-3829-4765-b408-68e72f640cf4", 00:18:19.062 "strip_size_kb": 64, 00:18:19.062 "state": "online", 00:18:19.062 "raid_level": "concat", 00:18:19.062 "superblock": true, 00:18:19.062 "num_base_bdevs": 3, 00:18:19.062 "num_base_bdevs_discovered": 3, 00:18:19.062 "num_base_bdevs_operational": 3, 00:18:19.062 "base_bdevs_list": [ 00:18:19.062 { 00:18:19.062 "name": "pt1", 00:18:19.062 "uuid": "00000000-0000-0000-0000-000000000001", 
00:18:19.062 "is_configured": true, 00:18:19.062 "data_offset": 2048, 00:18:19.062 "data_size": 63488 00:18:19.062 }, 00:18:19.062 { 00:18:19.062 "name": "pt2", 00:18:19.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.062 "is_configured": true, 00:18:19.062 "data_offset": 2048, 00:18:19.062 "data_size": 63488 00:18:19.062 }, 00:18:19.062 { 00:18:19.062 "name": "pt3", 00:18:19.062 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:19.062 "is_configured": true, 00:18:19.062 "data_offset": 2048, 00:18:19.062 "data_size": 63488 00:18:19.063 } 00:18:19.063 ] 00:18:19.063 }' 00:18:19.063 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:19.063 15:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.327 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:19.327 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:19.327 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:19.327 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:19.327 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:19.327 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:19.327 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:19.327 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:19.586 [2024-07-23 15:12:14.764395] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.586 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:19.586 "name": "raid_bdev1", 00:18:19.586 "aliases": [ 00:18:19.586 "3aa7ef81-3829-4765-b408-68e72f640cf4" 00:18:19.586 ], 00:18:19.586 "product_name": "Raid Volume", 00:18:19.586 "block_size": 512, 00:18:19.586 "num_blocks": 190464, 00:18:19.586 "uuid": "3aa7ef81-3829-4765-b408-68e72f640cf4", 00:18:19.586 "assigned_rate_limits": { 00:18:19.586 "rw_ios_per_sec": 0, 00:18:19.586 "rw_mbytes_per_sec": 0, 00:18:19.586 "r_mbytes_per_sec": 0, 00:18:19.586 "w_mbytes_per_sec": 0 00:18:19.586 }, 00:18:19.586 "claimed": false, 00:18:19.586 "zoned": false, 00:18:19.586 "supported_io_types": { 00:18:19.586 "read": true, 00:18:19.586 "write": true, 00:18:19.586 "unmap": true, 00:18:19.586 "flush": true, 00:18:19.586 "reset": true, 00:18:19.586 "nvme_admin": false, 00:18:19.586 "nvme_io": false, 00:18:19.586 "nvme_io_md": false, 00:18:19.586 "write_zeroes": true, 00:18:19.586 "zcopy": false, 00:18:19.586 "get_zone_info": false, 00:18:19.586 "zone_management": false, 00:18:19.586 "zone_append": false, 00:18:19.586 "compare": false, 00:18:19.586 "compare_and_write": false, 00:18:19.586 "abort": false, 00:18:19.586 "seek_hole": false, 00:18:19.586 "seek_data": false, 00:18:19.586 "copy": false, 00:18:19.586 "nvme_iov_md": false 00:18:19.586 }, 00:18:19.586 "memory_domains": [ 00:18:19.586 { 00:18:19.586 "dma_device_id": "system", 00:18:19.586 "dma_device_type": 1 00:18:19.586 }, 00:18:19.586 { 00:18:19.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.586 "dma_device_type": 2 00:18:19.586 }, 00:18:19.586 { 00:18:19.586 "dma_device_id": "system", 00:18:19.586 "dma_device_type": 1 00:18:19.586 }, 
00:18:19.586 { 00:18:19.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.586 "dma_device_type": 2 00:18:19.586 }, 00:18:19.586 { 00:18:19.586 "dma_device_id": "system", 00:18:19.586 "dma_device_type": 1 00:18:19.586 }, 00:18:19.586 { 00:18:19.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.586 "dma_device_type": 2 00:18:19.586 } 00:18:19.586 ], 00:18:19.586 "driver_specific": { 00:18:19.586 "raid": { 00:18:19.586 "uuid": "3aa7ef81-3829-4765-b408-68e72f640cf4", 00:18:19.586 "strip_size_kb": 64, 00:18:19.586 "state": "online", 00:18:19.586 "raid_level": "concat", 00:18:19.586 "superblock": true, 00:18:19.586 "num_base_bdevs": 3, 00:18:19.586 "num_base_bdevs_discovered": 3, 00:18:19.586 "num_base_bdevs_operational": 3, 00:18:19.586 "base_bdevs_list": [ 00:18:19.586 { 00:18:19.586 "name": "pt1", 00:18:19.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.586 "is_configured": true, 00:18:19.586 "data_offset": 2048, 00:18:19.586 "data_size": 63488 00:18:19.586 }, 00:18:19.586 { 00:18:19.586 "name": "pt2", 00:18:19.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.586 "is_configured": true, 00:18:19.586 "data_offset": 2048, 00:18:19.586 "data_size": 63488 00:18:19.586 }, 00:18:19.586 { 00:18:19.586 "name": "pt3", 00:18:19.586 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:19.586 "is_configured": true, 00:18:19.586 "data_offset": 2048, 00:18:19.586 "data_size": 63488 00:18:19.586 } 00:18:19.586 ] 00:18:19.586 } 00:18:19.586 } 00:18:19.586 }' 00:18:19.586 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.586 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:19.586 pt2 00:18:19.586 pt3' 00:18:19.586 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:19.586 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:19.586 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:19.586 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:19.586 "name": "pt1", 00:18:19.586 "aliases": [ 00:18:19.586 "00000000-0000-0000-0000-000000000001" 00:18:19.586 ], 00:18:19.586 "product_name": "passthru", 00:18:19.586 "block_size": 512, 00:18:19.586 "num_blocks": 65536, 00:18:19.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.586 "assigned_rate_limits": { 00:18:19.586 "rw_ios_per_sec": 0, 00:18:19.586 "rw_mbytes_per_sec": 0, 00:18:19.586 "r_mbytes_per_sec": 0, 00:18:19.586 "w_mbytes_per_sec": 0 00:18:19.586 }, 00:18:19.586 "claimed": true, 00:18:19.586 "claim_type": "exclusive_write", 00:18:19.586 "zoned": false, 00:18:19.586 "supported_io_types": { 00:18:19.586 "read": true, 00:18:19.586 "write": true, 00:18:19.586 "unmap": true, 00:18:19.586 "flush": true, 00:18:19.586 "reset": true, 00:18:19.586 "nvme_admin": false, 00:18:19.586 "nvme_io": false, 00:18:19.586 "nvme_io_md": false, 00:18:19.586 "write_zeroes": true, 00:18:19.586 "zcopy": true, 00:18:19.586 "get_zone_info": false, 00:18:19.586 "zone_management": false, 00:18:19.586 "zone_append": false, 00:18:19.586 "compare": false, 00:18:19.586 "compare_and_write": false, 00:18:19.586 "abort": true, 00:18:19.586 "seek_hole": false, 00:18:19.586 "seek_data": false, 00:18:19.586 "copy": true, 00:18:19.586 "nvme_iov_md": 
false 00:18:19.586 }, 00:18:19.586 "memory_domains": [ 00:18:19.586 { 00:18:19.586 "dma_device_id": "system", 00:18:19.586 "dma_device_type": 1 00:18:19.586 }, 00:18:19.586 { 00:18:19.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.586 "dma_device_type": 2 00:18:19.586 } 00:18:19.586 ], 00:18:19.586 "driver_specific": { 00:18:19.586 "passthru": { 00:18:19.586 "name": "pt1", 00:18:19.586 "base_bdev_name": "malloc1" 00:18:19.586 } 00:18:19.586 } 00:18:19.586 }' 00:18:19.586 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.586 15:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.586 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:19.586 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.586 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:19.846 "name": "pt2", 00:18:19.846 "aliases": [ 00:18:19.846 "00000000-0000-0000-0000-000000000002" 00:18:19.846 ], 00:18:19.846 "product_name": "passthru", 00:18:19.846 "block_size": 512, 00:18:19.846 "num_blocks": 65536, 00:18:19.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.846 "assigned_rate_limits": { 00:18:19.846 "rw_ios_per_sec": 0, 00:18:19.846 "rw_mbytes_per_sec": 0, 00:18:19.846 "r_mbytes_per_sec": 0, 00:18:19.846 "w_mbytes_per_sec": 0 00:18:19.846 }, 00:18:19.846 "claimed": true, 00:18:19.846 "claim_type": "exclusive_write", 00:18:19.846 "zoned": false, 00:18:19.846 "supported_io_types": { 00:18:19.846 "read": true, 00:18:19.846 "write": true, 00:18:19.846 "unmap": true, 00:18:19.846 "flush": true, 00:18:19.846 "reset": true, 00:18:19.846 "nvme_admin": false, 00:18:19.846 "nvme_io": false, 00:18:19.846 "nvme_io_md": false, 00:18:19.846 "write_zeroes": true, 00:18:19.846 "zcopy": true, 00:18:19.846 "get_zone_info": false, 00:18:19.846 "zone_management": false, 00:18:19.846 "zone_append": false, 00:18:19.846 "compare": false, 00:18:19.846 "compare_and_write": false, 00:18:19.846 "abort": true, 00:18:19.846 "seek_hole": false, 00:18:19.846 "seek_data": false, 00:18:19.846 "copy": true, 00:18:19.846 "nvme_iov_md": false 00:18:19.846 }, 00:18:19.846 "memory_domains": [ 00:18:19.846 { 00:18:19.846 "dma_device_id": "system", 00:18:19.846 "dma_device_type": 1 
00:18:19.846 }, 00:18:19.846 { 00:18:19.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.846 "dma_device_type": 2 00:18:19.846 } 00:18:19.846 ], 00:18:19.846 "driver_specific": { 00:18:19.846 "passthru": { 00:18:19.846 "name": "pt2", 00:18:19.846 "base_bdev_name": "malloc2" 00:18:19.846 } 00:18:19.846 } 00:18:19.846 }' 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:19.846 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:20.105 "name": "pt3", 00:18:20.105 "aliases": [ 00:18:20.105 "00000000-0000-0000-0000-000000000003" 00:18:20.105 ], 00:18:20.105 "product_name": "passthru", 00:18:20.105 "block_size": 512, 00:18:20.105 "num_blocks": 65536, 00:18:20.105 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:20.105 "assigned_rate_limits": { 00:18:20.105 "rw_ios_per_sec": 0, 00:18:20.105 "rw_mbytes_per_sec": 0, 00:18:20.105 "r_mbytes_per_sec": 0, 00:18:20.105 "w_mbytes_per_sec": 0 00:18:20.105 }, 00:18:20.105 "claimed": true, 00:18:20.105 "claim_type": "exclusive_write", 00:18:20.105 "zoned": false, 00:18:20.105 "supported_io_types": { 00:18:20.105 "read": true, 00:18:20.105 "write": true, 00:18:20.105 "unmap": true, 00:18:20.105 "flush": true, 00:18:20.105 "reset": true, 00:18:20.105 "nvme_admin": false, 00:18:20.105 "nvme_io": false, 00:18:20.105 "nvme_io_md": false, 00:18:20.105 "write_zeroes": true, 00:18:20.105 "zcopy": true, 00:18:20.105 "get_zone_info": false, 00:18:20.105 "zone_management": false, 00:18:20.105 "zone_append": false, 00:18:20.105 "compare": false, 00:18:20.105 "compare_and_write": false, 00:18:20.105 "abort": true, 00:18:20.105 "seek_hole": false, 00:18:20.105 "seek_data": false, 00:18:20.105 "copy": true, 00:18:20.105 "nvme_iov_md": false 00:18:20.105 }, 00:18:20.105 "memory_domains": [ 00:18:20.105 { 00:18:20.105 "dma_device_id": "system", 00:18:20.105 "dma_device_type": 1 00:18:20.105 }, 00:18:20.105 { 00:18:20.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.105 "dma_device_type": 2 00:18:20.105 } 00:18:20.105 ], 
00:18:20.105 "driver_specific": { 00:18:20.105 "passthru": { 00:18:20.105 "name": "pt3", 00:18:20.105 "base_bdev_name": "malloc3" 00:18:20.105 } 00:18:20.105 } 00:18:20.105 }' 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:20.105 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:20.364 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.364 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.364 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:20.364 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.364 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.364 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:20.364 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.364 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.364 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.364 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:20.364 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:20.623 [2024-07-23 15:12:15.844612] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.623 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3aa7ef81-3829-4765-b408-68e72f640cf4 00:18:20.623 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 3aa7ef81-3829-4765-b408-68e72f640cf4 ']' 00:18:20.623 15:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:20.623 [2024-07-23 15:12:16.036360] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.623 [2024-07-23 15:12:16.036412] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.623 [2024-07-23 15:12:16.036515] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.623 [2024-07-23 15:12:16.036577] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.623 [2024-07-23 15:12:16.036592] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007880 name raid_bdev1, state offline 00:18:20.882 15:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:20.882 15:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.140 15:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:21.140 15:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:21.140 15:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:21.140 15:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:21.140 15:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:21.140 15:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:21.399 15:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:21.399 15:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:21.658 15:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:21.658 15:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:21.658 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:21.659 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:21.659 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:18:21.659 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:21.659 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.659 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.659 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.659 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.659 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.659 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.659 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.659 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:21.659 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:21.918 [2024-07-23 15:12:17.312669] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:21.918 [2024-07-23 15:12:17.314847] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:21.918 [2024-07-23 15:12:17.314905] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:21.918 [2024-07-23 15:12:17.314960] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:21.918 
[2024-07-23 15:12:17.315019] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:21.918 [2024-07-23 15:12:17.315051] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:21.918 [2024-07-23 15:12:17.315068] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.918 [2024-07-23 15:12:17.315081] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state configuring 00:18:21.918 request: 00:18:21.918 { 00:18:21.918 "name": "raid_bdev1", 00:18:21.918 "raid_level": "concat", 00:18:21.918 "base_bdevs": [ 00:18:21.918 "malloc1", 00:18:21.918 "malloc2", 00:18:21.918 "malloc3" 00:18:21.918 ], 00:18:21.918 "strip_size_kb": 64, 00:18:21.918 "superblock": false, 00:18:21.918 "method": "bdev_raid_create", 00:18:21.918 "req_id": 1 00:18:21.918 } 00:18:21.918 Got JSON-RPC error response 00:18:21.918 response: 00:18:21.918 { 00:18:21.918 "code": -17, 00:18:21.918 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:21.918 } 00:18:21.918 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:18:21.918 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:21.918 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:21.918 15:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:21.918 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.918 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:22.177 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:22.177 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:22.177 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:22.436 [2024-07-23 15:12:17.752645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:22.436 [2024-07-23 15:12:17.752738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.436 [2024-07-23 15:12:17.752761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:18:22.436 [2024-07-23 15:12:17.752777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.436 [2024-07-23 15:12:17.755401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.436 [2024-07-23 15:12:17.755451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:22.436 [2024-07-23 15:12:17.755527] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:22.436 [2024-07-23 15:12:17.755579] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:22.436 pt1 00:18:22.436 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:18:22.436 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:22.436 15:12:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:22.436 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:22.436 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:22.436 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:22.436 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.436 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.436 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.436 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.436 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.436 15:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.695 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.695 "name": "raid_bdev1", 00:18:22.695 "uuid": "3aa7ef81-3829-4765-b408-68e72f640cf4", 00:18:22.695 "strip_size_kb": 64, 00:18:22.695 "state": "configuring", 00:18:22.695 "raid_level": "concat", 00:18:22.695 "superblock": true, 00:18:22.695 "num_base_bdevs": 3, 00:18:22.695 "num_base_bdevs_discovered": 1, 00:18:22.695 "num_base_bdevs_operational": 3, 00:18:22.695 "base_bdevs_list": [ 00:18:22.695 { 00:18:22.695 "name": "pt1", 00:18:22.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:22.695 "is_configured": true, 00:18:22.695 "data_offset": 2048, 00:18:22.695 "data_size": 63488 00:18:22.695 }, 00:18:22.695 { 00:18:22.695 "name": null, 00:18:22.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.695 "is_configured": false, 00:18:22.696 "data_offset": 2048, 00:18:22.696 "data_size": 63488 00:18:22.696 }, 00:18:22.696 { 00:18:22.696 "name": null, 00:18:22.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:22.696 "is_configured": false, 00:18:22.696 "data_offset": 2048, 00:18:22.696 "data_size": 63488 00:18:22.696 } 00:18:22.696 ] 00:18:22.696 }' 00:18:22.696 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.696 15:12:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.955 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:18:22.955 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:23.214 [2024-07-23 15:12:18.420809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:23.214 [2024-07-23 15:12:18.420909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.214 [2024-07-23 15:12:18.420935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:18:23.214 [2024-07-23 15:12:18.420951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.214 [2024-07-23 15:12:18.421360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.214 [2024-07-23 15:12:18.421396] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:18:23.214 [2024-07-23 15:12:18.421470] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:23.214 [2024-07-23 15:12:18.421503] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:23.214 pt2 00:18:23.214 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:23.214 [2024-07-23 15:12:18.604959] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:23.214 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:18:23.214 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:23.214 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:23.214 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:23.214 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:23.214 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:23.214 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.214 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:23.214 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.214 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.214 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.214 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.473 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.473 "name": "raid_bdev1", 00:18:23.473 "uuid": "3aa7ef81-3829-4765-b408-68e72f640cf4", 00:18:23.473 "strip_size_kb": 64, 00:18:23.473 "state": "configuring", 00:18:23.473 "raid_level": "concat", 00:18:23.473 "superblock": true, 00:18:23.473 "num_base_bdevs": 3, 00:18:23.473 "num_base_bdevs_discovered": 1, 00:18:23.473 "num_base_bdevs_operational": 3, 00:18:23.473 "base_bdevs_list": [ 00:18:23.473 { 00:18:23.473 "name": "pt1", 00:18:23.473 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:23.473 "is_configured": true, 00:18:23.473 "data_offset": 2048, 00:18:23.473 "data_size": 63488 00:18:23.473 }, 00:18:23.473 { 00:18:23.473 "name": null, 00:18:23.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.473 "is_configured": false, 00:18:23.473 "data_offset": 2048, 00:18:23.473 "data_size": 63488 00:18:23.473 }, 00:18:23.473 { 00:18:23.473 "name": null, 00:18:23.473 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:23.473 "is_configured": false, 00:18:23.473 "data_offset": 2048, 00:18:23.473 "data_size": 63488 00:18:23.473 } 00:18:23.473 ] 00:18:23.473 }' 00:18:23.473 15:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.473 15:12:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.732 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:23.732 15:12:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:23.732 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:23.990 [2024-07-23 15:12:19.301047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:23.990 [2024-07-23 15:12:19.301127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.990 [2024-07-23 15:12:19.301153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:18:23.990 [2024-07-23 15:12:19.301166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.990 [2024-07-23 15:12:19.301577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.990 [2024-07-23 15:12:19.301606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:23.990 [2024-07-23 15:12:19.301681] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:23.990 [2024-07-23 15:12:19.301704] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:23.990 pt2 00:18:23.990 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:23.990 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:23.990 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:24.249 [2024-07-23 15:12:19.561089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:24.249 [2024-07-23 15:12:19.561176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.249 [2024-07-23 15:12:19.561201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:18:24.249 [2024-07-23 15:12:19.561213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.249 [2024-07-23 15:12:19.561626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.249 [2024-07-23 15:12:19.561654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:24.249 [2024-07-23 15:12:19.561733] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:24.249 [2024-07-23 15:12:19.561757] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:24.249 [2024-07-23 15:12:19.561885] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:18:24.249 [2024-07-23 15:12:19.561895] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:24.249 [2024-07-23 15:12:19.561976] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:18:24.249 [2024-07-23 15:12:19.562253] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:18:24.249 [2024-07-23 15:12:19.562279] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008a80 00:18:24.249 [2024-07-23 15:12:19.562375] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.249 pt3 00:18:24.249 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( 
i++ )) 00:18:24.249 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:24.249 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:24.249 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:24.249 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:24.249 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:24.249 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:24.249 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:24.249 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.249 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:24.249 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.249 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.249 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.249 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.507 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.507 "name": "raid_bdev1", 00:18:24.507 "uuid": "3aa7ef81-3829-4765-b408-68e72f640cf4", 00:18:24.507 "strip_size_kb": 64, 00:18:24.507 "state": "online", 00:18:24.507 "raid_level": "concat", 00:18:24.507 "superblock": true, 00:18:24.507 "num_base_bdevs": 3, 00:18:24.507 "num_base_bdevs_discovered": 3, 00:18:24.507 "num_base_bdevs_operational": 3, 00:18:24.507 "base_bdevs_list": [ 00:18:24.507 { 00:18:24.507 "name": "pt1", 00:18:24.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:24.507 "is_configured": true, 00:18:24.507 "data_offset": 2048, 00:18:24.507 "data_size": 63488 00:18:24.507 }, 00:18:24.507 { 00:18:24.507 "name": "pt2", 00:18:24.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.507 "is_configured": true, 00:18:24.507 "data_offset": 2048, 00:18:24.507 "data_size": 63488 00:18:24.507 }, 00:18:24.507 { 00:18:24.507 "name": "pt3", 00:18:24.507 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:24.507 "is_configured": true, 00:18:24.507 "data_offset": 2048, 00:18:24.507 "data_size": 63488 00:18:24.507 } 00:18:24.507 ] 00:18:24.507 }' 00:18:24.508 15:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:24.508 15:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.766 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:24.766 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:24.766 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:24.766 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:24.766 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:24.766 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
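For reference, the state check traced here reduces to one RPC call plus a jq filter. A minimal standalone sketch, reusing the rpc.py path, socket, command, and field names visible in the trace (the shell variables and echo messages are illustrative only):
# sketch only; assumes the SPDK target from this run is still listening on /var/tmp/spdk-raid.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[ "$(jq -r .state <<< "$info")" = online ]      || echo "unexpected state"
[ "$(jq -r .raid_level <<< "$info")" = concat ] || echo "unexpected raid level"
[ "$(jq -r '.base_bdevs_list | map(select(.is_configured)) | length' <<< "$info")" -eq 3 ] || echo "unexpected configured base bdev count"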
00:18:24.766 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:24.766 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:25.025 [2024-07-23 15:12:20.257526] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:25.025 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:25.025 "name": "raid_bdev1", 00:18:25.025 "aliases": [ 00:18:25.025 "3aa7ef81-3829-4765-b408-68e72f640cf4" 00:18:25.025 ], 00:18:25.025 "product_name": "Raid Volume", 00:18:25.025 "block_size": 512, 00:18:25.025 "num_blocks": 190464, 00:18:25.025 "uuid": "3aa7ef81-3829-4765-b408-68e72f640cf4", 00:18:25.025 "assigned_rate_limits": { 00:18:25.025 "rw_ios_per_sec": 0, 00:18:25.025 "rw_mbytes_per_sec": 0, 00:18:25.025 "r_mbytes_per_sec": 0, 00:18:25.025 "w_mbytes_per_sec": 0 00:18:25.025 }, 00:18:25.025 "claimed": false, 00:18:25.025 "zoned": false, 00:18:25.025 "supported_io_types": { 00:18:25.025 "read": true, 00:18:25.025 "write": true, 00:18:25.025 "unmap": true, 00:18:25.025 "flush": true, 00:18:25.025 "reset": true, 00:18:25.025 "nvme_admin": false, 00:18:25.025 "nvme_io": false, 00:18:25.025 "nvme_io_md": false, 00:18:25.025 "write_zeroes": true, 00:18:25.025 "zcopy": false, 00:18:25.025 "get_zone_info": false, 00:18:25.025 "zone_management": false, 00:18:25.025 "zone_append": false, 00:18:25.025 "compare": false, 00:18:25.025 "compare_and_write": false, 00:18:25.025 "abort": false, 00:18:25.025 "seek_hole": false, 00:18:25.025 "seek_data": false, 00:18:25.025 "copy": false, 00:18:25.025 "nvme_iov_md": false 00:18:25.025 }, 00:18:25.025 "memory_domains": [ 00:18:25.025 { 00:18:25.025 "dma_device_id": "system", 00:18:25.025 "dma_device_type": 1 00:18:25.025 }, 00:18:25.025 { 00:18:25.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.025 "dma_device_type": 2 00:18:25.025 }, 00:18:25.025 { 00:18:25.025 "dma_device_id": "system", 00:18:25.025 "dma_device_type": 1 00:18:25.025 }, 00:18:25.025 { 00:18:25.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.025 "dma_device_type": 2 00:18:25.026 }, 00:18:25.026 { 00:18:25.026 "dma_device_id": "system", 00:18:25.026 "dma_device_type": 1 00:18:25.026 }, 00:18:25.026 { 00:18:25.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.026 "dma_device_type": 2 00:18:25.026 } 00:18:25.026 ], 00:18:25.026 "driver_specific": { 00:18:25.026 "raid": { 00:18:25.026 "uuid": "3aa7ef81-3829-4765-b408-68e72f640cf4", 00:18:25.026 "strip_size_kb": 64, 00:18:25.026 "state": "online", 00:18:25.026 "raid_level": "concat", 00:18:25.026 "superblock": true, 00:18:25.026 "num_base_bdevs": 3, 00:18:25.026 "num_base_bdevs_discovered": 3, 00:18:25.026 "num_base_bdevs_operational": 3, 00:18:25.026 "base_bdevs_list": [ 00:18:25.026 { 00:18:25.026 "name": "pt1", 00:18:25.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:25.026 "is_configured": true, 00:18:25.026 "data_offset": 2048, 00:18:25.026 "data_size": 63488 00:18:25.026 }, 00:18:25.026 { 00:18:25.026 "name": "pt2", 00:18:25.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.026 "is_configured": true, 00:18:25.026 "data_offset": 2048, 00:18:25.026 "data_size": 63488 00:18:25.026 }, 00:18:25.026 { 00:18:25.026 "name": "pt3", 00:18:25.026 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:25.026 "is_configured": true, 00:18:25.026 "data_offset": 2048, 00:18:25.026 "data_size": 63488 00:18:25.026 } 
00:18:25.026 ] 00:18:25.026 } 00:18:25.026 } 00:18:25.026 }' 00:18:25.026 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:25.026 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:25.026 pt2 00:18:25.026 pt3' 00:18:25.026 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:25.026 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:25.026 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:25.286 "name": "pt1", 00:18:25.286 "aliases": [ 00:18:25.286 "00000000-0000-0000-0000-000000000001" 00:18:25.286 ], 00:18:25.286 "product_name": "passthru", 00:18:25.286 "block_size": 512, 00:18:25.286 "num_blocks": 65536, 00:18:25.286 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:25.286 "assigned_rate_limits": { 00:18:25.286 "rw_ios_per_sec": 0, 00:18:25.286 "rw_mbytes_per_sec": 0, 00:18:25.286 "r_mbytes_per_sec": 0, 00:18:25.286 "w_mbytes_per_sec": 0 00:18:25.286 }, 00:18:25.286 "claimed": true, 00:18:25.286 "claim_type": "exclusive_write", 00:18:25.286 "zoned": false, 00:18:25.286 "supported_io_types": { 00:18:25.286 "read": true, 00:18:25.286 "write": true, 00:18:25.286 "unmap": true, 00:18:25.286 "flush": true, 00:18:25.286 "reset": true, 00:18:25.286 "nvme_admin": false, 00:18:25.286 "nvme_io": false, 00:18:25.286 "nvme_io_md": false, 00:18:25.286 "write_zeroes": true, 00:18:25.286 "zcopy": true, 00:18:25.286 "get_zone_info": false, 00:18:25.286 "zone_management": false, 00:18:25.286 "zone_append": false, 00:18:25.286 "compare": false, 00:18:25.286 "compare_and_write": false, 00:18:25.286 "abort": true, 00:18:25.286 "seek_hole": false, 00:18:25.286 "seek_data": false, 00:18:25.286 "copy": true, 00:18:25.286 "nvme_iov_md": false 00:18:25.286 }, 00:18:25.286 "memory_domains": [ 00:18:25.286 { 00:18:25.286 "dma_device_id": "system", 00:18:25.286 "dma_device_type": 1 00:18:25.286 }, 00:18:25.286 { 00:18:25.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.286 "dma_device_type": 2 00:18:25.286 } 00:18:25.286 ], 00:18:25.286 "driver_specific": { 00:18:25.286 "passthru": { 00:18:25.286 "name": "pt1", 00:18:25.286 "base_bdev_name": "malloc1" 00:18:25.286 } 00:18:25.286 } 00:18:25.286 }' 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:25.286 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:25.558 "name": "pt2", 00:18:25.558 "aliases": [ 00:18:25.558 "00000000-0000-0000-0000-000000000002" 00:18:25.558 ], 00:18:25.558 "product_name": "passthru", 00:18:25.558 "block_size": 512, 00:18:25.558 "num_blocks": 65536, 00:18:25.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.558 "assigned_rate_limits": { 00:18:25.558 "rw_ios_per_sec": 0, 00:18:25.558 "rw_mbytes_per_sec": 0, 00:18:25.558 "r_mbytes_per_sec": 0, 00:18:25.558 "w_mbytes_per_sec": 0 00:18:25.558 }, 00:18:25.558 "claimed": true, 00:18:25.558 "claim_type": "exclusive_write", 00:18:25.558 "zoned": false, 00:18:25.558 "supported_io_types": { 00:18:25.558 "read": true, 00:18:25.558 "write": true, 00:18:25.558 "unmap": true, 00:18:25.558 "flush": true, 00:18:25.558 "reset": true, 00:18:25.558 "nvme_admin": false, 00:18:25.558 "nvme_io": false, 00:18:25.558 "nvme_io_md": false, 00:18:25.558 "write_zeroes": true, 00:18:25.558 "zcopy": true, 00:18:25.558 "get_zone_info": false, 00:18:25.558 "zone_management": false, 00:18:25.558 "zone_append": false, 00:18:25.558 "compare": false, 00:18:25.558 "compare_and_write": false, 00:18:25.558 "abort": true, 00:18:25.558 "seek_hole": false, 00:18:25.558 "seek_data": false, 00:18:25.558 "copy": true, 00:18:25.558 "nvme_iov_md": false 00:18:25.558 }, 00:18:25.558 "memory_domains": [ 00:18:25.558 { 00:18:25.558 "dma_device_id": "system", 00:18:25.558 "dma_device_type": 1 00:18:25.558 }, 00:18:25.558 { 00:18:25.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.558 "dma_device_type": 2 00:18:25.558 } 00:18:25.558 ], 00:18:25.558 "driver_specific": { 00:18:25.558 "passthru": { 00:18:25.558 "name": "pt2", 00:18:25.558 "base_bdev_name": "malloc2" 00:18:25.558 } 00:18:25.558 } 00:18:25.558 }' 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:25.558 15:12:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:25.558 15:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:25.817 "name": "pt3", 00:18:25.817 "aliases": [ 00:18:25.817 "00000000-0000-0000-0000-000000000003" 00:18:25.817 ], 00:18:25.817 "product_name": "passthru", 00:18:25.817 "block_size": 512, 00:18:25.817 "num_blocks": 65536, 00:18:25.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:25.817 "assigned_rate_limits": { 00:18:25.817 "rw_ios_per_sec": 0, 00:18:25.817 "rw_mbytes_per_sec": 0, 00:18:25.817 "r_mbytes_per_sec": 0, 00:18:25.817 "w_mbytes_per_sec": 0 00:18:25.817 }, 00:18:25.817 "claimed": true, 00:18:25.817 "claim_type": "exclusive_write", 00:18:25.817 "zoned": false, 00:18:25.817 "supported_io_types": { 00:18:25.817 "read": true, 00:18:25.817 "write": true, 00:18:25.817 "unmap": true, 00:18:25.817 "flush": true, 00:18:25.817 "reset": true, 00:18:25.817 "nvme_admin": false, 00:18:25.817 "nvme_io": false, 00:18:25.817 "nvme_io_md": false, 00:18:25.817 "write_zeroes": true, 00:18:25.817 "zcopy": true, 00:18:25.817 "get_zone_info": false, 00:18:25.817 "zone_management": false, 00:18:25.817 "zone_append": false, 00:18:25.817 "compare": false, 00:18:25.817 "compare_and_write": false, 00:18:25.817 "abort": true, 00:18:25.817 "seek_hole": false, 00:18:25.817 "seek_data": false, 00:18:25.817 "copy": true, 00:18:25.817 "nvme_iov_md": false 00:18:25.817 }, 00:18:25.817 "memory_domains": [ 00:18:25.817 { 00:18:25.817 "dma_device_id": "system", 00:18:25.817 "dma_device_type": 1 00:18:25.817 }, 00:18:25.817 { 00:18:25.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.817 "dma_device_type": 2 00:18:25.817 } 00:18:25.817 ], 00:18:25.817 "driver_specific": { 00:18:25.817 "passthru": { 00:18:25.817 "name": "pt3", 00:18:25.817 "base_bdev_name": "malloc3" 00:18:25.817 } 00:18:25.817 } 00:18:25.817 }' 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:25.817 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:26.077 [2024-07-23 15:12:21.369851] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 3aa7ef81-3829-4765-b408-68e72f640cf4 '!=' 3aa7ef81-3829-4765-b408-68e72f640cf4 ']' 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 95461 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 95461 ']' 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 95461 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95461 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:26.077 killing process with pid 95461 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95461' 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 95461 00:18:26.077 [2024-07-23 15:12:21.428517] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.077 15:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 95461 00:18:26.077 [2024-07-23 15:12:21.428610] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.077 [2024-07-23 15:12:21.428678] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.077 [2024-07-23 15:12:21.428692] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state offline 00:18:26.077 [2024-07-23 15:12:21.464980] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.336 15:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:26.336 00:18:26.336 real 0m10.060s 00:18:26.336 user 0m17.009s 00:18:26.336 sys 0m2.226s 00:18:26.336 15:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:26.336 15:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.336 ************************************ 00:18:26.336 END TEST raid_superblock_test 00:18:26.336 ************************************ 00:18:26.336 15:12:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:26.336 15:12:21 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:18:26.336 15:12:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:26.336 15:12:21 bdev_raid 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.336 15:12:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.596 ************************************ 00:18:26.596 START TEST raid_read_error_test 00:18:26.596 ************************************ 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.hoGfa977iu 00:18:26.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
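The bdevperf launch that follows uses the usual error-injection pattern for these tests; a condensed sketch with the flags from this run (-z keeps bdevperf idle until driven over RPC; how the harness redirects the output into the mktemp log is not visible in the xtrace, so the redirection below is an assumption):
spdk=/home/vagrant/spdk_repo/spdk
$spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > /raidtest/tmp.hoGfa977iu &
# ...create the malloc/error/passthru base bdevs and the concat raid over RPC (as traced below), then
# kick off I/O and inject read failures on the first base bdev while it runs:
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
$spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure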
00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=95878 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 95878 /var/tmp/spdk-raid.sock 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 95878 ']' 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.596 15:12:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.596 [2024-07-23 15:12:21.841945] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:18:26.596 [2024-07-23 15:12:21.842279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95878 ] 00:18:26.596 [2024-07-23 15:12:21.984812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.855 [2024-07-23 15:12:22.032961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.855 [2024-07-23 15:12:22.078984] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.424 15:12:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.424 15:12:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:27.424 15:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:27.424 15:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:27.424 BaseBdev1_malloc 00:18:27.424 15:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:27.682 true 00:18:27.682 15:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:27.941 [2024-07-23 15:12:23.211134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:27.942 [2024-07-23 15:12:23.211219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.942 [2024-07-23 15:12:23.211266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:18:27.942 [2024-07-23 15:12:23.211280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.942 [2024-07-23 15:12:23.213976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.942 
[2024-07-23 15:12:23.214019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:27.942 BaseBdev1 00:18:27.942 15:12:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:27.942 15:12:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:28.199 BaseBdev2_malloc 00:18:28.199 15:12:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:28.200 true 00:18:28.200 15:12:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:28.458 [2024-07-23 15:12:23.744871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:28.458 [2024-07-23 15:12:23.744947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.458 [2024-07-23 15:12:23.744978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:18:28.458 [2024-07-23 15:12:23.744991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.458 [2024-07-23 15:12:23.747486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.458 [2024-07-23 15:12:23.747529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:28.458 BaseBdev2 00:18:28.458 15:12:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:28.458 15:12:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:28.717 BaseBdev3_malloc 00:18:28.717 15:12:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:28.717 true 00:18:28.717 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:28.976 [2024-07-23 15:12:24.288078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:28.976 [2024-07-23 15:12:24.288151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.976 [2024-07-23 15:12:24.288195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:18:28.976 [2024-07-23 15:12:24.288207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.976 [2024-07-23 15:12:24.290692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.976 [2024-07-23 15:12:24.290735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:28.976 BaseBdev3 00:18:28.976 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:29.235 [2024-07-23 15:12:24.468196] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:18:29.235 [2024-07-23 15:12:24.470503] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:29.235 [2024-07-23 15:12:24.470608] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:29.235 [2024-07-23 15:12:24.470822] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:18:29.235 [2024-07-23 15:12:24.470844] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:29.235 [2024-07-23 15:12:24.470989] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:18:29.235 [2024-07-23 15:12:24.471331] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:18:29.235 [2024-07-23 15:12:24.471355] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:18:29.235 [2024-07-23 15:12:24.471492] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.235 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:29.236 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:29.236 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:29.236 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:29.236 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:29.236 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:29.236 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.236 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.236 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.236 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.236 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.236 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.495 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.495 "name": "raid_bdev1", 00:18:29.495 "uuid": "ddab02e3-a0fe-471a-8cea-85c3a490da16", 00:18:29.495 "strip_size_kb": 64, 00:18:29.495 "state": "online", 00:18:29.495 "raid_level": "concat", 00:18:29.495 "superblock": true, 00:18:29.495 "num_base_bdevs": 3, 00:18:29.495 "num_base_bdevs_discovered": 3, 00:18:29.495 "num_base_bdevs_operational": 3, 00:18:29.495 "base_bdevs_list": [ 00:18:29.495 { 00:18:29.495 "name": "BaseBdev1", 00:18:29.495 "uuid": "b56e57e2-35a0-5419-bd96-f75bb8494c57", 00:18:29.495 "is_configured": true, 00:18:29.495 "data_offset": 2048, 00:18:29.495 "data_size": 63488 00:18:29.495 }, 00:18:29.495 { 00:18:29.495 "name": "BaseBdev2", 00:18:29.495 "uuid": "6c5107c4-6257-568d-b379-eb462f11a7aa", 00:18:29.495 "is_configured": true, 00:18:29.495 "data_offset": 2048, 00:18:29.495 "data_size": 63488 00:18:29.495 }, 00:18:29.495 { 00:18:29.495 "name": "BaseBdev3", 00:18:29.495 "uuid": "5ce57b5d-2687-5002-be78-2482a1479980", 00:18:29.495 "is_configured": true, 00:18:29.495 
"data_offset": 2048, 00:18:29.495 "data_size": 63488 00:18:29.495 } 00:18:29.495 ] 00:18:29.495 }' 00:18:29.495 15:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.495 15:12:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.755 15:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:29.755 15:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:29.755 [2024-07-23 15:12:25.128730] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:18:30.693 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.952 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.211 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:31.211 "name": "raid_bdev1", 00:18:31.211 "uuid": "ddab02e3-a0fe-471a-8cea-85c3a490da16", 00:18:31.211 "strip_size_kb": 64, 00:18:31.211 "state": "online", 00:18:31.211 "raid_level": "concat", 00:18:31.211 "superblock": true, 00:18:31.211 "num_base_bdevs": 3, 00:18:31.211 "num_base_bdevs_discovered": 3, 00:18:31.211 "num_base_bdevs_operational": 3, 00:18:31.211 "base_bdevs_list": [ 00:18:31.211 { 00:18:31.211 "name": "BaseBdev1", 00:18:31.211 "uuid": "b56e57e2-35a0-5419-bd96-f75bb8494c57", 00:18:31.211 "is_configured": true, 00:18:31.211 "data_offset": 2048, 00:18:31.211 "data_size": 63488 00:18:31.211 }, 00:18:31.211 { 00:18:31.211 "name": "BaseBdev2", 00:18:31.211 "uuid": "6c5107c4-6257-568d-b379-eb462f11a7aa", 00:18:31.211 "is_configured": true, 00:18:31.211 "data_offset": 2048, 
00:18:31.211 "data_size": 63488 00:18:31.211 }, 00:18:31.211 { 00:18:31.211 "name": "BaseBdev3", 00:18:31.211 "uuid": "5ce57b5d-2687-5002-be78-2482a1479980", 00:18:31.211 "is_configured": true, 00:18:31.211 "data_offset": 2048, 00:18:31.211 "data_size": 63488 00:18:31.211 } 00:18:31.211 ] 00:18:31.211 }' 00:18:31.211 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:31.211 15:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.469 15:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:31.728 [2024-07-23 15:12:26.991236] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.728 [2024-07-23 15:12:26.991289] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.728 [2024-07-23 15:12:26.993719] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.728 [2024-07-23 15:12:26.993785] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.728 [2024-07-23 15:12:26.993833] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.728 [2024-07-23 15:12:26.993849] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:18:31.728 0 00:18:31.728 15:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 95878 00:18:31.728 15:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 95878 ']' 00:18:31.728 15:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 95878 00:18:31.728 15:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:18:31.728 15:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.728 15:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95878 00:18:31.728 killing process with pid 95878 00:18:31.728 15:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:31.728 15:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:31.728 15:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95878' 00:18:31.728 15:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 95878 00:18:31.728 [2024-07-23 15:12:27.050948] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:31.728 15:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 95878 00:18:31.728 [2024-07-23 15:12:27.076980] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:31.987 15:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:31.987 15:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:31.987 15:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.hoGfa977iu 00:18:31.987 15:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.54 00:18:31.987 15:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:18:31.987 15:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # 
case $1 in 00:18:31.987 15:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:31.987 15:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.54 != \0\.\0\0 ]] 00:18:31.987 00:18:31.987 real 0m5.555s 00:18:31.987 user 0m8.279s 00:18:31.987 sys 0m1.014s 00:18:31.987 ************************************ 00:18:31.987 END TEST raid_read_error_test 00:18:31.987 ************************************ 00:18:31.987 15:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:31.987 15:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.987 15:12:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:31.987 15:12:27 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:18:31.987 15:12:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:31.987 15:12:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.987 15:12:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:31.987 ************************************ 00:18:31.987 START TEST raid_write_error_test 00:18:31.987 ************************************ 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- 
# local fail_per_s 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:18:31.987 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:31.988 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.zpxX6iHcMe 00:18:31.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:31.988 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=96047 00:18:31.988 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:31.988 15:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 96047 /var/tmp/spdk-raid.sock 00:18:31.988 15:12:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 96047 ']' 00:18:31.988 15:12:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:31.988 15:12:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.988 15:12:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:31.988 15:12:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.988 15:12:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.276 [2024-07-23 15:12:27.469658] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
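For the read-error run above, the pass criterion is simply a non-zero failure rate reported by bdevperf for raid_bdev1, pulled out of the captured log with the same grep/awk pipeline the trace shows (log path taken from that run):
fail_per_s=$(grep -v Job /raidtest/tmp.hoGfa977iu | grep raid_bdev1 | awk '{print $6}')
# 0.00 would mean the injected read failures never surfaced; this run reported 0.54 failures per second
[[ $fail_per_s != "0.00" ]] && echo "injected failures observed at ${fail_per_s}/s"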
00:18:32.276 [2024-07-23 15:12:27.470191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96047 ] 00:18:32.276 [2024-07-23 15:12:27.620648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.276 [2024-07-23 15:12:27.665615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.534 [2024-07-23 15:12:27.711939] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.101 15:12:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.101 15:12:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:33.101 15:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:33.101 15:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:33.101 BaseBdev1_malloc 00:18:33.101 15:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:33.360 true 00:18:33.360 15:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:33.619 [2024-07-23 15:12:28.904010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:33.619 [2024-07-23 15:12:28.904101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.619 [2024-07-23 15:12:28.904143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:18:33.619 [2024-07-23 15:12:28.904156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.619 [2024-07-23 15:12:28.906758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.619 [2024-07-23 15:12:28.906945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:33.619 BaseBdev1 00:18:33.619 15:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:33.619 15:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:33.878 BaseBdev2_malloc 00:18:33.878 15:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:34.136 true 00:18:34.136 15:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:34.136 [2024-07-23 15:12:29.489677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:34.136 [2024-07-23 15:12:29.489957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.136 [2024-07-23 15:12:29.490026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:18:34.136 [2024-07-23 
15:12:29.490114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.136 [2024-07-23 15:12:29.492656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.136 [2024-07-23 15:12:29.492810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:34.136 BaseBdev2 00:18:34.136 15:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:34.136 15:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:34.395 BaseBdev3_malloc 00:18:34.395 15:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:34.653 true 00:18:34.653 15:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:34.913 [2024-07-23 15:12:30.115903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:34.913 [2024-07-23 15:12:30.115988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.913 [2024-07-23 15:12:30.116019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:18:34.913 [2024-07-23 15:12:30.116032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.913 [2024-07-23 15:12:30.118523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.913 [2024-07-23 15:12:30.118568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:34.913 BaseBdev3 00:18:34.913 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:34.913 [2024-07-23 15:12:30.300002] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.913 [2024-07-23 15:12:30.302212] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.913 [2024-07-23 15:12:30.302298] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:34.913 [2024-07-23 15:12:30.302498] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:18:34.913 [2024-07-23 15:12:30.302517] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:34.913 [2024-07-23 15:12:30.302675] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:18:34.913 [2024-07-23 15:12:30.303022] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:18:34.913 [2024-07-23 15:12:30.303036] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:18:34.913 [2024-07-23 15:12:30.303171] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.913 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:34.913 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:34.913 
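The write-error variant just rebuilt the same three-layer stack under each base bdev before creating the array; a compact sketch of those RPCs as they appear in the trace (the EE_ prefix is the name bdev_error_create gives the error bdev it layers on top of the malloc bdev, and -s requests the superblock, per the "superblock": true field in the dumps):
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3; do
  $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc        # 32 MiB backing bdev, 512-byte blocks
  $rpc bdev_error_create BaseBdev${i}_malloc                   # exposes EE_BaseBdev${i}_malloc
  $rpc bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
done
$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s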
15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:34.913 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:34.913 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:34.913 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:34.913 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.913 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.913 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.913 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.913 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.913 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.172 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:35.172 "name": "raid_bdev1", 00:18:35.172 "uuid": "dfb0d46e-e81f-48d0-b7f9-d1955acf70ea", 00:18:35.172 "strip_size_kb": 64, 00:18:35.172 "state": "online", 00:18:35.172 "raid_level": "concat", 00:18:35.172 "superblock": true, 00:18:35.172 "num_base_bdevs": 3, 00:18:35.172 "num_base_bdevs_discovered": 3, 00:18:35.172 "num_base_bdevs_operational": 3, 00:18:35.172 "base_bdevs_list": [ 00:18:35.172 { 00:18:35.172 "name": "BaseBdev1", 00:18:35.172 "uuid": "3b4e948a-d82b-5103-a329-e2520bd669c3", 00:18:35.172 "is_configured": true, 00:18:35.172 "data_offset": 2048, 00:18:35.172 "data_size": 63488 00:18:35.172 }, 00:18:35.172 { 00:18:35.172 "name": "BaseBdev2", 00:18:35.172 "uuid": "8c25f55c-26e1-5feb-bff1-f0c66c59739c", 00:18:35.172 "is_configured": true, 00:18:35.172 "data_offset": 2048, 00:18:35.172 "data_size": 63488 00:18:35.172 }, 00:18:35.172 { 00:18:35.172 "name": "BaseBdev3", 00:18:35.172 "uuid": "af39e5df-e261-52de-a153-f5d2836f85f8", 00:18:35.172 "is_configured": true, 00:18:35.172 "data_offset": 2048, 00:18:35.172 "data_size": 63488 00:18:35.172 } 00:18:35.172 ] 00:18:35.172 }' 00:18:35.172 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:35.172 15:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.431 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:35.431 15:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:35.431 [2024-07-23 15:12:30.844510] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:18:36.369 15:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:36.628 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:36.628 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:18:36.628 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:18:36.628 15:12:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:36.628 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:36.628 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:36.628 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:36.628 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:36.628 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:36.628 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.628 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:36.628 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:36.628 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.628 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.628 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.887 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:36.887 "name": "raid_bdev1", 00:18:36.887 "uuid": "dfb0d46e-e81f-48d0-b7f9-d1955acf70ea", 00:18:36.887 "strip_size_kb": 64, 00:18:36.887 "state": "online", 00:18:36.887 "raid_level": "concat", 00:18:36.887 "superblock": true, 00:18:36.887 "num_base_bdevs": 3, 00:18:36.887 "num_base_bdevs_discovered": 3, 00:18:36.887 "num_base_bdevs_operational": 3, 00:18:36.887 "base_bdevs_list": [ 00:18:36.887 { 00:18:36.887 "name": "BaseBdev1", 00:18:36.887 "uuid": "3b4e948a-d82b-5103-a329-e2520bd669c3", 00:18:36.887 "is_configured": true, 00:18:36.887 "data_offset": 2048, 00:18:36.887 "data_size": 63488 00:18:36.887 }, 00:18:36.887 { 00:18:36.887 "name": "BaseBdev2", 00:18:36.887 "uuid": "8c25f55c-26e1-5feb-bff1-f0c66c59739c", 00:18:36.887 "is_configured": true, 00:18:36.887 "data_offset": 2048, 00:18:36.887 "data_size": 63488 00:18:36.887 }, 00:18:36.887 { 00:18:36.887 "name": "BaseBdev3", 00:18:36.887 "uuid": "af39e5df-e261-52de-a153-f5d2836f85f8", 00:18:36.887 "is_configured": true, 00:18:36.887 "data_offset": 2048, 00:18:36.887 "data_size": 63488 00:18:36.887 } 00:18:36.887 ] 00:18:36.887 }' 00:18:36.887 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:36.887 15:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.147 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:37.406 [2024-07-23 15:12:32.686412] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:37.406 [2024-07-23 15:12:32.686729] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:37.406 [2024-07-23 15:12:32.689296] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.406 0 00:18:37.406 [2024-07-23 15:12:32.689463] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.406 [2024-07-23 15:12:32.689513] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.406 [2024-07-23 15:12:32.689528] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:18:37.406 15:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 96047 00:18:37.406 15:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 96047 ']' 00:18:37.406 15:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 96047 00:18:37.406 15:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:18:37.406 15:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:37.406 15:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96047 00:18:37.406 15:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:37.406 15:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:37.406 15:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96047' 00:18:37.406 killing process with pid 96047 00:18:37.406 15:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 96047 00:18:37.406 [2024-07-23 15:12:32.751856] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.406 15:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 96047 00:18:37.406 [2024-07-23 15:12:32.777636] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:37.674 15:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:37.674 15:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.zpxX6iHcMe 00:18:37.674 15:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:37.674 ************************************ 00:18:37.674 END TEST raid_write_error_test 00:18:37.674 ************************************ 00:18:37.674 15:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.54 00:18:37.674 15:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:18:37.674 15:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:37.674 15:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:37.674 15:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.54 != \0\.\0\0 ]] 00:18:37.674 00:18:37.674 real 0m5.642s 00:18:37.674 user 0m8.356s 00:18:37.674 sys 0m1.057s 00:18:37.674 15:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:37.674 15:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.674 15:12:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:37.674 15:12:33 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:37.674 15:12:33 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:18:37.674 15:12:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:37.674 15:12:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:37.674 15:12:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
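For readers following the trace, the raid_write_error_test flow above reduces to a short sequence of RPCs against the test target's UNIX socket. A minimal sketch of that sequence, using only commands that appear in the trace (the loop and ordering are a simplification of the test script, not its exact code):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Each base bdev is a malloc disk wrapped in an error bdev, then a passthru bdev
    for i in 1 2 3; do
        $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc      # 32 MB, 512-byte blocks
        $RPC bdev_error_create BaseBdev${i}_malloc                 # exposes EE_BaseBdev${i}_malloc
        $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # Assemble a concat raid with a superblock over the three passthru bdevs
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # Make writes through BaseBdev1 fail, then drive I/O via bdevperf's RPC helper
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
    # Tear down the array once the run completes
    $RPC bdev_raid_delete raid_bdev1

The grep/awk at the end of the test pulls the raid_bdev1 fail-per-second figure from the bdevperf output (0.54 in this run) and, because concat carries no redundancy, expects it to be non-zero once write errors are injected.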
00:18:37.674 ************************************ 00:18:37.674 START TEST raid_state_function_test 00:18:37.674 ************************************ 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=96207 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 96207' 00:18:37.674 Process raid pid: 96207 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 96207 
/var/tmp/spdk-raid.sock 00:18:37.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:37.674 15:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 96207 ']' 00:18:37.675 15:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:37.675 15:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.675 15:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:37.675 15:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.675 15:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.934 [2024-07-23 15:12:33.149960] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:18:37.934 [2024-07-23 15:12:33.150080] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.934 [2024-07-23 15:12:33.293052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.934 [2024-07-23 15:12:33.340937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.191 [2024-07-23 15:12:33.386504] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.191 15:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.191 15:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:18:38.191 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:38.450 [2024-07-23 15:12:33.680610] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:38.450 [2024-07-23 15:12:33.680676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:38.450 [2024-07-23 15:12:33.680696] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:38.450 [2024-07-23 15:12:33.680711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:38.450 [2024-07-23 15:12:33.680723] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:38.450 [2024-07-23 15:12:33.680736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:38.450 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:38.450 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:38.450 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:38.450 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:38.451 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:38.451 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 
-- # local num_base_bdevs_operational=3 00:18:38.451 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:38.451 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:38.451 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:38.451 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:38.451 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.451 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.709 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:38.709 "name": "Existed_Raid", 00:18:38.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.709 "strip_size_kb": 0, 00:18:38.709 "state": "configuring", 00:18:38.709 "raid_level": "raid1", 00:18:38.709 "superblock": false, 00:18:38.709 "num_base_bdevs": 3, 00:18:38.709 "num_base_bdevs_discovered": 0, 00:18:38.709 "num_base_bdevs_operational": 3, 00:18:38.709 "base_bdevs_list": [ 00:18:38.709 { 00:18:38.709 "name": "BaseBdev1", 00:18:38.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.709 "is_configured": false, 00:18:38.709 "data_offset": 0, 00:18:38.709 "data_size": 0 00:18:38.709 }, 00:18:38.709 { 00:18:38.709 "name": "BaseBdev2", 00:18:38.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.709 "is_configured": false, 00:18:38.709 "data_offset": 0, 00:18:38.709 "data_size": 0 00:18:38.709 }, 00:18:38.709 { 00:18:38.709 "name": "BaseBdev3", 00:18:38.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.709 "is_configured": false, 00:18:38.709 "data_offset": 0, 00:18:38.709 "data_size": 0 00:18:38.709 } 00:18:38.709 ] 00:18:38.709 }' 00:18:38.709 15:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:38.709 15:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.983 15:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:39.286 [2024-07-23 15:12:34.436638] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:39.286 [2024-07-23 15:12:34.436696] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:18:39.286 15:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:39.286 [2024-07-23 15:12:34.616724] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:39.286 [2024-07-23 15:12:34.616803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:39.286 [2024-07-23 15:12:34.616814] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:39.286 [2024-07-23 15:12:34.616827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:39.286 [2024-07-23 15:12:34.616835] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:39.286 
[2024-07-23 15:12:34.616847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:39.286 15:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:39.545 [2024-07-23 15:12:34.806643] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.545 BaseBdev1 00:18:39.545 15:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:39.545 15:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:39.545 15:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:39.545 15:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:39.545 15:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:39.545 15:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:39.545 15:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:39.804 15:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:39.804 [ 00:18:39.804 { 00:18:39.804 "name": "BaseBdev1", 00:18:39.804 "aliases": [ 00:18:39.804 "dc70e99e-8114-4301-b93b-4f4631306a8b" 00:18:39.804 ], 00:18:39.804 "product_name": "Malloc disk", 00:18:39.804 "block_size": 512, 00:18:39.804 "num_blocks": 65536, 00:18:39.804 "uuid": "dc70e99e-8114-4301-b93b-4f4631306a8b", 00:18:39.804 "assigned_rate_limits": { 00:18:39.804 "rw_ios_per_sec": 0, 00:18:39.804 "rw_mbytes_per_sec": 0, 00:18:39.804 "r_mbytes_per_sec": 0, 00:18:39.804 "w_mbytes_per_sec": 0 00:18:39.804 }, 00:18:39.804 "claimed": true, 00:18:39.804 "claim_type": "exclusive_write", 00:18:39.804 "zoned": false, 00:18:39.804 "supported_io_types": { 00:18:39.804 "read": true, 00:18:39.804 "write": true, 00:18:39.804 "unmap": true, 00:18:39.804 "flush": true, 00:18:39.804 "reset": true, 00:18:39.804 "nvme_admin": false, 00:18:39.804 "nvme_io": false, 00:18:39.804 "nvme_io_md": false, 00:18:39.804 "write_zeroes": true, 00:18:39.804 "zcopy": true, 00:18:39.804 "get_zone_info": false, 00:18:39.804 "zone_management": false, 00:18:39.804 "zone_append": false, 00:18:39.804 "compare": false, 00:18:39.804 "compare_and_write": false, 00:18:39.804 "abort": true, 00:18:39.804 "seek_hole": false, 00:18:39.804 "seek_data": false, 00:18:39.804 "copy": true, 00:18:39.804 "nvme_iov_md": false 00:18:39.804 }, 00:18:39.804 "memory_domains": [ 00:18:39.804 { 00:18:39.804 "dma_device_id": "system", 00:18:39.804 "dma_device_type": 1 00:18:39.804 }, 00:18:39.804 { 00:18:39.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.804 "dma_device_type": 2 00:18:39.804 } 00:18:39.804 ], 00:18:39.804 "driver_specific": {} 00:18:39.804 } 00:18:39.804 ] 00:18:39.804 15:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:39.804 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:39.804 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
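The verify_raid_bdev_state calls traced here boil down to one RPC plus a jq filter over its JSON output. A hand-rolled equivalent of the check being performed at this point (field names taken from the JSON dumps in this log; the one-liner is illustrative rather than the helper's exact code):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")
                 | "\(.state) \(.raid_level) strip=\(.strip_size_kb) bdevs=\(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
    # Expected while only BaseBdev1 has been created: configuring raid1 strip=0 bdevs=1/3

The helper then compares each of those fields against the expected values passed on its command line, as the trace below shows.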
00:18:39.804 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:39.804 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:39.804 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:39.804 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:39.804 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:39.804 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:39.804 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:39.804 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.064 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.064 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.064 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.064 "name": "Existed_Raid", 00:18:40.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.064 "strip_size_kb": 0, 00:18:40.064 "state": "configuring", 00:18:40.064 "raid_level": "raid1", 00:18:40.064 "superblock": false, 00:18:40.064 "num_base_bdevs": 3, 00:18:40.064 "num_base_bdevs_discovered": 1, 00:18:40.064 "num_base_bdevs_operational": 3, 00:18:40.064 "base_bdevs_list": [ 00:18:40.064 { 00:18:40.064 "name": "BaseBdev1", 00:18:40.064 "uuid": "dc70e99e-8114-4301-b93b-4f4631306a8b", 00:18:40.064 "is_configured": true, 00:18:40.064 "data_offset": 0, 00:18:40.064 "data_size": 65536 00:18:40.064 }, 00:18:40.064 { 00:18:40.064 "name": "BaseBdev2", 00:18:40.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.064 "is_configured": false, 00:18:40.064 "data_offset": 0, 00:18:40.064 "data_size": 0 00:18:40.064 }, 00:18:40.064 { 00:18:40.064 "name": "BaseBdev3", 00:18:40.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.064 "is_configured": false, 00:18:40.064 "data_offset": 0, 00:18:40.064 "data_size": 0 00:18:40.064 } 00:18:40.064 ] 00:18:40.064 }' 00:18:40.064 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.064 15:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.323 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:40.582 [2024-07-23 15:12:35.827010] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:40.582 [2024-07-23 15:12:35.827265] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:18:40.582 15:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:40.582 [2024-07-23 15:12:36.007122] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.582 [2024-07-23 15:12:36.009554] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:18:40.582 [2024-07-23 15:12:36.009718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.582 [2024-07-23 15:12:36.009830] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:40.582 [2024-07-23 15:12:36.009896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.841 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.841 "name": "Existed_Raid", 00:18:40.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.841 "strip_size_kb": 0, 00:18:40.841 "state": "configuring", 00:18:40.841 "raid_level": "raid1", 00:18:40.841 "superblock": false, 00:18:40.841 "num_base_bdevs": 3, 00:18:40.841 "num_base_bdevs_discovered": 1, 00:18:40.841 "num_base_bdevs_operational": 3, 00:18:40.841 "base_bdevs_list": [ 00:18:40.841 { 00:18:40.841 "name": "BaseBdev1", 00:18:40.841 "uuid": "dc70e99e-8114-4301-b93b-4f4631306a8b", 00:18:40.841 "is_configured": true, 00:18:40.841 "data_offset": 0, 00:18:40.841 "data_size": 65536 00:18:40.841 }, 00:18:40.841 { 00:18:40.841 "name": "BaseBdev2", 00:18:40.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.841 "is_configured": false, 00:18:40.841 "data_offset": 0, 00:18:40.841 "data_size": 0 00:18:40.841 }, 00:18:40.841 { 00:18:40.841 "name": "BaseBdev3", 00:18:40.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.842 "is_configured": false, 00:18:40.842 "data_offset": 0, 00:18:40.842 "data_size": 0 00:18:40.842 } 00:18:40.842 ] 00:18:40.842 }' 00:18:40.842 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.842 15:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.409 15:12:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:41.409 [2024-07-23 15:12:36.774466] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:41.409 BaseBdev2 00:18:41.409 15:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:41.409 15:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:41.409 15:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:41.409 15:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:41.409 15:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:41.409 15:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:41.409 15:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:41.668 15:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:41.927 [ 00:18:41.927 { 00:18:41.927 "name": "BaseBdev2", 00:18:41.927 "aliases": [ 00:18:41.927 "d68d620f-2826-44fd-a8c7-8824953f37a5" 00:18:41.927 ], 00:18:41.927 "product_name": "Malloc disk", 00:18:41.927 "block_size": 512, 00:18:41.927 "num_blocks": 65536, 00:18:41.927 "uuid": "d68d620f-2826-44fd-a8c7-8824953f37a5", 00:18:41.927 "assigned_rate_limits": { 00:18:41.927 "rw_ios_per_sec": 0, 00:18:41.927 "rw_mbytes_per_sec": 0, 00:18:41.927 "r_mbytes_per_sec": 0, 00:18:41.927 "w_mbytes_per_sec": 0 00:18:41.927 }, 00:18:41.927 "claimed": true, 00:18:41.927 "claim_type": "exclusive_write", 00:18:41.927 "zoned": false, 00:18:41.927 "supported_io_types": { 00:18:41.927 "read": true, 00:18:41.927 "write": true, 00:18:41.927 "unmap": true, 00:18:41.927 "flush": true, 00:18:41.927 "reset": true, 00:18:41.927 "nvme_admin": false, 00:18:41.927 "nvme_io": false, 00:18:41.927 "nvme_io_md": false, 00:18:41.927 "write_zeroes": true, 00:18:41.927 "zcopy": true, 00:18:41.927 "get_zone_info": false, 00:18:41.927 "zone_management": false, 00:18:41.927 "zone_append": false, 00:18:41.927 "compare": false, 00:18:41.927 "compare_and_write": false, 00:18:41.927 "abort": true, 00:18:41.927 "seek_hole": false, 00:18:41.927 "seek_data": false, 00:18:41.927 "copy": true, 00:18:41.927 "nvme_iov_md": false 00:18:41.927 }, 00:18:41.927 "memory_domains": [ 00:18:41.927 { 00:18:41.927 "dma_device_id": "system", 00:18:41.927 "dma_device_type": 1 00:18:41.927 }, 00:18:41.927 { 00:18:41.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.927 "dma_device_type": 2 00:18:41.927 } 00:18:41.927 ], 00:18:41.927 "driver_specific": {} 00:18:41.927 } 00:18:41.927 ] 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.927 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.187 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:42.187 "name": "Existed_Raid", 00:18:42.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.187 "strip_size_kb": 0, 00:18:42.187 "state": "configuring", 00:18:42.187 "raid_level": "raid1", 00:18:42.187 "superblock": false, 00:18:42.187 "num_base_bdevs": 3, 00:18:42.187 "num_base_bdevs_discovered": 2, 00:18:42.187 "num_base_bdevs_operational": 3, 00:18:42.187 "base_bdevs_list": [ 00:18:42.187 { 00:18:42.187 "name": "BaseBdev1", 00:18:42.187 "uuid": "dc70e99e-8114-4301-b93b-4f4631306a8b", 00:18:42.187 "is_configured": true, 00:18:42.187 "data_offset": 0, 00:18:42.187 "data_size": 65536 00:18:42.187 }, 00:18:42.187 { 00:18:42.187 "name": "BaseBdev2", 00:18:42.187 "uuid": "d68d620f-2826-44fd-a8c7-8824953f37a5", 00:18:42.187 "is_configured": true, 00:18:42.187 "data_offset": 0, 00:18:42.187 "data_size": 65536 00:18:42.187 }, 00:18:42.187 { 00:18:42.187 "name": "BaseBdev3", 00:18:42.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.187 "is_configured": false, 00:18:42.187 "data_offset": 0, 00:18:42.187 "data_size": 0 00:18:42.187 } 00:18:42.187 ] 00:18:42.187 }' 00:18:42.187 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:42.187 15:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.446 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:42.705 [2024-07-23 15:12:37.978419] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:42.705 [2024-07-23 15:12:37.978673] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:18:42.705 [2024-07-23 15:12:37.978736] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:42.705 [2024-07-23 15:12:37.979068] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:18:42.705 [2024-07-23 15:12:37.979577] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:18:42.705 [2024-07-23 15:12:37.979703] bdev_raid.c:1751:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:18:42.705 [2024-07-23 15:12:37.980054] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.705 BaseBdev3 00:18:42.705 15:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:42.706 15:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:42.706 15:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:42.706 15:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:42.706 15:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:42.706 15:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:42.706 15:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:42.965 15:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:43.224 [ 00:18:43.224 { 00:18:43.224 "name": "BaseBdev3", 00:18:43.224 "aliases": [ 00:18:43.224 "16fdeb5d-d7c1-4483-b1c5-ae5030f8949e" 00:18:43.224 ], 00:18:43.224 "product_name": "Malloc disk", 00:18:43.224 "block_size": 512, 00:18:43.224 "num_blocks": 65536, 00:18:43.224 "uuid": "16fdeb5d-d7c1-4483-b1c5-ae5030f8949e", 00:18:43.224 "assigned_rate_limits": { 00:18:43.224 "rw_ios_per_sec": 0, 00:18:43.224 "rw_mbytes_per_sec": 0, 00:18:43.224 "r_mbytes_per_sec": 0, 00:18:43.224 "w_mbytes_per_sec": 0 00:18:43.224 }, 00:18:43.224 "claimed": true, 00:18:43.224 "claim_type": "exclusive_write", 00:18:43.224 "zoned": false, 00:18:43.224 "supported_io_types": { 00:18:43.224 "read": true, 00:18:43.224 "write": true, 00:18:43.224 "unmap": true, 00:18:43.224 "flush": true, 00:18:43.224 "reset": true, 00:18:43.224 "nvme_admin": false, 00:18:43.224 "nvme_io": false, 00:18:43.224 "nvme_io_md": false, 00:18:43.224 "write_zeroes": true, 00:18:43.224 "zcopy": true, 00:18:43.224 "get_zone_info": false, 00:18:43.224 "zone_management": false, 00:18:43.224 "zone_append": false, 00:18:43.224 "compare": false, 00:18:43.224 "compare_and_write": false, 00:18:43.224 "abort": true, 00:18:43.224 "seek_hole": false, 00:18:43.224 "seek_data": false, 00:18:43.224 "copy": true, 00:18:43.224 "nvme_iov_md": false 00:18:43.224 }, 00:18:43.224 "memory_domains": [ 00:18:43.224 { 00:18:43.224 "dma_device_id": "system", 00:18:43.224 "dma_device_type": 1 00:18:43.224 }, 00:18:43.224 { 00:18:43.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.224 "dma_device_type": 2 00:18:43.224 } 00:18:43.224 ], 00:18:43.224 "driver_specific": {} 00:18:43.224 } 00:18:43.224 ] 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.224 15:12:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.224 "name": "Existed_Raid", 00:18:43.224 "uuid": "c4a1747a-4c8e-4807-beb8-24b12842e56d", 00:18:43.224 "strip_size_kb": 0, 00:18:43.224 "state": "online", 00:18:43.224 "raid_level": "raid1", 00:18:43.224 "superblock": false, 00:18:43.224 "num_base_bdevs": 3, 00:18:43.224 "num_base_bdevs_discovered": 3, 00:18:43.224 "num_base_bdevs_operational": 3, 00:18:43.224 "base_bdevs_list": [ 00:18:43.224 { 00:18:43.224 "name": "BaseBdev1", 00:18:43.224 "uuid": "dc70e99e-8114-4301-b93b-4f4631306a8b", 00:18:43.224 "is_configured": true, 00:18:43.224 "data_offset": 0, 00:18:43.224 "data_size": 65536 00:18:43.224 }, 00:18:43.224 { 00:18:43.224 "name": "BaseBdev2", 00:18:43.224 "uuid": "d68d620f-2826-44fd-a8c7-8824953f37a5", 00:18:43.224 "is_configured": true, 00:18:43.224 "data_offset": 0, 00:18:43.224 "data_size": 65536 00:18:43.224 }, 00:18:43.224 { 00:18:43.224 "name": "BaseBdev3", 00:18:43.224 "uuid": "16fdeb5d-d7c1-4483-b1c5-ae5030f8949e", 00:18:43.224 "is_configured": true, 00:18:43.224 "data_offset": 0, 00:18:43.224 "data_size": 65536 00:18:43.224 } 00:18:43.224 ] 00:18:43.224 }' 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.224 15:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.792 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:43.792 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:43.792 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:43.792 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:43.792 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:43.792 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:43.792 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:43.792 15:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:43.792 
[2024-07-23 15:12:39.083051] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.792 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:43.792 "name": "Existed_Raid", 00:18:43.792 "aliases": [ 00:18:43.792 "c4a1747a-4c8e-4807-beb8-24b12842e56d" 00:18:43.792 ], 00:18:43.792 "product_name": "Raid Volume", 00:18:43.792 "block_size": 512, 00:18:43.792 "num_blocks": 65536, 00:18:43.792 "uuid": "c4a1747a-4c8e-4807-beb8-24b12842e56d", 00:18:43.792 "assigned_rate_limits": { 00:18:43.792 "rw_ios_per_sec": 0, 00:18:43.792 "rw_mbytes_per_sec": 0, 00:18:43.792 "r_mbytes_per_sec": 0, 00:18:43.792 "w_mbytes_per_sec": 0 00:18:43.792 }, 00:18:43.792 "claimed": false, 00:18:43.792 "zoned": false, 00:18:43.792 "supported_io_types": { 00:18:43.792 "read": true, 00:18:43.792 "write": true, 00:18:43.792 "unmap": false, 00:18:43.792 "flush": false, 00:18:43.792 "reset": true, 00:18:43.792 "nvme_admin": false, 00:18:43.792 "nvme_io": false, 00:18:43.792 "nvme_io_md": false, 00:18:43.792 "write_zeroes": true, 00:18:43.792 "zcopy": false, 00:18:43.792 "get_zone_info": false, 00:18:43.792 "zone_management": false, 00:18:43.792 "zone_append": false, 00:18:43.792 "compare": false, 00:18:43.792 "compare_and_write": false, 00:18:43.792 "abort": false, 00:18:43.792 "seek_hole": false, 00:18:43.792 "seek_data": false, 00:18:43.792 "copy": false, 00:18:43.792 "nvme_iov_md": false 00:18:43.792 }, 00:18:43.792 "memory_domains": [ 00:18:43.792 { 00:18:43.792 "dma_device_id": "system", 00:18:43.792 "dma_device_type": 1 00:18:43.792 }, 00:18:43.792 { 00:18:43.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.792 "dma_device_type": 2 00:18:43.792 }, 00:18:43.792 { 00:18:43.792 "dma_device_id": "system", 00:18:43.792 "dma_device_type": 1 00:18:43.792 }, 00:18:43.792 { 00:18:43.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.792 "dma_device_type": 2 00:18:43.792 }, 00:18:43.792 { 00:18:43.792 "dma_device_id": "system", 00:18:43.792 "dma_device_type": 1 00:18:43.792 }, 00:18:43.792 { 00:18:43.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.792 "dma_device_type": 2 00:18:43.792 } 00:18:43.792 ], 00:18:43.792 "driver_specific": { 00:18:43.792 "raid": { 00:18:43.792 "uuid": "c4a1747a-4c8e-4807-beb8-24b12842e56d", 00:18:43.792 "strip_size_kb": 0, 00:18:43.792 "state": "online", 00:18:43.792 "raid_level": "raid1", 00:18:43.792 "superblock": false, 00:18:43.792 "num_base_bdevs": 3, 00:18:43.792 "num_base_bdevs_discovered": 3, 00:18:43.792 "num_base_bdevs_operational": 3, 00:18:43.792 "base_bdevs_list": [ 00:18:43.792 { 00:18:43.792 "name": "BaseBdev1", 00:18:43.792 "uuid": "dc70e99e-8114-4301-b93b-4f4631306a8b", 00:18:43.792 "is_configured": true, 00:18:43.792 "data_offset": 0, 00:18:43.792 "data_size": 65536 00:18:43.792 }, 00:18:43.792 { 00:18:43.792 "name": "BaseBdev2", 00:18:43.792 "uuid": "d68d620f-2826-44fd-a8c7-8824953f37a5", 00:18:43.792 "is_configured": true, 00:18:43.792 "data_offset": 0, 00:18:43.792 "data_size": 65536 00:18:43.792 }, 00:18:43.792 { 00:18:43.792 "name": "BaseBdev3", 00:18:43.792 "uuid": "16fdeb5d-d7c1-4483-b1c5-ae5030f8949e", 00:18:43.792 "is_configured": true, 00:18:43.792 "data_offset": 0, 00:18:43.792 "data_size": 65536 00:18:43.792 } 00:18:43.792 ] 00:18:43.792 } 00:18:43.792 } 00:18:43.792 }' 00:18:43.792 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:43.792 15:12:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:43.792 BaseBdev2 00:18:43.792 BaseBdev3' 00:18:43.792 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:43.792 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:43.792 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:44.052 "name": "BaseBdev1", 00:18:44.052 "aliases": [ 00:18:44.052 "dc70e99e-8114-4301-b93b-4f4631306a8b" 00:18:44.052 ], 00:18:44.052 "product_name": "Malloc disk", 00:18:44.052 "block_size": 512, 00:18:44.052 "num_blocks": 65536, 00:18:44.052 "uuid": "dc70e99e-8114-4301-b93b-4f4631306a8b", 00:18:44.052 "assigned_rate_limits": { 00:18:44.052 "rw_ios_per_sec": 0, 00:18:44.052 "rw_mbytes_per_sec": 0, 00:18:44.052 "r_mbytes_per_sec": 0, 00:18:44.052 "w_mbytes_per_sec": 0 00:18:44.052 }, 00:18:44.052 "claimed": true, 00:18:44.052 "claim_type": "exclusive_write", 00:18:44.052 "zoned": false, 00:18:44.052 "supported_io_types": { 00:18:44.052 "read": true, 00:18:44.052 "write": true, 00:18:44.052 "unmap": true, 00:18:44.052 "flush": true, 00:18:44.052 "reset": true, 00:18:44.052 "nvme_admin": false, 00:18:44.052 "nvme_io": false, 00:18:44.052 "nvme_io_md": false, 00:18:44.052 "write_zeroes": true, 00:18:44.052 "zcopy": true, 00:18:44.052 "get_zone_info": false, 00:18:44.052 "zone_management": false, 00:18:44.052 "zone_append": false, 00:18:44.052 "compare": false, 00:18:44.052 "compare_and_write": false, 00:18:44.052 "abort": true, 00:18:44.052 "seek_hole": false, 00:18:44.052 "seek_data": false, 00:18:44.052 "copy": true, 00:18:44.052 "nvme_iov_md": false 00:18:44.052 }, 00:18:44.052 "memory_domains": [ 00:18:44.052 { 00:18:44.052 "dma_device_id": "system", 00:18:44.052 "dma_device_type": 1 00:18:44.052 }, 00:18:44.052 { 00:18:44.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.052 "dma_device_type": 2 00:18:44.052 } 00:18:44.052 ], 00:18:44.052 "driver_specific": {} 00:18:44.052 }' 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:44.052 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:44.311 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:44.311 "name": "BaseBdev2", 00:18:44.311 "aliases": [ 00:18:44.311 "d68d620f-2826-44fd-a8c7-8824953f37a5" 00:18:44.311 ], 00:18:44.311 "product_name": "Malloc disk", 00:18:44.311 "block_size": 512, 00:18:44.311 "num_blocks": 65536, 00:18:44.311 "uuid": "d68d620f-2826-44fd-a8c7-8824953f37a5", 00:18:44.311 "assigned_rate_limits": { 00:18:44.311 "rw_ios_per_sec": 0, 00:18:44.311 "rw_mbytes_per_sec": 0, 00:18:44.311 "r_mbytes_per_sec": 0, 00:18:44.311 "w_mbytes_per_sec": 0 00:18:44.311 }, 00:18:44.311 "claimed": true, 00:18:44.311 "claim_type": "exclusive_write", 00:18:44.311 "zoned": false, 00:18:44.311 "supported_io_types": { 00:18:44.311 "read": true, 00:18:44.311 "write": true, 00:18:44.311 "unmap": true, 00:18:44.311 "flush": true, 00:18:44.311 "reset": true, 00:18:44.311 "nvme_admin": false, 00:18:44.311 "nvme_io": false, 00:18:44.311 "nvme_io_md": false, 00:18:44.311 "write_zeroes": true, 00:18:44.311 "zcopy": true, 00:18:44.311 "get_zone_info": false, 00:18:44.311 "zone_management": false, 00:18:44.311 "zone_append": false, 00:18:44.311 "compare": false, 00:18:44.311 "compare_and_write": false, 00:18:44.311 "abort": true, 00:18:44.311 "seek_hole": false, 00:18:44.311 "seek_data": false, 00:18:44.311 "copy": true, 00:18:44.311 "nvme_iov_md": false 00:18:44.311 }, 00:18:44.311 "memory_domains": [ 00:18:44.311 { 00:18:44.311 "dma_device_id": "system", 00:18:44.311 "dma_device_type": 1 00:18:44.311 }, 00:18:44.311 { 00:18:44.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.311 "dma_device_type": 2 00:18:44.311 } 00:18:44.311 ], 00:18:44.311 "driver_specific": {} 00:18:44.311 }' 00:18:44.311 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.311 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.311 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:44.311 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.311 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.311 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:44.311 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.311 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.312 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:44.312 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.312 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.312 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:44.312 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:44.312 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:44.312 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:44.571 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:44.571 "name": "BaseBdev3", 00:18:44.571 "aliases": [ 00:18:44.571 "16fdeb5d-d7c1-4483-b1c5-ae5030f8949e" 00:18:44.571 ], 00:18:44.571 "product_name": "Malloc disk", 00:18:44.571 "block_size": 512, 00:18:44.571 "num_blocks": 65536, 00:18:44.571 "uuid": "16fdeb5d-d7c1-4483-b1c5-ae5030f8949e", 00:18:44.571 "assigned_rate_limits": { 00:18:44.571 "rw_ios_per_sec": 0, 00:18:44.571 "rw_mbytes_per_sec": 0, 00:18:44.571 "r_mbytes_per_sec": 0, 00:18:44.571 "w_mbytes_per_sec": 0 00:18:44.571 }, 00:18:44.571 "claimed": true, 00:18:44.571 "claim_type": "exclusive_write", 00:18:44.571 "zoned": false, 00:18:44.571 "supported_io_types": { 00:18:44.571 "read": true, 00:18:44.571 "write": true, 00:18:44.571 "unmap": true, 00:18:44.571 "flush": true, 00:18:44.571 "reset": true, 00:18:44.571 "nvme_admin": false, 00:18:44.571 "nvme_io": false, 00:18:44.571 "nvme_io_md": false, 00:18:44.571 "write_zeroes": true, 00:18:44.571 "zcopy": true, 00:18:44.571 "get_zone_info": false, 00:18:44.571 "zone_management": false, 00:18:44.571 "zone_append": false, 00:18:44.571 "compare": false, 00:18:44.571 "compare_and_write": false, 00:18:44.571 "abort": true, 00:18:44.571 "seek_hole": false, 00:18:44.571 "seek_data": false, 00:18:44.571 "copy": true, 00:18:44.571 "nvme_iov_md": false 00:18:44.571 }, 00:18:44.571 "memory_domains": [ 00:18:44.571 { 00:18:44.571 "dma_device_id": "system", 00:18:44.571 "dma_device_type": 1 00:18:44.571 }, 00:18:44.571 { 00:18:44.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.571 "dma_device_type": 2 00:18:44.571 } 00:18:44.571 ], 00:18:44.571 "driver_specific": {} 00:18:44.571 }' 00:18:44.571 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.571 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.571 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:44.571 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.571 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.571 15:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:44.571 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:44.830 [2024-07-23 15:12:40.203087] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:44.830 15:12:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.830 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.089 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.089 "name": "Existed_Raid", 00:18:45.089 "uuid": "c4a1747a-4c8e-4807-beb8-24b12842e56d", 00:18:45.089 "strip_size_kb": 0, 00:18:45.089 "state": "online", 00:18:45.089 "raid_level": "raid1", 00:18:45.089 "superblock": false, 00:18:45.089 "num_base_bdevs": 3, 00:18:45.089 "num_base_bdevs_discovered": 2, 00:18:45.089 "num_base_bdevs_operational": 2, 00:18:45.089 "base_bdevs_list": [ 00:18:45.089 { 00:18:45.089 "name": null, 00:18:45.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.089 "is_configured": false, 00:18:45.089 "data_offset": 0, 00:18:45.089 "data_size": 65536 00:18:45.089 }, 00:18:45.089 { 00:18:45.089 "name": "BaseBdev2", 00:18:45.089 "uuid": "d68d620f-2826-44fd-a8c7-8824953f37a5", 00:18:45.089 "is_configured": true, 00:18:45.089 "data_offset": 0, 00:18:45.089 "data_size": 65536 00:18:45.089 }, 00:18:45.089 { 00:18:45.089 "name": "BaseBdev3", 00:18:45.089 "uuid": "16fdeb5d-d7c1-4483-b1c5-ae5030f8949e", 00:18:45.089 "is_configured": true, 00:18:45.089 "data_offset": 0, 00:18:45.089 "data_size": 65536 00:18:45.089 } 00:18:45.089 ] 00:18:45.089 }' 00:18:45.089 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.089 15:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.347 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:45.347 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:45.347 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r 
'.[0]["name"]' 00:18:45.347 15:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.931 15:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:45.931 15:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:45.931 15:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:45.931 [2024-07-23 15:12:41.299977] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:45.931 15:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:45.931 15:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:45.931 15:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.931 15:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:46.190 15:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:46.190 15:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:46.190 15:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:46.449 [2024-07-23 15:12:41.744596] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:46.449 [2024-07-23 15:12:41.744707] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.449 [2024-07-23 15:12:41.757555] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.449 [2024-07-23 15:12:41.757863] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.449 [2024-07-23 15:12:41.757896] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:18:46.449 15:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:46.449 15:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:46.449 15:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:46.449 15:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.707 15:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:46.707 15:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:46.707 15:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:46.707 15:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:46.707 15:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:46.707 15:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:46.964 
BaseBdev2 00:18:46.964 15:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:46.964 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:46.964 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:46.964 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:46.964 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:46.964 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:46.964 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:47.222 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:47.222 [ 00:18:47.222 { 00:18:47.222 "name": "BaseBdev2", 00:18:47.222 "aliases": [ 00:18:47.222 "beb0bbc8-dbe9-4a3c-8280-3843b02bc23b" 00:18:47.222 ], 00:18:47.222 "product_name": "Malloc disk", 00:18:47.222 "block_size": 512, 00:18:47.222 "num_blocks": 65536, 00:18:47.222 "uuid": "beb0bbc8-dbe9-4a3c-8280-3843b02bc23b", 00:18:47.222 "assigned_rate_limits": { 00:18:47.222 "rw_ios_per_sec": 0, 00:18:47.222 "rw_mbytes_per_sec": 0, 00:18:47.222 "r_mbytes_per_sec": 0, 00:18:47.222 "w_mbytes_per_sec": 0 00:18:47.222 }, 00:18:47.222 "claimed": false, 00:18:47.222 "zoned": false, 00:18:47.222 "supported_io_types": { 00:18:47.222 "read": true, 00:18:47.222 "write": true, 00:18:47.222 "unmap": true, 00:18:47.222 "flush": true, 00:18:47.222 "reset": true, 00:18:47.222 "nvme_admin": false, 00:18:47.222 "nvme_io": false, 00:18:47.222 "nvme_io_md": false, 00:18:47.222 "write_zeroes": true, 00:18:47.222 "zcopy": true, 00:18:47.222 "get_zone_info": false, 00:18:47.222 "zone_management": false, 00:18:47.222 "zone_append": false, 00:18:47.222 "compare": false, 00:18:47.222 "compare_and_write": false, 00:18:47.222 "abort": true, 00:18:47.222 "seek_hole": false, 00:18:47.222 "seek_data": false, 00:18:47.222 "copy": true, 00:18:47.222 "nvme_iov_md": false 00:18:47.222 }, 00:18:47.222 "memory_domains": [ 00:18:47.222 { 00:18:47.222 "dma_device_id": "system", 00:18:47.222 "dma_device_type": 1 00:18:47.222 }, 00:18:47.222 { 00:18:47.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.222 "dma_device_type": 2 00:18:47.222 } 00:18:47.222 ], 00:18:47.222 "driver_specific": {} 00:18:47.222 } 00:18:47.222 ] 00:18:47.222 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:47.222 15:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:47.222 15:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:47.222 15:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:47.482 BaseBdev3 00:18:47.482 15:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:47.482 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:47.482 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:18:47.482 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:47.482 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:47.482 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:47.482 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:47.742 15:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:47.742 [ 00:18:47.742 { 00:18:47.742 "name": "BaseBdev3", 00:18:47.742 "aliases": [ 00:18:47.742 "80f687ea-8a32-43f7-adb5-0620bb5fd38e" 00:18:47.742 ], 00:18:47.742 "product_name": "Malloc disk", 00:18:47.742 "block_size": 512, 00:18:47.742 "num_blocks": 65536, 00:18:47.742 "uuid": "80f687ea-8a32-43f7-adb5-0620bb5fd38e", 00:18:47.742 "assigned_rate_limits": { 00:18:47.742 "rw_ios_per_sec": 0, 00:18:47.742 "rw_mbytes_per_sec": 0, 00:18:47.742 "r_mbytes_per_sec": 0, 00:18:47.742 "w_mbytes_per_sec": 0 00:18:47.742 }, 00:18:47.742 "claimed": false, 00:18:47.742 "zoned": false, 00:18:47.742 "supported_io_types": { 00:18:47.742 "read": true, 00:18:47.742 "write": true, 00:18:47.742 "unmap": true, 00:18:47.742 "flush": true, 00:18:47.742 "reset": true, 00:18:47.742 "nvme_admin": false, 00:18:47.742 "nvme_io": false, 00:18:47.742 "nvme_io_md": false, 00:18:47.742 "write_zeroes": true, 00:18:47.742 "zcopy": true, 00:18:47.742 "get_zone_info": false, 00:18:47.742 "zone_management": false, 00:18:47.742 "zone_append": false, 00:18:47.742 "compare": false, 00:18:47.742 "compare_and_write": false, 00:18:47.742 "abort": true, 00:18:47.742 "seek_hole": false, 00:18:47.742 "seek_data": false, 00:18:47.742 "copy": true, 00:18:47.742 "nvme_iov_md": false 00:18:47.742 }, 00:18:47.742 "memory_domains": [ 00:18:47.742 { 00:18:47.742 "dma_device_id": "system", 00:18:47.742 "dma_device_type": 1 00:18:47.742 }, 00:18:47.742 { 00:18:47.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.742 "dma_device_type": 2 00:18:47.742 } 00:18:47.742 ], 00:18:47.742 "driver_specific": {} 00:18:47.742 } 00:18:47.742 ] 00:18:47.742 15:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:47.742 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:47.742 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:47.742 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:48.000 [2024-07-23 15:12:43.308903] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:48.000 [2024-07-23 15:12:43.308987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:48.000 [2024-07-23 15:12:43.309043] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:48.000 [2024-07-23 15:12:43.311353] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:48.000 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 
00:18:48.000 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:48.000 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:48.000 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:48.000 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:48.000 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:48.000 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:48.000 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:48.000 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:48.000 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:48.000 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.000 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.259 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:48.259 "name": "Existed_Raid", 00:18:48.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.259 "strip_size_kb": 0, 00:18:48.259 "state": "configuring", 00:18:48.259 "raid_level": "raid1", 00:18:48.259 "superblock": false, 00:18:48.259 "num_base_bdevs": 3, 00:18:48.259 "num_base_bdevs_discovered": 2, 00:18:48.259 "num_base_bdevs_operational": 3, 00:18:48.259 "base_bdevs_list": [ 00:18:48.259 { 00:18:48.259 "name": "BaseBdev1", 00:18:48.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.259 "is_configured": false, 00:18:48.259 "data_offset": 0, 00:18:48.259 "data_size": 0 00:18:48.259 }, 00:18:48.259 { 00:18:48.259 "name": "BaseBdev2", 00:18:48.259 "uuid": "beb0bbc8-dbe9-4a3c-8280-3843b02bc23b", 00:18:48.259 "is_configured": true, 00:18:48.259 "data_offset": 0, 00:18:48.259 "data_size": 65536 00:18:48.259 }, 00:18:48.260 { 00:18:48.260 "name": "BaseBdev3", 00:18:48.260 "uuid": "80f687ea-8a32-43f7-adb5-0620bb5fd38e", 00:18:48.260 "is_configured": true, 00:18:48.260 "data_offset": 0, 00:18:48.260 "data_size": 65536 00:18:48.260 } 00:18:48.260 ] 00:18:48.260 }' 00:18:48.260 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:48.260 15:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.518 15:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:48.776 [2024-07-23 15:12:44.065053] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:48.776 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:48.776 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:48.776 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:48.776 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:48.776 15:12:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:48.776 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:48.776 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:48.776 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:48.776 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:48.776 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:48.776 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.776 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.035 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.035 "name": "Existed_Raid", 00:18:49.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.035 "strip_size_kb": 0, 00:18:49.035 "state": "configuring", 00:18:49.035 "raid_level": "raid1", 00:18:49.035 "superblock": false, 00:18:49.035 "num_base_bdevs": 3, 00:18:49.035 "num_base_bdevs_discovered": 1, 00:18:49.035 "num_base_bdevs_operational": 3, 00:18:49.035 "base_bdevs_list": [ 00:18:49.035 { 00:18:49.035 "name": "BaseBdev1", 00:18:49.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.035 "is_configured": false, 00:18:49.035 "data_offset": 0, 00:18:49.035 "data_size": 0 00:18:49.035 }, 00:18:49.035 { 00:18:49.035 "name": null, 00:18:49.035 "uuid": "beb0bbc8-dbe9-4a3c-8280-3843b02bc23b", 00:18:49.035 "is_configured": false, 00:18:49.035 "data_offset": 0, 00:18:49.035 "data_size": 65536 00:18:49.035 }, 00:18:49.035 { 00:18:49.035 "name": "BaseBdev3", 00:18:49.035 "uuid": "80f687ea-8a32-43f7-adb5-0620bb5fd38e", 00:18:49.035 "is_configured": true, 00:18:49.035 "data_offset": 0, 00:18:49.035 "data_size": 65536 00:18:49.035 } 00:18:49.035 ] 00:18:49.035 }' 00:18:49.035 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.035 15:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.294 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.294 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:49.553 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:49.553 15:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:49.812 [2024-07-23 15:12:45.040734] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:49.812 BaseBdev1 00:18:49.812 15:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:49.812 15:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:49.812 15:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:49.812 15:12:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local i 00:18:49.812 15:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:49.812 15:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:49.812 15:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:50.071 15:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:50.071 [ 00:18:50.071 { 00:18:50.071 "name": "BaseBdev1", 00:18:50.071 "aliases": [ 00:18:50.071 "f096077a-bb97-4f5e-a663-6b343ee309fd" 00:18:50.071 ], 00:18:50.071 "product_name": "Malloc disk", 00:18:50.071 "block_size": 512, 00:18:50.071 "num_blocks": 65536, 00:18:50.071 "uuid": "f096077a-bb97-4f5e-a663-6b343ee309fd", 00:18:50.071 "assigned_rate_limits": { 00:18:50.071 "rw_ios_per_sec": 0, 00:18:50.071 "rw_mbytes_per_sec": 0, 00:18:50.071 "r_mbytes_per_sec": 0, 00:18:50.071 "w_mbytes_per_sec": 0 00:18:50.071 }, 00:18:50.071 "claimed": true, 00:18:50.071 "claim_type": "exclusive_write", 00:18:50.071 "zoned": false, 00:18:50.071 "supported_io_types": { 00:18:50.071 "read": true, 00:18:50.071 "write": true, 00:18:50.071 "unmap": true, 00:18:50.071 "flush": true, 00:18:50.071 "reset": true, 00:18:50.071 "nvme_admin": false, 00:18:50.071 "nvme_io": false, 00:18:50.071 "nvme_io_md": false, 00:18:50.071 "write_zeroes": true, 00:18:50.071 "zcopy": true, 00:18:50.071 "get_zone_info": false, 00:18:50.071 "zone_management": false, 00:18:50.071 "zone_append": false, 00:18:50.071 "compare": false, 00:18:50.071 "compare_and_write": false, 00:18:50.071 "abort": true, 00:18:50.071 "seek_hole": false, 00:18:50.071 "seek_data": false, 00:18:50.071 "copy": true, 00:18:50.071 "nvme_iov_md": false 00:18:50.071 }, 00:18:50.071 "memory_domains": [ 00:18:50.071 { 00:18:50.071 "dma_device_id": "system", 00:18:50.071 "dma_device_type": 1 00:18:50.071 }, 00:18:50.071 { 00:18:50.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.071 "dma_device_type": 2 00:18:50.071 } 00:18:50.071 ], 00:18:50.071 "driver_specific": {} 00:18:50.071 } 00:18:50.071 ] 00:18:50.071 15:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:50.072 15:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:50.072 15:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:50.072 15:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:50.072 15:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:50.072 15:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:50.072 15:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:50.072 15:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:50.072 15:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:50.072 15:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:50.072 15:12:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:18:50.072 15:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.072 15:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.462 15:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:50.462 "name": "Existed_Raid", 00:18:50.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.462 "strip_size_kb": 0, 00:18:50.462 "state": "configuring", 00:18:50.462 "raid_level": "raid1", 00:18:50.462 "superblock": false, 00:18:50.462 "num_base_bdevs": 3, 00:18:50.462 "num_base_bdevs_discovered": 2, 00:18:50.462 "num_base_bdevs_operational": 3, 00:18:50.462 "base_bdevs_list": [ 00:18:50.462 { 00:18:50.462 "name": "BaseBdev1", 00:18:50.462 "uuid": "f096077a-bb97-4f5e-a663-6b343ee309fd", 00:18:50.462 "is_configured": true, 00:18:50.462 "data_offset": 0, 00:18:50.462 "data_size": 65536 00:18:50.462 }, 00:18:50.462 { 00:18:50.462 "name": null, 00:18:50.462 "uuid": "beb0bbc8-dbe9-4a3c-8280-3843b02bc23b", 00:18:50.462 "is_configured": false, 00:18:50.462 "data_offset": 0, 00:18:50.462 "data_size": 65536 00:18:50.462 }, 00:18:50.462 { 00:18:50.462 "name": "BaseBdev3", 00:18:50.462 "uuid": "80f687ea-8a32-43f7-adb5-0620bb5fd38e", 00:18:50.462 "is_configured": true, 00:18:50.462 "data_offset": 0, 00:18:50.462 "data_size": 65536 00:18:50.462 } 00:18:50.462 ] 00:18:50.462 }' 00:18:50.462 15:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:50.462 15:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.761 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.761 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:51.020 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:51.020 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:51.020 [2024-07-23 15:12:46.417148] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:51.020 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:51.020 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:51.020 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:51.020 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:51.020 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:51.020 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:51.020 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:51.020 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:51.020 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:51.020 
15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:51.280 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.280 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.280 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.280 "name": "Existed_Raid", 00:18:51.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.280 "strip_size_kb": 0, 00:18:51.280 "state": "configuring", 00:18:51.280 "raid_level": "raid1", 00:18:51.280 "superblock": false, 00:18:51.280 "num_base_bdevs": 3, 00:18:51.280 "num_base_bdevs_discovered": 1, 00:18:51.280 "num_base_bdevs_operational": 3, 00:18:51.280 "base_bdevs_list": [ 00:18:51.280 { 00:18:51.280 "name": "BaseBdev1", 00:18:51.280 "uuid": "f096077a-bb97-4f5e-a663-6b343ee309fd", 00:18:51.280 "is_configured": true, 00:18:51.280 "data_offset": 0, 00:18:51.280 "data_size": 65536 00:18:51.280 }, 00:18:51.280 { 00:18:51.280 "name": null, 00:18:51.280 "uuid": "beb0bbc8-dbe9-4a3c-8280-3843b02bc23b", 00:18:51.280 "is_configured": false, 00:18:51.280 "data_offset": 0, 00:18:51.280 "data_size": 65536 00:18:51.280 }, 00:18:51.280 { 00:18:51.280 "name": null, 00:18:51.280 "uuid": "80f687ea-8a32-43f7-adb5-0620bb5fd38e", 00:18:51.280 "is_configured": false, 00:18:51.280 "data_offset": 0, 00:18:51.280 "data_size": 65536 00:18:51.280 } 00:18:51.280 ] 00:18:51.280 }' 00:18:51.280 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:51.280 15:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.539 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.539 15:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:51.798 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:51.798 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:52.057 [2024-07-23 15:12:47.378219] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:52.058 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:52.058 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:52.058 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:52.058 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:52.058 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:52.058 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:52.058 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:52.058 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:52.058 15:12:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:52.058 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:52.058 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.058 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.317 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:52.317 "name": "Existed_Raid", 00:18:52.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.317 "strip_size_kb": 0, 00:18:52.317 "state": "configuring", 00:18:52.317 "raid_level": "raid1", 00:18:52.317 "superblock": false, 00:18:52.317 "num_base_bdevs": 3, 00:18:52.317 "num_base_bdevs_discovered": 2, 00:18:52.317 "num_base_bdevs_operational": 3, 00:18:52.317 "base_bdevs_list": [ 00:18:52.317 { 00:18:52.317 "name": "BaseBdev1", 00:18:52.317 "uuid": "f096077a-bb97-4f5e-a663-6b343ee309fd", 00:18:52.317 "is_configured": true, 00:18:52.317 "data_offset": 0, 00:18:52.317 "data_size": 65536 00:18:52.317 }, 00:18:52.317 { 00:18:52.317 "name": null, 00:18:52.317 "uuid": "beb0bbc8-dbe9-4a3c-8280-3843b02bc23b", 00:18:52.317 "is_configured": false, 00:18:52.317 "data_offset": 0, 00:18:52.317 "data_size": 65536 00:18:52.317 }, 00:18:52.317 { 00:18:52.317 "name": "BaseBdev3", 00:18:52.317 "uuid": "80f687ea-8a32-43f7-adb5-0620bb5fd38e", 00:18:52.317 "is_configured": true, 00:18:52.317 "data_offset": 0, 00:18:52.317 "data_size": 65536 00:18:52.317 } 00:18:52.317 ] 00:18:52.317 }' 00:18:52.317 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:52.317 15:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.576 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.576 15:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:52.835 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:52.835 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:52.835 [2024-07-23 15:12:48.230381] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:52.835 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:52.835 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:52.835 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:52.835 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:52.835 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:52.835 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:52.835 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:52.835 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:52.835 15:12:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:52.835 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.094 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.094 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.094 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.094 "name": "Existed_Raid", 00:18:53.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.094 "strip_size_kb": 0, 00:18:53.094 "state": "configuring", 00:18:53.094 "raid_level": "raid1", 00:18:53.094 "superblock": false, 00:18:53.094 "num_base_bdevs": 3, 00:18:53.094 "num_base_bdevs_discovered": 1, 00:18:53.094 "num_base_bdevs_operational": 3, 00:18:53.094 "base_bdevs_list": [ 00:18:53.094 { 00:18:53.094 "name": null, 00:18:53.094 "uuid": "f096077a-bb97-4f5e-a663-6b343ee309fd", 00:18:53.094 "is_configured": false, 00:18:53.094 "data_offset": 0, 00:18:53.094 "data_size": 65536 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "name": null, 00:18:53.094 "uuid": "beb0bbc8-dbe9-4a3c-8280-3843b02bc23b", 00:18:53.094 "is_configured": false, 00:18:53.094 "data_offset": 0, 00:18:53.094 "data_size": 65536 00:18:53.094 }, 00:18:53.094 { 00:18:53.094 "name": "BaseBdev3", 00:18:53.094 "uuid": "80f687ea-8a32-43f7-adb5-0620bb5fd38e", 00:18:53.094 "is_configured": true, 00:18:53.094 "data_offset": 0, 00:18:53.094 "data_size": 65536 00:18:53.094 } 00:18:53.094 ] 00:18:53.094 }' 00:18:53.094 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.094 15:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.660 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.660 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:53.660 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:53.660 15:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:53.919 [2024-07-23 15:12:49.203263] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:53.919 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:53.919 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:53.919 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:53.919 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:53.919 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:53.919 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:53.919 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.919 15:12:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:53.919 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.919 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.919 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.919 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.179 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:54.179 "name": "Existed_Raid", 00:18:54.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.179 "strip_size_kb": 0, 00:18:54.179 "state": "configuring", 00:18:54.179 "raid_level": "raid1", 00:18:54.179 "superblock": false, 00:18:54.179 "num_base_bdevs": 3, 00:18:54.179 "num_base_bdevs_discovered": 2, 00:18:54.179 "num_base_bdevs_operational": 3, 00:18:54.179 "base_bdevs_list": [ 00:18:54.179 { 00:18:54.179 "name": null, 00:18:54.179 "uuid": "f096077a-bb97-4f5e-a663-6b343ee309fd", 00:18:54.179 "is_configured": false, 00:18:54.179 "data_offset": 0, 00:18:54.179 "data_size": 65536 00:18:54.179 }, 00:18:54.179 { 00:18:54.179 "name": "BaseBdev2", 00:18:54.179 "uuid": "beb0bbc8-dbe9-4a3c-8280-3843b02bc23b", 00:18:54.179 "is_configured": true, 00:18:54.179 "data_offset": 0, 00:18:54.179 "data_size": 65536 00:18:54.179 }, 00:18:54.179 { 00:18:54.179 "name": "BaseBdev3", 00:18:54.179 "uuid": "80f687ea-8a32-43f7-adb5-0620bb5fd38e", 00:18:54.179 "is_configured": true, 00:18:54.179 "data_offset": 0, 00:18:54.179 "data_size": 65536 00:18:54.179 } 00:18:54.179 ] 00:18:54.179 }' 00:18:54.179 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:54.179 15:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.438 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.438 15:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:54.697 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:54.697 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.697 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:54.957 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f096077a-bb97-4f5e-a663-6b343ee309fd 00:18:55.216 [2024-07-23 15:12:50.515029] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:55.216 [2024-07-23 15:12:50.515088] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007880 00:18:55.216 [2024-07-23 15:12:50.515097] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:55.216 [2024-07-23 15:12:50.515185] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002460 00:18:55.216 [2024-07-23 15:12:50.515456] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x516000007880 00:18:55.216 [2024-07-23 15:12:50.515471] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007880 00:18:55.216 [2024-07-23 15:12:50.515645] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.216 NewBaseBdev 00:18:55.216 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:55.216 15:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:18:55.216 15:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:55.216 15:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:55.216 15:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:55.216 15:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:55.216 15:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:55.475 15:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:55.734 [ 00:18:55.734 { 00:18:55.734 "name": "NewBaseBdev", 00:18:55.734 "aliases": [ 00:18:55.734 "f096077a-bb97-4f5e-a663-6b343ee309fd" 00:18:55.734 ], 00:18:55.734 "product_name": "Malloc disk", 00:18:55.734 "block_size": 512, 00:18:55.734 "num_blocks": 65536, 00:18:55.734 "uuid": "f096077a-bb97-4f5e-a663-6b343ee309fd", 00:18:55.734 "assigned_rate_limits": { 00:18:55.734 "rw_ios_per_sec": 0, 00:18:55.734 "rw_mbytes_per_sec": 0, 00:18:55.734 "r_mbytes_per_sec": 0, 00:18:55.734 "w_mbytes_per_sec": 0 00:18:55.734 }, 00:18:55.734 "claimed": true, 00:18:55.734 "claim_type": "exclusive_write", 00:18:55.734 "zoned": false, 00:18:55.734 "supported_io_types": { 00:18:55.734 "read": true, 00:18:55.734 "write": true, 00:18:55.734 "unmap": true, 00:18:55.734 "flush": true, 00:18:55.734 "reset": true, 00:18:55.734 "nvme_admin": false, 00:18:55.734 "nvme_io": false, 00:18:55.734 "nvme_io_md": false, 00:18:55.734 "write_zeroes": true, 00:18:55.734 "zcopy": true, 00:18:55.734 "get_zone_info": false, 00:18:55.734 "zone_management": false, 00:18:55.734 "zone_append": false, 00:18:55.734 "compare": false, 00:18:55.734 "compare_and_write": false, 00:18:55.734 "abort": true, 00:18:55.734 "seek_hole": false, 00:18:55.734 "seek_data": false, 00:18:55.734 "copy": true, 00:18:55.734 "nvme_iov_md": false 00:18:55.734 }, 00:18:55.734 "memory_domains": [ 00:18:55.734 { 00:18:55.734 "dma_device_id": "system", 00:18:55.734 "dma_device_type": 1 00:18:55.734 }, 00:18:55.734 { 00:18:55.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.734 "dma_device_type": 2 00:18:55.735 } 00:18:55.735 ], 00:18:55.735 "driver_specific": {} 00:18:55.735 } 00:18:55.735 ] 00:18:55.735 15:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:55.735 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:55.735 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:55.735 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:55.735 
15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:55.735 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:55.735 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:55.735 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:55.735 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:55.735 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:55.735 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:55.735 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.735 15:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.995 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:55.995 "name": "Existed_Raid", 00:18:55.995 "uuid": "85b64d06-2645-4d5a-931d-3a68f718fa8f", 00:18:55.995 "strip_size_kb": 0, 00:18:55.995 "state": "online", 00:18:55.995 "raid_level": "raid1", 00:18:55.995 "superblock": false, 00:18:55.995 "num_base_bdevs": 3, 00:18:55.995 "num_base_bdevs_discovered": 3, 00:18:55.995 "num_base_bdevs_operational": 3, 00:18:55.995 "base_bdevs_list": [ 00:18:55.995 { 00:18:55.995 "name": "NewBaseBdev", 00:18:55.995 "uuid": "f096077a-bb97-4f5e-a663-6b343ee309fd", 00:18:55.995 "is_configured": true, 00:18:55.995 "data_offset": 0, 00:18:55.995 "data_size": 65536 00:18:55.995 }, 00:18:55.995 { 00:18:55.995 "name": "BaseBdev2", 00:18:55.995 "uuid": "beb0bbc8-dbe9-4a3c-8280-3843b02bc23b", 00:18:55.995 "is_configured": true, 00:18:55.995 "data_offset": 0, 00:18:55.995 "data_size": 65536 00:18:55.995 }, 00:18:55.995 { 00:18:55.995 "name": "BaseBdev3", 00:18:55.995 "uuid": "80f687ea-8a32-43f7-adb5-0620bb5fd38e", 00:18:55.995 "is_configured": true, 00:18:55.995 "data_offset": 0, 00:18:55.995 "data_size": 65536 00:18:55.995 } 00:18:55.995 ] 00:18:55.995 }' 00:18:55.995 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:55.995 15:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.254 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:56.254 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:56.254 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:56.254 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:56.254 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:56.254 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:56.254 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:56.254 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:56.514 [2024-07-23 15:12:51.703674] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:56.514 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:56.514 "name": "Existed_Raid", 00:18:56.514 "aliases": [ 00:18:56.514 "85b64d06-2645-4d5a-931d-3a68f718fa8f" 00:18:56.514 ], 00:18:56.514 "product_name": "Raid Volume", 00:18:56.514 "block_size": 512, 00:18:56.514 "num_blocks": 65536, 00:18:56.514 "uuid": "85b64d06-2645-4d5a-931d-3a68f718fa8f", 00:18:56.514 "assigned_rate_limits": { 00:18:56.514 "rw_ios_per_sec": 0, 00:18:56.514 "rw_mbytes_per_sec": 0, 00:18:56.514 "r_mbytes_per_sec": 0, 00:18:56.514 "w_mbytes_per_sec": 0 00:18:56.514 }, 00:18:56.514 "claimed": false, 00:18:56.514 "zoned": false, 00:18:56.514 "supported_io_types": { 00:18:56.514 "read": true, 00:18:56.514 "write": true, 00:18:56.514 "unmap": false, 00:18:56.514 "flush": false, 00:18:56.514 "reset": true, 00:18:56.514 "nvme_admin": false, 00:18:56.514 "nvme_io": false, 00:18:56.514 "nvme_io_md": false, 00:18:56.514 "write_zeroes": true, 00:18:56.514 "zcopy": false, 00:18:56.514 "get_zone_info": false, 00:18:56.514 "zone_management": false, 00:18:56.514 "zone_append": false, 00:18:56.514 "compare": false, 00:18:56.514 "compare_and_write": false, 00:18:56.514 "abort": false, 00:18:56.514 "seek_hole": false, 00:18:56.514 "seek_data": false, 00:18:56.514 "copy": false, 00:18:56.514 "nvme_iov_md": false 00:18:56.514 }, 00:18:56.514 "memory_domains": [ 00:18:56.514 { 00:18:56.514 "dma_device_id": "system", 00:18:56.514 "dma_device_type": 1 00:18:56.514 }, 00:18:56.514 { 00:18:56.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.514 "dma_device_type": 2 00:18:56.514 }, 00:18:56.514 { 00:18:56.514 "dma_device_id": "system", 00:18:56.514 "dma_device_type": 1 00:18:56.514 }, 00:18:56.514 { 00:18:56.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.514 "dma_device_type": 2 00:18:56.514 }, 00:18:56.514 { 00:18:56.514 "dma_device_id": "system", 00:18:56.514 "dma_device_type": 1 00:18:56.514 }, 00:18:56.514 { 00:18:56.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.514 "dma_device_type": 2 00:18:56.514 } 00:18:56.514 ], 00:18:56.514 "driver_specific": { 00:18:56.514 "raid": { 00:18:56.514 "uuid": "85b64d06-2645-4d5a-931d-3a68f718fa8f", 00:18:56.514 "strip_size_kb": 0, 00:18:56.514 "state": "online", 00:18:56.514 "raid_level": "raid1", 00:18:56.514 "superblock": false, 00:18:56.514 "num_base_bdevs": 3, 00:18:56.514 "num_base_bdevs_discovered": 3, 00:18:56.514 "num_base_bdevs_operational": 3, 00:18:56.514 "base_bdevs_list": [ 00:18:56.514 { 00:18:56.514 "name": "NewBaseBdev", 00:18:56.514 "uuid": "f096077a-bb97-4f5e-a663-6b343ee309fd", 00:18:56.514 "is_configured": true, 00:18:56.514 "data_offset": 0, 00:18:56.514 "data_size": 65536 00:18:56.514 }, 00:18:56.514 { 00:18:56.514 "name": "BaseBdev2", 00:18:56.514 "uuid": "beb0bbc8-dbe9-4a3c-8280-3843b02bc23b", 00:18:56.514 "is_configured": true, 00:18:56.514 "data_offset": 0, 00:18:56.514 "data_size": 65536 00:18:56.514 }, 00:18:56.514 { 00:18:56.514 "name": "BaseBdev3", 00:18:56.514 "uuid": "80f687ea-8a32-43f7-adb5-0620bb5fd38e", 00:18:56.514 "is_configured": true, 00:18:56.514 "data_offset": 0, 00:18:56.514 "data_size": 65536 00:18:56.514 } 00:18:56.514 ] 00:18:56.514 } 00:18:56.514 } 00:18:56.514 }' 00:18:56.514 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:56.514 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 
00:18:56.514 BaseBdev2 00:18:56.514 BaseBdev3' 00:18:56.514 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:56.514 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:56.514 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:56.514 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:56.514 "name": "NewBaseBdev", 00:18:56.514 "aliases": [ 00:18:56.514 "f096077a-bb97-4f5e-a663-6b343ee309fd" 00:18:56.514 ], 00:18:56.514 "product_name": "Malloc disk", 00:18:56.514 "block_size": 512, 00:18:56.514 "num_blocks": 65536, 00:18:56.514 "uuid": "f096077a-bb97-4f5e-a663-6b343ee309fd", 00:18:56.514 "assigned_rate_limits": { 00:18:56.514 "rw_ios_per_sec": 0, 00:18:56.514 "rw_mbytes_per_sec": 0, 00:18:56.514 "r_mbytes_per_sec": 0, 00:18:56.514 "w_mbytes_per_sec": 0 00:18:56.514 }, 00:18:56.514 "claimed": true, 00:18:56.514 "claim_type": "exclusive_write", 00:18:56.514 "zoned": false, 00:18:56.514 "supported_io_types": { 00:18:56.514 "read": true, 00:18:56.514 "write": true, 00:18:56.514 "unmap": true, 00:18:56.514 "flush": true, 00:18:56.514 "reset": true, 00:18:56.514 "nvme_admin": false, 00:18:56.514 "nvme_io": false, 00:18:56.514 "nvme_io_md": false, 00:18:56.514 "write_zeroes": true, 00:18:56.514 "zcopy": true, 00:18:56.514 "get_zone_info": false, 00:18:56.514 "zone_management": false, 00:18:56.514 "zone_append": false, 00:18:56.514 "compare": false, 00:18:56.514 "compare_and_write": false, 00:18:56.514 "abort": true, 00:18:56.514 "seek_hole": false, 00:18:56.514 "seek_data": false, 00:18:56.514 "copy": true, 00:18:56.514 "nvme_iov_md": false 00:18:56.514 }, 00:18:56.514 "memory_domains": [ 00:18:56.514 { 00:18:56.514 "dma_device_id": "system", 00:18:56.514 "dma_device_type": 1 00:18:56.514 }, 00:18:56.514 { 00:18:56.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.514 "dma_device_type": 2 00:18:56.514 } 00:18:56.514 ], 00:18:56.514 "driver_specific": {} 00:18:56.514 }' 00:18:56.514 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:56.514 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:56.514 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:56.514 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:56.774 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:56.774 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:56.774 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:56.774 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:56.774 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:56.774 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:56.774 15:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:56.774 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:56.774 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:56.774 15:12:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:56.774 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:57.033 "name": "BaseBdev2", 00:18:57.033 "aliases": [ 00:18:57.033 "beb0bbc8-dbe9-4a3c-8280-3843b02bc23b" 00:18:57.033 ], 00:18:57.033 "product_name": "Malloc disk", 00:18:57.033 "block_size": 512, 00:18:57.033 "num_blocks": 65536, 00:18:57.033 "uuid": "beb0bbc8-dbe9-4a3c-8280-3843b02bc23b", 00:18:57.033 "assigned_rate_limits": { 00:18:57.033 "rw_ios_per_sec": 0, 00:18:57.033 "rw_mbytes_per_sec": 0, 00:18:57.033 "r_mbytes_per_sec": 0, 00:18:57.033 "w_mbytes_per_sec": 0 00:18:57.033 }, 00:18:57.033 "claimed": true, 00:18:57.033 "claim_type": "exclusive_write", 00:18:57.033 "zoned": false, 00:18:57.033 "supported_io_types": { 00:18:57.033 "read": true, 00:18:57.033 "write": true, 00:18:57.033 "unmap": true, 00:18:57.033 "flush": true, 00:18:57.033 "reset": true, 00:18:57.033 "nvme_admin": false, 00:18:57.033 "nvme_io": false, 00:18:57.033 "nvme_io_md": false, 00:18:57.033 "write_zeroes": true, 00:18:57.033 "zcopy": true, 00:18:57.033 "get_zone_info": false, 00:18:57.033 "zone_management": false, 00:18:57.033 "zone_append": false, 00:18:57.033 "compare": false, 00:18:57.033 "compare_and_write": false, 00:18:57.033 "abort": true, 00:18:57.033 "seek_hole": false, 00:18:57.033 "seek_data": false, 00:18:57.033 "copy": true, 00:18:57.033 "nvme_iov_md": false 00:18:57.033 }, 00:18:57.033 "memory_domains": [ 00:18:57.033 { 00:18:57.033 "dma_device_id": "system", 00:18:57.033 "dma_device_type": 1 00:18:57.033 }, 00:18:57.033 { 00:18:57.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.033 "dma_device_type": 2 00:18:57.033 } 00:18:57.033 ], 00:18:57.033 "driver_specific": {} 00:18:57.033 }' 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:57.033 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:57.033 15:12:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:57.291 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:57.291 "name": "BaseBdev3", 00:18:57.291 "aliases": [ 00:18:57.291 "80f687ea-8a32-43f7-adb5-0620bb5fd38e" 00:18:57.291 ], 00:18:57.291 "product_name": "Malloc disk", 00:18:57.291 "block_size": 512, 00:18:57.291 "num_blocks": 65536, 00:18:57.291 "uuid": "80f687ea-8a32-43f7-adb5-0620bb5fd38e", 00:18:57.291 "assigned_rate_limits": { 00:18:57.291 "rw_ios_per_sec": 0, 00:18:57.291 "rw_mbytes_per_sec": 0, 00:18:57.291 "r_mbytes_per_sec": 0, 00:18:57.291 "w_mbytes_per_sec": 0 00:18:57.291 }, 00:18:57.291 "claimed": true, 00:18:57.291 "claim_type": "exclusive_write", 00:18:57.291 "zoned": false, 00:18:57.291 "supported_io_types": { 00:18:57.291 "read": true, 00:18:57.291 "write": true, 00:18:57.291 "unmap": true, 00:18:57.291 "flush": true, 00:18:57.291 "reset": true, 00:18:57.291 "nvme_admin": false, 00:18:57.291 "nvme_io": false, 00:18:57.291 "nvme_io_md": false, 00:18:57.291 "write_zeroes": true, 00:18:57.291 "zcopy": true, 00:18:57.291 "get_zone_info": false, 00:18:57.291 "zone_management": false, 00:18:57.291 "zone_append": false, 00:18:57.291 "compare": false, 00:18:57.291 "compare_and_write": false, 00:18:57.291 "abort": true, 00:18:57.291 "seek_hole": false, 00:18:57.291 "seek_data": false, 00:18:57.291 "copy": true, 00:18:57.291 "nvme_iov_md": false 00:18:57.291 }, 00:18:57.291 "memory_domains": [ 00:18:57.291 { 00:18:57.291 "dma_device_id": "system", 00:18:57.291 "dma_device_type": 1 00:18:57.291 }, 00:18:57.291 { 00:18:57.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.291 "dma_device_type": 2 00:18:57.291 } 00:18:57.291 ], 00:18:57.291 "driver_specific": {} 00:18:57.291 }' 00:18:57.291 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:57.291 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:57.292 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:57.292 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:57.292 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:57.292 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:57.292 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:57.292 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:57.292 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:57.292 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:57.292 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:57.292 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:57.292 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:57.550 [2024-07-23 15:12:52.959630] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:57.550 [2024-07-23 15:12:52.959673] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:57.550 [2024-07-23 15:12:52.959751] bdev_raid.c: 486:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:18:57.550 [2024-07-23 15:12:52.960026] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.550 [2024-07-23 15:12:52.960040] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007880 name Existed_Raid, state offline 00:18:57.550 15:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 96207 00:18:57.550 15:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 96207 ']' 00:18:57.550 15:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 96207 00:18:57.809 15:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:18:57.809 15:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:57.809 15:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96207 00:18:57.809 15:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:57.809 15:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:57.809 15:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96207' 00:18:57.809 killing process with pid 96207 00:18:57.809 15:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 96207 00:18:57.809 [2024-07-23 15:12:53.016637] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:57.809 15:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 96207 00:18:57.809 [2024-07-23 15:12:53.052441] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:58.068 00:18:58.068 real 0m20.209s 00:18:58.068 user 0m35.578s 00:18:58.068 sys 0m4.431s 00:18:58.068 ************************************ 00:18:58.068 END TEST raid_state_function_test 00:18:58.068 ************************************ 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.068 15:12:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:58.068 15:12:53 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:18:58.068 15:12:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:58.068 15:12:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:58.068 15:12:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.068 ************************************ 00:18:58.068 START TEST raid_state_function_test_sb 00:18:58.068 ************************************ 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=97050 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 97050' 00:18:58.068 Process raid pid: 97050 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 97050 /var/tmp/spdk-raid.sock 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 97050 ']' 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.068 15:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.068 [2024-07-23 15:12:53.439301] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:18:58.068 [2024-07-23 15:12:53.439494] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.327 [2024-07-23 15:12:53.592952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.327 [2024-07-23 15:12:53.641910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.327 [2024-07-23 15:12:53.687422] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.263 15:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.263 15:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:18:59.263 15:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:59.263 [2024-07-23 15:12:54.513581] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:59.263 [2024-07-23 15:12:54.513642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:59.263 [2024-07-23 15:12:54.513654] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.263 [2024-07-23 15:12:54.513668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.263 [2024-07-23 15:12:54.513680] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:59.263 [2024-07-23 15:12:54.513694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:59.263 15:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:59.263 15:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:59.263 15:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:59.264 15:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:59.264 15:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:59.264 15:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:59.264 15:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:59.264 15:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:59.264 15:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:59.264 15:12:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:18:59.264 15:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.264 15:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.522 15:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.522 "name": "Existed_Raid", 00:18:59.522 "uuid": "308f4dc9-4a53-4564-a2af-ac46c272f3ed", 00:18:59.522 "strip_size_kb": 0, 00:18:59.522 "state": "configuring", 00:18:59.522 "raid_level": "raid1", 00:18:59.522 "superblock": true, 00:18:59.522 "num_base_bdevs": 3, 00:18:59.522 "num_base_bdevs_discovered": 0, 00:18:59.522 "num_base_bdevs_operational": 3, 00:18:59.522 "base_bdevs_list": [ 00:18:59.522 { 00:18:59.522 "name": "BaseBdev1", 00:18:59.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.522 "is_configured": false, 00:18:59.522 "data_offset": 0, 00:18:59.522 "data_size": 0 00:18:59.522 }, 00:18:59.522 { 00:18:59.522 "name": "BaseBdev2", 00:18:59.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.522 "is_configured": false, 00:18:59.522 "data_offset": 0, 00:18:59.522 "data_size": 0 00:18:59.522 }, 00:18:59.522 { 00:18:59.522 "name": "BaseBdev3", 00:18:59.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.522 "is_configured": false, 00:18:59.522 "data_offset": 0, 00:18:59.522 "data_size": 0 00:18:59.522 } 00:18:59.522 ] 00:18:59.522 }' 00:18:59.522 15:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.522 15:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.821 15:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:00.082 [2024-07-23 15:12:55.277578] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:00.082 [2024-07-23 15:12:55.277639] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:19:00.082 15:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:00.082 [2024-07-23 15:12:55.461669] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:00.082 [2024-07-23 15:12:55.461730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:00.082 [2024-07-23 15:12:55.461742] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.082 [2024-07-23 15:12:55.461755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.082 [2024-07-23 15:12:55.461762] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:00.082 [2024-07-23 15:12:55.461775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:00.082 15:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:00.340 [2024-07-23 15:12:55.707517] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:19:00.340 BaseBdev1 00:19:00.340 15:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:00.340 15:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:00.340 15:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:00.340 15:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:00.340 15:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:00.340 15:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:00.340 15:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:00.597 15:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:00.855 [ 00:19:00.855 { 00:19:00.855 "name": "BaseBdev1", 00:19:00.855 "aliases": [ 00:19:00.855 "8e6299c0-5735-4d49-be92-f2027e5f4e02" 00:19:00.855 ], 00:19:00.855 "product_name": "Malloc disk", 00:19:00.855 "block_size": 512, 00:19:00.855 "num_blocks": 65536, 00:19:00.855 "uuid": "8e6299c0-5735-4d49-be92-f2027e5f4e02", 00:19:00.855 "assigned_rate_limits": { 00:19:00.855 "rw_ios_per_sec": 0, 00:19:00.855 "rw_mbytes_per_sec": 0, 00:19:00.855 "r_mbytes_per_sec": 0, 00:19:00.855 "w_mbytes_per_sec": 0 00:19:00.855 }, 00:19:00.855 "claimed": true, 00:19:00.855 "claim_type": "exclusive_write", 00:19:00.855 "zoned": false, 00:19:00.855 "supported_io_types": { 00:19:00.855 "read": true, 00:19:00.855 "write": true, 00:19:00.855 "unmap": true, 00:19:00.855 "flush": true, 00:19:00.855 "reset": true, 00:19:00.855 "nvme_admin": false, 00:19:00.855 "nvme_io": false, 00:19:00.855 "nvme_io_md": false, 00:19:00.855 "write_zeroes": true, 00:19:00.855 "zcopy": true, 00:19:00.855 "get_zone_info": false, 00:19:00.855 "zone_management": false, 00:19:00.855 "zone_append": false, 00:19:00.855 "compare": false, 00:19:00.855 "compare_and_write": false, 00:19:00.855 "abort": true, 00:19:00.855 "seek_hole": false, 00:19:00.855 "seek_data": false, 00:19:00.855 "copy": true, 00:19:00.855 "nvme_iov_md": false 00:19:00.855 }, 00:19:00.855 "memory_domains": [ 00:19:00.855 { 00:19:00.855 "dma_device_id": "system", 00:19:00.855 "dma_device_type": 1 00:19:00.855 }, 00:19:00.855 { 00:19:00.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.855 "dma_device_type": 2 00:19:00.855 } 00:19:00.855 ], 00:19:00.855 "driver_specific": {} 00:19:00.855 } 00:19:00.855 ] 00:19:00.855 15:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:00.855 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:00.855 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:00.855 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:00.855 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:00.855 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:00.855 15:12:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:00.855 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.855 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.855 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.855 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.855 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.855 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.113 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.113 "name": "Existed_Raid", 00:19:01.113 "uuid": "07058742-7944-4786-8086-125ac30aca3d", 00:19:01.113 "strip_size_kb": 0, 00:19:01.113 "state": "configuring", 00:19:01.113 "raid_level": "raid1", 00:19:01.113 "superblock": true, 00:19:01.113 "num_base_bdevs": 3, 00:19:01.113 "num_base_bdevs_discovered": 1, 00:19:01.113 "num_base_bdevs_operational": 3, 00:19:01.113 "base_bdevs_list": [ 00:19:01.113 { 00:19:01.113 "name": "BaseBdev1", 00:19:01.113 "uuid": "8e6299c0-5735-4d49-be92-f2027e5f4e02", 00:19:01.113 "is_configured": true, 00:19:01.113 "data_offset": 2048, 00:19:01.113 "data_size": 63488 00:19:01.113 }, 00:19:01.113 { 00:19:01.113 "name": "BaseBdev2", 00:19:01.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.113 "is_configured": false, 00:19:01.113 "data_offset": 0, 00:19:01.113 "data_size": 0 00:19:01.113 }, 00:19:01.113 { 00:19:01.113 "name": "BaseBdev3", 00:19:01.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.113 "is_configured": false, 00:19:01.113 "data_offset": 0, 00:19:01.113 "data_size": 0 00:19:01.113 } 00:19:01.113 ] 00:19:01.113 }' 00:19:01.113 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.113 15:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.371 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:01.630 [2024-07-23 15:12:56.811830] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:01.630 [2024-07-23 15:12:56.811898] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:19:01.630 15:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:01.630 [2024-07-23 15:12:56.995957] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.630 [2024-07-23 15:12:56.998164] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:01.630 [2024-07-23 15:12:56.998212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:01.630 [2024-07-23 15:12:56.998224] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:01.630 [2024-07-23 15:12:56.998237] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:01.630 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:01.630 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:01.630 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:01.630 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:01.630 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:01.630 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:01.630 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:01.630 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:01.630 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:01.630 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:01.630 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:01.630 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:01.630 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.630 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.889 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.889 "name": "Existed_Raid", 00:19:01.889 "uuid": "c27ffcca-ecdd-414d-8b8b-f5fe64c671ee", 00:19:01.889 "strip_size_kb": 0, 00:19:01.889 "state": "configuring", 00:19:01.889 "raid_level": "raid1", 00:19:01.889 "superblock": true, 00:19:01.889 "num_base_bdevs": 3, 00:19:01.889 "num_base_bdevs_discovered": 1, 00:19:01.889 "num_base_bdevs_operational": 3, 00:19:01.889 "base_bdevs_list": [ 00:19:01.889 { 00:19:01.889 "name": "BaseBdev1", 00:19:01.889 "uuid": "8e6299c0-5735-4d49-be92-f2027e5f4e02", 00:19:01.889 "is_configured": true, 00:19:01.889 "data_offset": 2048, 00:19:01.889 "data_size": 63488 00:19:01.889 }, 00:19:01.889 { 00:19:01.889 "name": "BaseBdev2", 00:19:01.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.889 "is_configured": false, 00:19:01.889 "data_offset": 0, 00:19:01.889 "data_size": 0 00:19:01.889 }, 00:19:01.889 { 00:19:01.889 "name": "BaseBdev3", 00:19:01.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.889 "is_configured": false, 00:19:01.889 "data_offset": 0, 00:19:01.889 "data_size": 0 00:19:01.889 } 00:19:01.889 ] 00:19:01.889 }' 00:19:01.889 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.889 15:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.148 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:02.408 [2024-07-23 15:12:57.784443] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:19:02.408 BaseBdev2 00:19:02.408 15:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:02.408 15:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:02.408 15:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:02.408 15:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:02.408 15:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:02.408 15:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:02.408 15:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.667 15:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:02.926 [ 00:19:02.926 { 00:19:02.926 "name": "BaseBdev2", 00:19:02.926 "aliases": [ 00:19:02.926 "fc20a5ef-4dd2-4775-a2da-e8306ddc330e" 00:19:02.926 ], 00:19:02.926 "product_name": "Malloc disk", 00:19:02.926 "block_size": 512, 00:19:02.926 "num_blocks": 65536, 00:19:02.926 "uuid": "fc20a5ef-4dd2-4775-a2da-e8306ddc330e", 00:19:02.926 "assigned_rate_limits": { 00:19:02.926 "rw_ios_per_sec": 0, 00:19:02.926 "rw_mbytes_per_sec": 0, 00:19:02.926 "r_mbytes_per_sec": 0, 00:19:02.926 "w_mbytes_per_sec": 0 00:19:02.926 }, 00:19:02.926 "claimed": true, 00:19:02.926 "claim_type": "exclusive_write", 00:19:02.926 "zoned": false, 00:19:02.926 "supported_io_types": { 00:19:02.926 "read": true, 00:19:02.926 "write": true, 00:19:02.926 "unmap": true, 00:19:02.926 "flush": true, 00:19:02.926 "reset": true, 00:19:02.926 "nvme_admin": false, 00:19:02.926 "nvme_io": false, 00:19:02.926 "nvme_io_md": false, 00:19:02.926 "write_zeroes": true, 00:19:02.926 "zcopy": true, 00:19:02.926 "get_zone_info": false, 00:19:02.926 "zone_management": false, 00:19:02.926 "zone_append": false, 00:19:02.926 "compare": false, 00:19:02.926 "compare_and_write": false, 00:19:02.926 "abort": true, 00:19:02.926 "seek_hole": false, 00:19:02.926 "seek_data": false, 00:19:02.926 "copy": true, 00:19:02.926 "nvme_iov_md": false 00:19:02.926 }, 00:19:02.926 "memory_domains": [ 00:19:02.926 { 00:19:02.926 "dma_device_id": "system", 00:19:02.926 "dma_device_type": 1 00:19:02.926 }, 00:19:02.926 { 00:19:02.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.926 "dma_device_type": 2 00:19:02.926 } 00:19:02.926 ], 00:19:02.926 "driver_specific": {} 00:19:02.926 } 00:19:02.926 ] 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.926 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.185 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.185 "name": "Existed_Raid", 00:19:03.185 "uuid": "c27ffcca-ecdd-414d-8b8b-f5fe64c671ee", 00:19:03.185 "strip_size_kb": 0, 00:19:03.185 "state": "configuring", 00:19:03.185 "raid_level": "raid1", 00:19:03.185 "superblock": true, 00:19:03.185 "num_base_bdevs": 3, 00:19:03.185 "num_base_bdevs_discovered": 2, 00:19:03.185 "num_base_bdevs_operational": 3, 00:19:03.185 "base_bdevs_list": [ 00:19:03.185 { 00:19:03.185 "name": "BaseBdev1", 00:19:03.185 "uuid": "8e6299c0-5735-4d49-be92-f2027e5f4e02", 00:19:03.185 "is_configured": true, 00:19:03.185 "data_offset": 2048, 00:19:03.185 "data_size": 63488 00:19:03.185 }, 00:19:03.185 { 00:19:03.185 "name": "BaseBdev2", 00:19:03.185 "uuid": "fc20a5ef-4dd2-4775-a2da-e8306ddc330e", 00:19:03.185 "is_configured": true, 00:19:03.185 "data_offset": 2048, 00:19:03.185 "data_size": 63488 00:19:03.185 }, 00:19:03.185 { 00:19:03.185 "name": "BaseBdev3", 00:19:03.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.185 "is_configured": false, 00:19:03.185 "data_offset": 0, 00:19:03.185 "data_size": 0 00:19:03.185 } 00:19:03.185 ] 00:19:03.185 }' 00:19:03.185 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.185 15:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.445 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:03.445 [2024-07-23 15:12:58.812234] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:03.445 [2024-07-23 15:12:58.812452] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:19:03.445 [2024-07-23 15:12:58.812480] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:03.445 [2024-07-23 15:12:58.812593] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:19:03.445 [2024-07-23 15:12:58.812951] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:19:03.445 [2024-07-23 15:12:58.812974] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:19:03.445 [2024-07-23 15:12:58.813090] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:03.445 BaseBdev3 00:19:03.445 15:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:03.445 15:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:03.445 15:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:03.445 15:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:03.445 15:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:03.445 15:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:03.445 15:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:03.704 15:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:03.963 [ 00:19:03.963 { 00:19:03.963 "name": "BaseBdev3", 00:19:03.964 "aliases": [ 00:19:03.964 "a2733b14-5844-4a6e-938f-e7d4c0b3ca60" 00:19:03.964 ], 00:19:03.964 "product_name": "Malloc disk", 00:19:03.964 "block_size": 512, 00:19:03.964 "num_blocks": 65536, 00:19:03.964 "uuid": "a2733b14-5844-4a6e-938f-e7d4c0b3ca60", 00:19:03.964 "assigned_rate_limits": { 00:19:03.964 "rw_ios_per_sec": 0, 00:19:03.964 "rw_mbytes_per_sec": 0, 00:19:03.964 "r_mbytes_per_sec": 0, 00:19:03.964 "w_mbytes_per_sec": 0 00:19:03.964 }, 00:19:03.964 "claimed": true, 00:19:03.964 "claim_type": "exclusive_write", 00:19:03.964 "zoned": false, 00:19:03.964 "supported_io_types": { 00:19:03.964 "read": true, 00:19:03.964 "write": true, 00:19:03.964 "unmap": true, 00:19:03.964 "flush": true, 00:19:03.964 "reset": true, 00:19:03.964 "nvme_admin": false, 00:19:03.964 "nvme_io": false, 00:19:03.964 "nvme_io_md": false, 00:19:03.964 "write_zeroes": true, 00:19:03.964 "zcopy": true, 00:19:03.964 "get_zone_info": false, 00:19:03.964 "zone_management": false, 00:19:03.964 "zone_append": false, 00:19:03.964 "compare": false, 00:19:03.964 "compare_and_write": false, 00:19:03.964 "abort": true, 00:19:03.964 "seek_hole": false, 00:19:03.964 "seek_data": false, 00:19:03.964 "copy": true, 00:19:03.964 "nvme_iov_md": false 00:19:03.964 }, 00:19:03.964 "memory_domains": [ 00:19:03.964 { 00:19:03.964 "dma_device_id": "system", 00:19:03.964 "dma_device_type": 1 00:19:03.964 }, 00:19:03.964 { 00:19:03.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.964 "dma_device_type": 2 00:19:03.964 } 00:19:03.964 ], 00:19:03.964 "driver_specific": {} 00:19:03.964 } 00:19:03.964 ] 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.964 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.223 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.223 "name": "Existed_Raid", 00:19:04.223 "uuid": "c27ffcca-ecdd-414d-8b8b-f5fe64c671ee", 00:19:04.223 "strip_size_kb": 0, 00:19:04.223 "state": "online", 00:19:04.223 "raid_level": "raid1", 00:19:04.223 "superblock": true, 00:19:04.223 "num_base_bdevs": 3, 00:19:04.223 "num_base_bdevs_discovered": 3, 00:19:04.223 "num_base_bdevs_operational": 3, 00:19:04.223 "base_bdevs_list": [ 00:19:04.223 { 00:19:04.223 "name": "BaseBdev1", 00:19:04.223 "uuid": "8e6299c0-5735-4d49-be92-f2027e5f4e02", 00:19:04.224 "is_configured": true, 00:19:04.224 "data_offset": 2048, 00:19:04.224 "data_size": 63488 00:19:04.224 }, 00:19:04.224 { 00:19:04.224 "name": "BaseBdev2", 00:19:04.224 "uuid": "fc20a5ef-4dd2-4775-a2da-e8306ddc330e", 00:19:04.224 "is_configured": true, 00:19:04.224 "data_offset": 2048, 00:19:04.224 "data_size": 63488 00:19:04.224 }, 00:19:04.224 { 00:19:04.224 "name": "BaseBdev3", 00:19:04.224 "uuid": "a2733b14-5844-4a6e-938f-e7d4c0b3ca60", 00:19:04.224 "is_configured": true, 00:19:04.224 "data_offset": 2048, 00:19:04.224 "data_size": 63488 00:19:04.224 } 00:19:04.224 ] 00:19:04.224 }' 00:19:04.224 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.224 15:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.483 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:04.483 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:04.483 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:04.483 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:04.483 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:04.483 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:04.483 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:04.483 15:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:04.742 [2024-07-23 15:12:59.992865] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:04.742 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:04.742 "name": "Existed_Raid", 00:19:04.742 "aliases": [ 00:19:04.742 "c27ffcca-ecdd-414d-8b8b-f5fe64c671ee" 00:19:04.742 ], 00:19:04.742 "product_name": "Raid Volume", 00:19:04.742 "block_size": 512, 00:19:04.742 "num_blocks": 63488, 00:19:04.742 "uuid": "c27ffcca-ecdd-414d-8b8b-f5fe64c671ee", 00:19:04.742 "assigned_rate_limits": { 00:19:04.742 "rw_ios_per_sec": 0, 00:19:04.742 "rw_mbytes_per_sec": 0, 00:19:04.742 "r_mbytes_per_sec": 0, 00:19:04.742 "w_mbytes_per_sec": 0 00:19:04.742 }, 00:19:04.742 "claimed": false, 00:19:04.742 "zoned": false, 00:19:04.742 "supported_io_types": { 00:19:04.742 "read": true, 00:19:04.742 "write": true, 00:19:04.742 "unmap": false, 00:19:04.742 "flush": false, 00:19:04.742 "reset": true, 00:19:04.742 "nvme_admin": false, 00:19:04.742 "nvme_io": false, 00:19:04.742 "nvme_io_md": false, 00:19:04.742 "write_zeroes": true, 00:19:04.742 "zcopy": false, 00:19:04.742 "get_zone_info": false, 00:19:04.742 "zone_management": false, 00:19:04.742 "zone_append": false, 00:19:04.742 "compare": false, 00:19:04.742 "compare_and_write": false, 00:19:04.742 "abort": false, 00:19:04.742 "seek_hole": false, 00:19:04.742 "seek_data": false, 00:19:04.742 "copy": false, 00:19:04.742 "nvme_iov_md": false 00:19:04.742 }, 00:19:04.742 "memory_domains": [ 00:19:04.742 { 00:19:04.742 "dma_device_id": "system", 00:19:04.742 "dma_device_type": 1 00:19:04.742 }, 00:19:04.742 { 00:19:04.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.742 "dma_device_type": 2 00:19:04.742 }, 00:19:04.742 { 00:19:04.742 "dma_device_id": "system", 00:19:04.742 "dma_device_type": 1 00:19:04.742 }, 00:19:04.742 { 00:19:04.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.742 "dma_device_type": 2 00:19:04.742 }, 00:19:04.742 { 00:19:04.742 "dma_device_id": "system", 00:19:04.742 "dma_device_type": 1 00:19:04.742 }, 00:19:04.742 { 00:19:04.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.742 "dma_device_type": 2 00:19:04.742 } 00:19:04.742 ], 00:19:04.742 "driver_specific": { 00:19:04.742 "raid": { 00:19:04.742 "uuid": "c27ffcca-ecdd-414d-8b8b-f5fe64c671ee", 00:19:04.742 "strip_size_kb": 0, 00:19:04.742 "state": "online", 00:19:04.742 "raid_level": "raid1", 00:19:04.742 "superblock": true, 00:19:04.742 "num_base_bdevs": 3, 00:19:04.742 "num_base_bdevs_discovered": 3, 00:19:04.742 "num_base_bdevs_operational": 3, 00:19:04.742 "base_bdevs_list": [ 00:19:04.742 { 00:19:04.742 "name": "BaseBdev1", 00:19:04.742 "uuid": "8e6299c0-5735-4d49-be92-f2027e5f4e02", 00:19:04.742 "is_configured": true, 00:19:04.742 "data_offset": 2048, 00:19:04.742 "data_size": 63488 00:19:04.743 }, 00:19:04.743 { 00:19:04.743 "name": "BaseBdev2", 00:19:04.743 "uuid": "fc20a5ef-4dd2-4775-a2da-e8306ddc330e", 00:19:04.743 "is_configured": true, 00:19:04.743 "data_offset": 2048, 00:19:04.743 "data_size": 63488 00:19:04.743 }, 00:19:04.743 { 00:19:04.743 "name": "BaseBdev3", 00:19:04.743 "uuid": "a2733b14-5844-4a6e-938f-e7d4c0b3ca60", 00:19:04.743 "is_configured": true, 00:19:04.743 "data_offset": 2048, 00:19:04.743 "data_size": 63488 00:19:04.743 } 00:19:04.743 ] 00:19:04.743 } 00:19:04.743 } 00:19:04.743 }' 00:19:04.743 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:04.743 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='BaseBdev1 00:19:04.743 BaseBdev2 00:19:04.743 BaseBdev3' 00:19:04.743 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:04.743 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:04.743 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:05.002 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:05.002 "name": "BaseBdev1", 00:19:05.002 "aliases": [ 00:19:05.002 "8e6299c0-5735-4d49-be92-f2027e5f4e02" 00:19:05.002 ], 00:19:05.002 "product_name": "Malloc disk", 00:19:05.002 "block_size": 512, 00:19:05.002 "num_blocks": 65536, 00:19:05.002 "uuid": "8e6299c0-5735-4d49-be92-f2027e5f4e02", 00:19:05.002 "assigned_rate_limits": { 00:19:05.002 "rw_ios_per_sec": 0, 00:19:05.002 "rw_mbytes_per_sec": 0, 00:19:05.002 "r_mbytes_per_sec": 0, 00:19:05.002 "w_mbytes_per_sec": 0 00:19:05.002 }, 00:19:05.002 "claimed": true, 00:19:05.002 "claim_type": "exclusive_write", 00:19:05.002 "zoned": false, 00:19:05.002 "supported_io_types": { 00:19:05.002 "read": true, 00:19:05.002 "write": true, 00:19:05.002 "unmap": true, 00:19:05.002 "flush": true, 00:19:05.002 "reset": true, 00:19:05.002 "nvme_admin": false, 00:19:05.002 "nvme_io": false, 00:19:05.002 "nvme_io_md": false, 00:19:05.002 "write_zeroes": true, 00:19:05.002 "zcopy": true, 00:19:05.002 "get_zone_info": false, 00:19:05.002 "zone_management": false, 00:19:05.002 "zone_append": false, 00:19:05.002 "compare": false, 00:19:05.002 "compare_and_write": false, 00:19:05.002 "abort": true, 00:19:05.002 "seek_hole": false, 00:19:05.002 "seek_data": false, 00:19:05.002 "copy": true, 00:19:05.002 "nvme_iov_md": false 00:19:05.002 }, 00:19:05.002 "memory_domains": [ 00:19:05.002 { 00:19:05.002 "dma_device_id": "system", 00:19:05.002 "dma_device_type": 1 00:19:05.002 }, 00:19:05.002 { 00:19:05.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.002 "dma_device_type": 2 00:19:05.002 } 00:19:05.002 ], 00:19:05.003 "driver_specific": {} 00:19:05.003 }' 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:05.003 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:05.262 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:05.262 "name": "BaseBdev2", 00:19:05.262 "aliases": [ 00:19:05.262 "fc20a5ef-4dd2-4775-a2da-e8306ddc330e" 00:19:05.262 ], 00:19:05.262 "product_name": "Malloc disk", 00:19:05.262 "block_size": 512, 00:19:05.262 "num_blocks": 65536, 00:19:05.262 "uuid": "fc20a5ef-4dd2-4775-a2da-e8306ddc330e", 00:19:05.262 "assigned_rate_limits": { 00:19:05.262 "rw_ios_per_sec": 0, 00:19:05.262 "rw_mbytes_per_sec": 0, 00:19:05.262 "r_mbytes_per_sec": 0, 00:19:05.262 "w_mbytes_per_sec": 0 00:19:05.262 }, 00:19:05.262 "claimed": true, 00:19:05.262 "claim_type": "exclusive_write", 00:19:05.262 "zoned": false, 00:19:05.262 "supported_io_types": { 00:19:05.262 "read": true, 00:19:05.262 "write": true, 00:19:05.262 "unmap": true, 00:19:05.262 "flush": true, 00:19:05.262 "reset": true, 00:19:05.262 "nvme_admin": false, 00:19:05.262 "nvme_io": false, 00:19:05.262 "nvme_io_md": false, 00:19:05.262 "write_zeroes": true, 00:19:05.262 "zcopy": true, 00:19:05.262 "get_zone_info": false, 00:19:05.262 "zone_management": false, 00:19:05.262 "zone_append": false, 00:19:05.262 "compare": false, 00:19:05.262 "compare_and_write": false, 00:19:05.262 "abort": true, 00:19:05.262 "seek_hole": false, 00:19:05.262 "seek_data": false, 00:19:05.262 "copy": true, 00:19:05.262 "nvme_iov_md": false 00:19:05.262 }, 00:19:05.262 "memory_domains": [ 00:19:05.262 { 00:19:05.262 "dma_device_id": "system", 00:19:05.262 "dma_device_type": 1 00:19:05.262 }, 00:19:05.262 { 00:19:05.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.262 "dma_device_type": 2 00:19:05.262 } 00:19:05.262 ], 00:19:05.262 "driver_specific": {} 00:19:05.262 }' 00:19:05.262 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.262 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.262 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:05.262 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.523 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.523 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:05.523 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.523 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.523 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:05.523 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.523 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.523 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:05.523 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:05.523 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq 
'.[]' 00:19:05.523 15:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:05.782 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:05.782 "name": "BaseBdev3", 00:19:05.782 "aliases": [ 00:19:05.782 "a2733b14-5844-4a6e-938f-e7d4c0b3ca60" 00:19:05.783 ], 00:19:05.783 "product_name": "Malloc disk", 00:19:05.783 "block_size": 512, 00:19:05.783 "num_blocks": 65536, 00:19:05.783 "uuid": "a2733b14-5844-4a6e-938f-e7d4c0b3ca60", 00:19:05.783 "assigned_rate_limits": { 00:19:05.783 "rw_ios_per_sec": 0, 00:19:05.783 "rw_mbytes_per_sec": 0, 00:19:05.783 "r_mbytes_per_sec": 0, 00:19:05.783 "w_mbytes_per_sec": 0 00:19:05.783 }, 00:19:05.783 "claimed": true, 00:19:05.783 "claim_type": "exclusive_write", 00:19:05.783 "zoned": false, 00:19:05.783 "supported_io_types": { 00:19:05.783 "read": true, 00:19:05.783 "write": true, 00:19:05.783 "unmap": true, 00:19:05.783 "flush": true, 00:19:05.783 "reset": true, 00:19:05.783 "nvme_admin": false, 00:19:05.783 "nvme_io": false, 00:19:05.783 "nvme_io_md": false, 00:19:05.783 "write_zeroes": true, 00:19:05.783 "zcopy": true, 00:19:05.783 "get_zone_info": false, 00:19:05.783 "zone_management": false, 00:19:05.783 "zone_append": false, 00:19:05.783 "compare": false, 00:19:05.783 "compare_and_write": false, 00:19:05.783 "abort": true, 00:19:05.783 "seek_hole": false, 00:19:05.783 "seek_data": false, 00:19:05.783 "copy": true, 00:19:05.783 "nvme_iov_md": false 00:19:05.783 }, 00:19:05.783 "memory_domains": [ 00:19:05.783 { 00:19:05.783 "dma_device_id": "system", 00:19:05.783 "dma_device_type": 1 00:19:05.783 }, 00:19:05.783 { 00:19:05.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.783 "dma_device_type": 2 00:19:05.783 } 00:19:05.783 ], 00:19:05.783 "driver_specific": {} 00:19:05.783 }' 00:19:05.783 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.783 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.783 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:05.783 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.783 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.783 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:05.783 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.783 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.783 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:05.783 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.783 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.783 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:05.783 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:06.042 [2024-07-23 15:13:01.356915] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@275 -- # local expected_state 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.042 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.302 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:06.302 "name": "Existed_Raid", 00:19:06.302 "uuid": "c27ffcca-ecdd-414d-8b8b-f5fe64c671ee", 00:19:06.302 "strip_size_kb": 0, 00:19:06.302 "state": "online", 00:19:06.302 "raid_level": "raid1", 00:19:06.302 "superblock": true, 00:19:06.302 "num_base_bdevs": 3, 00:19:06.302 "num_base_bdevs_discovered": 2, 00:19:06.302 "num_base_bdevs_operational": 2, 00:19:06.302 "base_bdevs_list": [ 00:19:06.302 { 00:19:06.302 "name": null, 00:19:06.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.302 "is_configured": false, 00:19:06.302 "data_offset": 2048, 00:19:06.302 "data_size": 63488 00:19:06.302 }, 00:19:06.302 { 00:19:06.302 "name": "BaseBdev2", 00:19:06.302 "uuid": "fc20a5ef-4dd2-4775-a2da-e8306ddc330e", 00:19:06.302 "is_configured": true, 00:19:06.302 "data_offset": 2048, 00:19:06.302 "data_size": 63488 00:19:06.302 }, 00:19:06.302 { 00:19:06.302 "name": "BaseBdev3", 00:19:06.302 "uuid": "a2733b14-5844-4a6e-938f-e7d4c0b3ca60", 00:19:06.302 "is_configured": true, 00:19:06.302 "data_offset": 2048, 00:19:06.302 "data_size": 63488 00:19:06.302 } 00:19:06.302 ] 00:19:06.302 }' 00:19:06.302 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:06.302 15:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.563 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:06.563 15:13:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:06.563 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.563 15:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:06.823 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:06.823 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:06.823 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:07.082 [2024-07-23 15:13:02.297702] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:07.082 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:07.082 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:07.082 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.082 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:07.341 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:07.341 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:07.341 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:07.341 [2024-07-23 15:13:02.738340] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:07.341 [2024-07-23 15:13:02.738458] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.342 [2024-07-23 15:13:02.751227] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.342 [2024-07-23 15:13:02.751281] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.342 [2024-07-23 15:13:02.751297] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:19:07.342 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:07.342 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:07.601 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.601 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:07.601 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:07.601 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:07.601 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:07.601 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:07.601 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
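The repeated verify_raid_bdev_state checks in this run all follow one pattern: dump the RAID bdevs over the test's dedicated RPC socket, select the bdev of interest with jq, and compare individual fields against the expected values. A minimal sketch of that pattern is shown here; it reuses the rpc.py path and socket that appear verbatim in this log, while the helper name check_raid_state is purely illustrative and not part of the test suite.

# Sketch of the verification pattern used throughout this test (assumptions noted above).
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

check_raid_state() {
    local name=$1 expected_state=$2
    local info
    # Fetch every RAID bdev and keep only the one being asserted on.
    info=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    # Compare the reported state ("configuring", "online", "offline", ...) with the expectation.
    [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]]
}

check_raid_state Existed_Raid configuring || echo "unexpected raid state"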
00:19:07.601 15:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:07.860 BaseBdev2 00:19:07.860 15:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:07.860 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:07.860 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:07.860 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:07.860 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:07.860 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:07.860 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:08.118 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:08.118 [ 00:19:08.118 { 00:19:08.118 "name": "BaseBdev2", 00:19:08.118 "aliases": [ 00:19:08.118 "19be1e38-2e60-4067-81cf-02c7c51afef7" 00:19:08.118 ], 00:19:08.118 "product_name": "Malloc disk", 00:19:08.118 "block_size": 512, 00:19:08.118 "num_blocks": 65536, 00:19:08.118 "uuid": "19be1e38-2e60-4067-81cf-02c7c51afef7", 00:19:08.118 "assigned_rate_limits": { 00:19:08.118 "rw_ios_per_sec": 0, 00:19:08.118 "rw_mbytes_per_sec": 0, 00:19:08.118 "r_mbytes_per_sec": 0, 00:19:08.118 "w_mbytes_per_sec": 0 00:19:08.118 }, 00:19:08.118 "claimed": false, 00:19:08.118 "zoned": false, 00:19:08.118 "supported_io_types": { 00:19:08.118 "read": true, 00:19:08.118 "write": true, 00:19:08.118 "unmap": true, 00:19:08.118 "flush": true, 00:19:08.118 "reset": true, 00:19:08.118 "nvme_admin": false, 00:19:08.118 "nvme_io": false, 00:19:08.118 "nvme_io_md": false, 00:19:08.118 "write_zeroes": true, 00:19:08.118 "zcopy": true, 00:19:08.118 "get_zone_info": false, 00:19:08.118 "zone_management": false, 00:19:08.118 "zone_append": false, 00:19:08.118 "compare": false, 00:19:08.118 "compare_and_write": false, 00:19:08.118 "abort": true, 00:19:08.118 "seek_hole": false, 00:19:08.118 "seek_data": false, 00:19:08.118 "copy": true, 00:19:08.118 "nvme_iov_md": false 00:19:08.118 }, 00:19:08.118 "memory_domains": [ 00:19:08.118 { 00:19:08.118 "dma_device_id": "system", 00:19:08.118 "dma_device_type": 1 00:19:08.118 }, 00:19:08.118 { 00:19:08.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.118 "dma_device_type": 2 00:19:08.118 } 00:19:08.118 ], 00:19:08.118 "driver_specific": {} 00:19:08.118 } 00:19:08.118 ] 00:19:08.376 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:08.376 15:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:08.376 15:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:08.376 15:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:08.376 BaseBdev3 00:19:08.376 15:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- 
# waitforbdev BaseBdev3 00:19:08.376 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:08.376 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:08.376 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:08.376 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:08.376 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:08.376 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:08.634 15:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:08.893 [ 00:19:08.893 { 00:19:08.893 "name": "BaseBdev3", 00:19:08.893 "aliases": [ 00:19:08.894 "0f57450f-d85b-4adc-99fd-ff06c0d081f2" 00:19:08.894 ], 00:19:08.894 "product_name": "Malloc disk", 00:19:08.894 "block_size": 512, 00:19:08.894 "num_blocks": 65536, 00:19:08.894 "uuid": "0f57450f-d85b-4adc-99fd-ff06c0d081f2", 00:19:08.894 "assigned_rate_limits": { 00:19:08.894 "rw_ios_per_sec": 0, 00:19:08.894 "rw_mbytes_per_sec": 0, 00:19:08.894 "r_mbytes_per_sec": 0, 00:19:08.894 "w_mbytes_per_sec": 0 00:19:08.894 }, 00:19:08.894 "claimed": false, 00:19:08.894 "zoned": false, 00:19:08.894 "supported_io_types": { 00:19:08.894 "read": true, 00:19:08.894 "write": true, 00:19:08.894 "unmap": true, 00:19:08.894 "flush": true, 00:19:08.894 "reset": true, 00:19:08.894 "nvme_admin": false, 00:19:08.894 "nvme_io": false, 00:19:08.894 "nvme_io_md": false, 00:19:08.894 "write_zeroes": true, 00:19:08.894 "zcopy": true, 00:19:08.894 "get_zone_info": false, 00:19:08.894 "zone_management": false, 00:19:08.894 "zone_append": false, 00:19:08.894 "compare": false, 00:19:08.894 "compare_and_write": false, 00:19:08.894 "abort": true, 00:19:08.894 "seek_hole": false, 00:19:08.894 "seek_data": false, 00:19:08.894 "copy": true, 00:19:08.894 "nvme_iov_md": false 00:19:08.894 }, 00:19:08.894 "memory_domains": [ 00:19:08.894 { 00:19:08.894 "dma_device_id": "system", 00:19:08.894 "dma_device_type": 1 00:19:08.894 }, 00:19:08.894 { 00:19:08.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.894 "dma_device_type": 2 00:19:08.894 } 00:19:08.894 ], 00:19:08.894 "driver_specific": {} 00:19:08.894 } 00:19:08.894 ] 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:08.894 [2024-07-23 15:13:04.294715] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:08.894 [2024-07-23 15:13:04.294776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:08.894 [2024-07-23 15:13:04.294831] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:19:08.894 [2024-07-23 15:13:04.296932] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.894 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.153 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:09.153 "name": "Existed_Raid", 00:19:09.153 "uuid": "01e10c4a-1534-42bc-940e-5d742be28824", 00:19:09.153 "strip_size_kb": 0, 00:19:09.153 "state": "configuring", 00:19:09.153 "raid_level": "raid1", 00:19:09.153 "superblock": true, 00:19:09.153 "num_base_bdevs": 3, 00:19:09.153 "num_base_bdevs_discovered": 2, 00:19:09.153 "num_base_bdevs_operational": 3, 00:19:09.153 "base_bdevs_list": [ 00:19:09.153 { 00:19:09.153 "name": "BaseBdev1", 00:19:09.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.153 "is_configured": false, 00:19:09.153 "data_offset": 0, 00:19:09.153 "data_size": 0 00:19:09.153 }, 00:19:09.153 { 00:19:09.153 "name": "BaseBdev2", 00:19:09.153 "uuid": "19be1e38-2e60-4067-81cf-02c7c51afef7", 00:19:09.153 "is_configured": true, 00:19:09.153 "data_offset": 2048, 00:19:09.153 "data_size": 63488 00:19:09.153 }, 00:19:09.153 { 00:19:09.153 "name": "BaseBdev3", 00:19:09.153 "uuid": "0f57450f-d85b-4adc-99fd-ff06c0d081f2", 00:19:09.153 "is_configured": true, 00:19:09.153 "data_offset": 2048, 00:19:09.153 "data_size": 63488 00:19:09.153 } 00:19:09.153 ] 00:19:09.153 }' 00:19:09.153 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:09.153 15:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.413 15:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:09.672 [2024-07-23 15:13:04.990837] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:09.672 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:09.672 15:13:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:09.672 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:09.672 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:09.672 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:09.672 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:09.672 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:09.672 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:09.672 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:09.672 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:09.672 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.672 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.931 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:09.931 "name": "Existed_Raid", 00:19:09.931 "uuid": "01e10c4a-1534-42bc-940e-5d742be28824", 00:19:09.931 "strip_size_kb": 0, 00:19:09.931 "state": "configuring", 00:19:09.931 "raid_level": "raid1", 00:19:09.931 "superblock": true, 00:19:09.931 "num_base_bdevs": 3, 00:19:09.931 "num_base_bdevs_discovered": 1, 00:19:09.931 "num_base_bdevs_operational": 3, 00:19:09.931 "base_bdevs_list": [ 00:19:09.931 { 00:19:09.931 "name": "BaseBdev1", 00:19:09.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.931 "is_configured": false, 00:19:09.931 "data_offset": 0, 00:19:09.931 "data_size": 0 00:19:09.931 }, 00:19:09.931 { 00:19:09.931 "name": null, 00:19:09.931 "uuid": "19be1e38-2e60-4067-81cf-02c7c51afef7", 00:19:09.931 "is_configured": false, 00:19:09.931 "data_offset": 2048, 00:19:09.931 "data_size": 63488 00:19:09.931 }, 00:19:09.931 { 00:19:09.931 "name": "BaseBdev3", 00:19:09.931 "uuid": "0f57450f-d85b-4adc-99fd-ff06c0d081f2", 00:19:09.931 "is_configured": true, 00:19:09.931 "data_offset": 2048, 00:19:09.931 "data_size": 63488 00:19:09.931 } 00:19:09.931 ] 00:19:09.931 }' 00:19:09.931 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:09.931 15:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.191 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.191 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:10.450 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:10.450 15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:10.450 [2024-07-23 15:13:05.874646] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:10.450 BaseBdev1 00:19:10.709 
15:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:10.709 15:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:10.709 15:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:10.709 15:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:10.709 15:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:10.709 15:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:10.709 15:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:10.709 15:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:10.968 [ 00:19:10.968 { 00:19:10.968 "name": "BaseBdev1", 00:19:10.968 "aliases": [ 00:19:10.968 "c2e9c237-de5a-4da3-971e-ffaf3288895b" 00:19:10.968 ], 00:19:10.968 "product_name": "Malloc disk", 00:19:10.968 "block_size": 512, 00:19:10.968 "num_blocks": 65536, 00:19:10.968 "uuid": "c2e9c237-de5a-4da3-971e-ffaf3288895b", 00:19:10.968 "assigned_rate_limits": { 00:19:10.968 "rw_ios_per_sec": 0, 00:19:10.968 "rw_mbytes_per_sec": 0, 00:19:10.968 "r_mbytes_per_sec": 0, 00:19:10.968 "w_mbytes_per_sec": 0 00:19:10.968 }, 00:19:10.968 "claimed": true, 00:19:10.968 "claim_type": "exclusive_write", 00:19:10.968 "zoned": false, 00:19:10.968 "supported_io_types": { 00:19:10.968 "read": true, 00:19:10.968 "write": true, 00:19:10.969 "unmap": true, 00:19:10.969 "flush": true, 00:19:10.969 "reset": true, 00:19:10.969 "nvme_admin": false, 00:19:10.969 "nvme_io": false, 00:19:10.969 "nvme_io_md": false, 00:19:10.969 "write_zeroes": true, 00:19:10.969 "zcopy": true, 00:19:10.969 "get_zone_info": false, 00:19:10.969 "zone_management": false, 00:19:10.969 "zone_append": false, 00:19:10.969 "compare": false, 00:19:10.969 "compare_and_write": false, 00:19:10.969 "abort": true, 00:19:10.969 "seek_hole": false, 00:19:10.969 "seek_data": false, 00:19:10.969 "copy": true, 00:19:10.969 "nvme_iov_md": false 00:19:10.969 }, 00:19:10.969 "memory_domains": [ 00:19:10.969 { 00:19:10.969 "dma_device_id": "system", 00:19:10.969 "dma_device_type": 1 00:19:10.969 }, 00:19:10.969 { 00:19:10.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.969 "dma_device_type": 2 00:19:10.969 } 00:19:10.969 ], 00:19:10.969 "driver_specific": {} 00:19:10.969 } 00:19:10.969 ] 00:19:10.969 15:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:10.969 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:10.969 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:10.969 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:10.969 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:10.969 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:10.969 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:19:10.969 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:10.969 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:10.969 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:10.969 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:10.969 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.969 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.228 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.228 "name": "Existed_Raid", 00:19:11.228 "uuid": "01e10c4a-1534-42bc-940e-5d742be28824", 00:19:11.228 "strip_size_kb": 0, 00:19:11.228 "state": "configuring", 00:19:11.228 "raid_level": "raid1", 00:19:11.228 "superblock": true, 00:19:11.228 "num_base_bdevs": 3, 00:19:11.228 "num_base_bdevs_discovered": 2, 00:19:11.228 "num_base_bdevs_operational": 3, 00:19:11.228 "base_bdevs_list": [ 00:19:11.228 { 00:19:11.228 "name": "BaseBdev1", 00:19:11.228 "uuid": "c2e9c237-de5a-4da3-971e-ffaf3288895b", 00:19:11.228 "is_configured": true, 00:19:11.228 "data_offset": 2048, 00:19:11.228 "data_size": 63488 00:19:11.228 }, 00:19:11.228 { 00:19:11.228 "name": null, 00:19:11.228 "uuid": "19be1e38-2e60-4067-81cf-02c7c51afef7", 00:19:11.228 "is_configured": false, 00:19:11.228 "data_offset": 2048, 00:19:11.228 "data_size": 63488 00:19:11.228 }, 00:19:11.228 { 00:19:11.228 "name": "BaseBdev3", 00:19:11.228 "uuid": "0f57450f-d85b-4adc-99fd-ff06c0d081f2", 00:19:11.228 "is_configured": true, 00:19:11.228 "data_offset": 2048, 00:19:11.228 "data_size": 63488 00:19:11.228 } 00:19:11.228 ] 00:19:11.228 }' 00:19:11.228 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.228 15:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.487 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.487 15:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:11.746 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:11.746 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:12.005 [2024-07-23 15:13:07.271052] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:12.005 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:12.005 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:12.005 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:12.005 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:12.005 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:19:12.005 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:12.005 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:12.005 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:12.005 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:12.005 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:12.005 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.005 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.264 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:12.264 "name": "Existed_Raid", 00:19:12.264 "uuid": "01e10c4a-1534-42bc-940e-5d742be28824", 00:19:12.264 "strip_size_kb": 0, 00:19:12.264 "state": "configuring", 00:19:12.264 "raid_level": "raid1", 00:19:12.264 "superblock": true, 00:19:12.265 "num_base_bdevs": 3, 00:19:12.265 "num_base_bdevs_discovered": 1, 00:19:12.265 "num_base_bdevs_operational": 3, 00:19:12.265 "base_bdevs_list": [ 00:19:12.265 { 00:19:12.265 "name": "BaseBdev1", 00:19:12.265 "uuid": "c2e9c237-de5a-4da3-971e-ffaf3288895b", 00:19:12.265 "is_configured": true, 00:19:12.265 "data_offset": 2048, 00:19:12.265 "data_size": 63488 00:19:12.265 }, 00:19:12.265 { 00:19:12.265 "name": null, 00:19:12.265 "uuid": "19be1e38-2e60-4067-81cf-02c7c51afef7", 00:19:12.265 "is_configured": false, 00:19:12.265 "data_offset": 2048, 00:19:12.265 "data_size": 63488 00:19:12.265 }, 00:19:12.265 { 00:19:12.265 "name": null, 00:19:12.265 "uuid": "0f57450f-d85b-4adc-99fd-ff06c0d081f2", 00:19:12.265 "is_configured": false, 00:19:12.265 "data_offset": 2048, 00:19:12.265 "data_size": 63488 00:19:12.265 } 00:19:12.265 ] 00:19:12.265 }' 00:19:12.265 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:12.265 15:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.524 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.524 15:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:12.783 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:12.783 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:13.042 [2024-07-23 15:13:08.223293] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.042 "name": "Existed_Raid", 00:19:13.042 "uuid": "01e10c4a-1534-42bc-940e-5d742be28824", 00:19:13.042 "strip_size_kb": 0, 00:19:13.042 "state": "configuring", 00:19:13.042 "raid_level": "raid1", 00:19:13.042 "superblock": true, 00:19:13.042 "num_base_bdevs": 3, 00:19:13.042 "num_base_bdevs_discovered": 2, 00:19:13.042 "num_base_bdevs_operational": 3, 00:19:13.042 "base_bdevs_list": [ 00:19:13.042 { 00:19:13.042 "name": "BaseBdev1", 00:19:13.042 "uuid": "c2e9c237-de5a-4da3-971e-ffaf3288895b", 00:19:13.042 "is_configured": true, 00:19:13.042 "data_offset": 2048, 00:19:13.042 "data_size": 63488 00:19:13.042 }, 00:19:13.042 { 00:19:13.042 "name": null, 00:19:13.042 "uuid": "19be1e38-2e60-4067-81cf-02c7c51afef7", 00:19:13.042 "is_configured": false, 00:19:13.042 "data_offset": 2048, 00:19:13.042 "data_size": 63488 00:19:13.042 }, 00:19:13.042 { 00:19:13.042 "name": "BaseBdev3", 00:19:13.042 "uuid": "0f57450f-d85b-4adc-99fd-ff06c0d081f2", 00:19:13.042 "is_configured": true, 00:19:13.042 "data_offset": 2048, 00:19:13.042 "data_size": 63488 00:19:13.042 } 00:19:13.042 ] 00:19:13.042 }' 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.042 15:13:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.301 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.301 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:13.560 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:13.560 15:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:13.819 [2024-07-23 15:13:09.151470] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:13.819 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:13.819 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:13.819 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:19:13.819 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:13.819 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:13.820 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:13.820 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.820 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.820 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.820 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.820 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.820 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.079 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.079 "name": "Existed_Raid", 00:19:14.079 "uuid": "01e10c4a-1534-42bc-940e-5d742be28824", 00:19:14.079 "strip_size_kb": 0, 00:19:14.079 "state": "configuring", 00:19:14.079 "raid_level": "raid1", 00:19:14.079 "superblock": true, 00:19:14.079 "num_base_bdevs": 3, 00:19:14.079 "num_base_bdevs_discovered": 1, 00:19:14.079 "num_base_bdevs_operational": 3, 00:19:14.079 "base_bdevs_list": [ 00:19:14.079 { 00:19:14.079 "name": null, 00:19:14.079 "uuid": "c2e9c237-de5a-4da3-971e-ffaf3288895b", 00:19:14.079 "is_configured": false, 00:19:14.079 "data_offset": 2048, 00:19:14.079 "data_size": 63488 00:19:14.079 }, 00:19:14.079 { 00:19:14.079 "name": null, 00:19:14.079 "uuid": "19be1e38-2e60-4067-81cf-02c7c51afef7", 00:19:14.079 "is_configured": false, 00:19:14.079 "data_offset": 2048, 00:19:14.079 "data_size": 63488 00:19:14.079 }, 00:19:14.079 { 00:19:14.079 "name": "BaseBdev3", 00:19:14.079 "uuid": "0f57450f-d85b-4adc-99fd-ff06c0d081f2", 00:19:14.079 "is_configured": true, 00:19:14.079 "data_offset": 2048, 00:19:14.079 "data_size": 63488 00:19:14.079 } 00:19:14.079 ] 00:19:14.079 }' 00:19:14.079 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:14.079 15:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.365 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.366 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:14.659 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:14.659 15:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:14.659 [2024-07-23 15:13:10.036246] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.659 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:14.659 15:13:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:14.659 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:14.659 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:14.659 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:14.659 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:14.659 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:14.659 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:14.659 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:14.659 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:14.659 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.659 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.919 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.919 "name": "Existed_Raid", 00:19:14.919 "uuid": "01e10c4a-1534-42bc-940e-5d742be28824", 00:19:14.919 "strip_size_kb": 0, 00:19:14.919 "state": "configuring", 00:19:14.919 "raid_level": "raid1", 00:19:14.919 "superblock": true, 00:19:14.919 "num_base_bdevs": 3, 00:19:14.919 "num_base_bdevs_discovered": 2, 00:19:14.919 "num_base_bdevs_operational": 3, 00:19:14.919 "base_bdevs_list": [ 00:19:14.919 { 00:19:14.919 "name": null, 00:19:14.919 "uuid": "c2e9c237-de5a-4da3-971e-ffaf3288895b", 00:19:14.919 "is_configured": false, 00:19:14.919 "data_offset": 2048, 00:19:14.919 "data_size": 63488 00:19:14.919 }, 00:19:14.919 { 00:19:14.919 "name": "BaseBdev2", 00:19:14.919 "uuid": "19be1e38-2e60-4067-81cf-02c7c51afef7", 00:19:14.919 "is_configured": true, 00:19:14.919 "data_offset": 2048, 00:19:14.919 "data_size": 63488 00:19:14.919 }, 00:19:14.919 { 00:19:14.919 "name": "BaseBdev3", 00:19:14.919 "uuid": "0f57450f-d85b-4adc-99fd-ff06c0d081f2", 00:19:14.919 "is_configured": true, 00:19:14.919 "data_offset": 2048, 00:19:14.919 "data_size": 63488 00:19:14.919 } 00:19:14.919 ] 00:19:14.919 }' 00:19:14.919 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:14.919 15:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.486 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.486 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:15.744 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:15.744 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.744 15:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:15.744 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c2e9c237-de5a-4da3-971e-ffaf3288895b 00:19:16.004 [2024-07-23 15:13:11.331935] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:16.004 [2024-07-23 15:13:11.332121] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007880 00:19:16.004 [2024-07-23 15:13:11.332136] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:16.004 [2024-07-23 15:13:11.332214] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002460 00:19:16.004 [2024-07-23 15:13:11.332500] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007880 00:19:16.004 [2024-07-23 15:13:11.332524] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007880 00:19:16.004 [2024-07-23 15:13:11.332620] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.004 NewBaseBdev 00:19:16.004 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:16.004 15:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:16.004 15:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:16.004 15:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:16.004 15:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:16.004 15:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:16.004 15:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:16.263 15:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:16.522 [ 00:19:16.522 { 00:19:16.522 "name": "NewBaseBdev", 00:19:16.522 "aliases": [ 00:19:16.522 "c2e9c237-de5a-4da3-971e-ffaf3288895b" 00:19:16.522 ], 00:19:16.522 "product_name": "Malloc disk", 00:19:16.522 "block_size": 512, 00:19:16.522 "num_blocks": 65536, 00:19:16.522 "uuid": "c2e9c237-de5a-4da3-971e-ffaf3288895b", 00:19:16.522 "assigned_rate_limits": { 00:19:16.522 "rw_ios_per_sec": 0, 00:19:16.522 "rw_mbytes_per_sec": 0, 00:19:16.522 "r_mbytes_per_sec": 0, 00:19:16.522 "w_mbytes_per_sec": 0 00:19:16.522 }, 00:19:16.522 "claimed": true, 00:19:16.522 "claim_type": "exclusive_write", 00:19:16.522 "zoned": false, 00:19:16.522 "supported_io_types": { 00:19:16.522 "read": true, 00:19:16.522 "write": true, 00:19:16.522 "unmap": true, 00:19:16.522 "flush": true, 00:19:16.522 "reset": true, 00:19:16.522 "nvme_admin": false, 00:19:16.522 "nvme_io": false, 00:19:16.522 "nvme_io_md": false, 00:19:16.522 "write_zeroes": true, 00:19:16.522 "zcopy": true, 00:19:16.522 "get_zone_info": false, 00:19:16.522 "zone_management": false, 00:19:16.522 "zone_append": false, 00:19:16.522 "compare": false, 00:19:16.522 "compare_and_write": false, 00:19:16.522 "abort": true, 00:19:16.522 "seek_hole": false, 00:19:16.522 "seek_data": false, 00:19:16.522 "copy": true, 00:19:16.522 "nvme_iov_md": false 00:19:16.522 }, 00:19:16.522 "memory_domains": [ 00:19:16.522 { 
00:19:16.522 "dma_device_id": "system", 00:19:16.522 "dma_device_type": 1 00:19:16.522 }, 00:19:16.522 { 00:19:16.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.522 "dma_device_type": 2 00:19:16.522 } 00:19:16.522 ], 00:19:16.522 "driver_specific": {} 00:19:16.522 } 00:19:16.522 ] 00:19:16.522 15:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:16.522 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:16.522 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:16.522 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:16.522 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:16.522 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:16.522 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:16.522 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:16.523 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:16.523 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:16.523 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:16.523 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.523 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.781 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:16.781 "name": "Existed_Raid", 00:19:16.781 "uuid": "01e10c4a-1534-42bc-940e-5d742be28824", 00:19:16.781 "strip_size_kb": 0, 00:19:16.781 "state": "online", 00:19:16.781 "raid_level": "raid1", 00:19:16.781 "superblock": true, 00:19:16.781 "num_base_bdevs": 3, 00:19:16.781 "num_base_bdevs_discovered": 3, 00:19:16.781 "num_base_bdevs_operational": 3, 00:19:16.781 "base_bdevs_list": [ 00:19:16.781 { 00:19:16.781 "name": "NewBaseBdev", 00:19:16.781 "uuid": "c2e9c237-de5a-4da3-971e-ffaf3288895b", 00:19:16.781 "is_configured": true, 00:19:16.781 "data_offset": 2048, 00:19:16.781 "data_size": 63488 00:19:16.781 }, 00:19:16.781 { 00:19:16.781 "name": "BaseBdev2", 00:19:16.781 "uuid": "19be1e38-2e60-4067-81cf-02c7c51afef7", 00:19:16.781 "is_configured": true, 00:19:16.781 "data_offset": 2048, 00:19:16.781 "data_size": 63488 00:19:16.781 }, 00:19:16.781 { 00:19:16.781 "name": "BaseBdev3", 00:19:16.781 "uuid": "0f57450f-d85b-4adc-99fd-ff06c0d081f2", 00:19:16.781 "is_configured": true, 00:19:16.781 "data_offset": 2048, 00:19:16.781 "data_size": 63488 00:19:16.781 } 00:19:16.781 ] 00:19:16.781 }' 00:19:16.781 15:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:16.781 15:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.041 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:17.041 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=Existed_Raid 00:19:17.041 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:17.041 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:17.041 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:17.041 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:17.041 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:17.041 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:17.041 [2024-07-23 15:13:12.448545] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:17.041 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:17.041 "name": "Existed_Raid", 00:19:17.041 "aliases": [ 00:19:17.041 "01e10c4a-1534-42bc-940e-5d742be28824" 00:19:17.041 ], 00:19:17.041 "product_name": "Raid Volume", 00:19:17.041 "block_size": 512, 00:19:17.041 "num_blocks": 63488, 00:19:17.041 "uuid": "01e10c4a-1534-42bc-940e-5d742be28824", 00:19:17.041 "assigned_rate_limits": { 00:19:17.041 "rw_ios_per_sec": 0, 00:19:17.041 "rw_mbytes_per_sec": 0, 00:19:17.041 "r_mbytes_per_sec": 0, 00:19:17.041 "w_mbytes_per_sec": 0 00:19:17.041 }, 00:19:17.041 "claimed": false, 00:19:17.041 "zoned": false, 00:19:17.041 "supported_io_types": { 00:19:17.041 "read": true, 00:19:17.041 "write": true, 00:19:17.041 "unmap": false, 00:19:17.041 "flush": false, 00:19:17.041 "reset": true, 00:19:17.041 "nvme_admin": false, 00:19:17.041 "nvme_io": false, 00:19:17.041 "nvme_io_md": false, 00:19:17.041 "write_zeroes": true, 00:19:17.041 "zcopy": false, 00:19:17.041 "get_zone_info": false, 00:19:17.041 "zone_management": false, 00:19:17.041 "zone_append": false, 00:19:17.041 "compare": false, 00:19:17.041 "compare_and_write": false, 00:19:17.041 "abort": false, 00:19:17.041 "seek_hole": false, 00:19:17.041 "seek_data": false, 00:19:17.041 "copy": false, 00:19:17.041 "nvme_iov_md": false 00:19:17.041 }, 00:19:17.041 "memory_domains": [ 00:19:17.041 { 00:19:17.041 "dma_device_id": "system", 00:19:17.041 "dma_device_type": 1 00:19:17.041 }, 00:19:17.041 { 00:19:17.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.041 "dma_device_type": 2 00:19:17.041 }, 00:19:17.041 { 00:19:17.041 "dma_device_id": "system", 00:19:17.041 "dma_device_type": 1 00:19:17.041 }, 00:19:17.041 { 00:19:17.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.041 "dma_device_type": 2 00:19:17.041 }, 00:19:17.041 { 00:19:17.041 "dma_device_id": "system", 00:19:17.041 "dma_device_type": 1 00:19:17.041 }, 00:19:17.041 { 00:19:17.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.041 "dma_device_type": 2 00:19:17.041 } 00:19:17.041 ], 00:19:17.041 "driver_specific": { 00:19:17.041 "raid": { 00:19:17.041 "uuid": "01e10c4a-1534-42bc-940e-5d742be28824", 00:19:17.041 "strip_size_kb": 0, 00:19:17.041 "state": "online", 00:19:17.041 "raid_level": "raid1", 00:19:17.041 "superblock": true, 00:19:17.041 "num_base_bdevs": 3, 00:19:17.041 "num_base_bdevs_discovered": 3, 00:19:17.041 "num_base_bdevs_operational": 3, 00:19:17.041 "base_bdevs_list": [ 00:19:17.041 { 00:19:17.041 "name": "NewBaseBdev", 00:19:17.041 "uuid": "c2e9c237-de5a-4da3-971e-ffaf3288895b", 00:19:17.041 "is_configured": true, 00:19:17.041 "data_offset": 2048, 
00:19:17.041 "data_size": 63488 00:19:17.041 }, 00:19:17.041 { 00:19:17.041 "name": "BaseBdev2", 00:19:17.041 "uuid": "19be1e38-2e60-4067-81cf-02c7c51afef7", 00:19:17.041 "is_configured": true, 00:19:17.041 "data_offset": 2048, 00:19:17.041 "data_size": 63488 00:19:17.041 }, 00:19:17.041 { 00:19:17.041 "name": "BaseBdev3", 00:19:17.041 "uuid": "0f57450f-d85b-4adc-99fd-ff06c0d081f2", 00:19:17.041 "is_configured": true, 00:19:17.041 "data_offset": 2048, 00:19:17.041 "data_size": 63488 00:19:17.041 } 00:19:17.041 ] 00:19:17.041 } 00:19:17.041 } 00:19:17.041 }' 00:19:17.041 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:17.301 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:17.301 BaseBdev2 00:19:17.301 BaseBdev3' 00:19:17.301 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:17.301 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:17.301 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:17.301 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:17.301 "name": "NewBaseBdev", 00:19:17.301 "aliases": [ 00:19:17.301 "c2e9c237-de5a-4da3-971e-ffaf3288895b" 00:19:17.301 ], 00:19:17.301 "product_name": "Malloc disk", 00:19:17.301 "block_size": 512, 00:19:17.301 "num_blocks": 65536, 00:19:17.301 "uuid": "c2e9c237-de5a-4da3-971e-ffaf3288895b", 00:19:17.301 "assigned_rate_limits": { 00:19:17.301 "rw_ios_per_sec": 0, 00:19:17.301 "rw_mbytes_per_sec": 0, 00:19:17.301 "r_mbytes_per_sec": 0, 00:19:17.301 "w_mbytes_per_sec": 0 00:19:17.301 }, 00:19:17.301 "claimed": true, 00:19:17.301 "claim_type": "exclusive_write", 00:19:17.301 "zoned": false, 00:19:17.301 "supported_io_types": { 00:19:17.301 "read": true, 00:19:17.301 "write": true, 00:19:17.301 "unmap": true, 00:19:17.301 "flush": true, 00:19:17.301 "reset": true, 00:19:17.301 "nvme_admin": false, 00:19:17.301 "nvme_io": false, 00:19:17.301 "nvme_io_md": false, 00:19:17.301 "write_zeroes": true, 00:19:17.301 "zcopy": true, 00:19:17.301 "get_zone_info": false, 00:19:17.301 "zone_management": false, 00:19:17.301 "zone_append": false, 00:19:17.301 "compare": false, 00:19:17.301 "compare_and_write": false, 00:19:17.301 "abort": true, 00:19:17.301 "seek_hole": false, 00:19:17.301 "seek_data": false, 00:19:17.301 "copy": true, 00:19:17.301 "nvme_iov_md": false 00:19:17.301 }, 00:19:17.301 "memory_domains": [ 00:19:17.301 { 00:19:17.301 "dma_device_id": "system", 00:19:17.301 "dma_device_type": 1 00:19:17.301 }, 00:19:17.301 { 00:19:17.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.301 "dma_device_type": 2 00:19:17.301 } 00:19:17.301 ], 00:19:17.301 "driver_specific": {} 00:19:17.301 }' 00:19:17.301 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:17.301 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:17.301 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:17.301 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:17.301 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:19:17.301 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:17.301 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:17.301 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:17.560 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:17.560 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:17.560 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:17.560 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:17.560 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:17.560 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:17.560 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:17.560 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:17.560 "name": "BaseBdev2", 00:19:17.560 "aliases": [ 00:19:17.560 "19be1e38-2e60-4067-81cf-02c7c51afef7" 00:19:17.560 ], 00:19:17.560 "product_name": "Malloc disk", 00:19:17.560 "block_size": 512, 00:19:17.560 "num_blocks": 65536, 00:19:17.560 "uuid": "19be1e38-2e60-4067-81cf-02c7c51afef7", 00:19:17.560 "assigned_rate_limits": { 00:19:17.560 "rw_ios_per_sec": 0, 00:19:17.560 "rw_mbytes_per_sec": 0, 00:19:17.560 "r_mbytes_per_sec": 0, 00:19:17.560 "w_mbytes_per_sec": 0 00:19:17.560 }, 00:19:17.560 "claimed": true, 00:19:17.560 "claim_type": "exclusive_write", 00:19:17.560 "zoned": false, 00:19:17.560 "supported_io_types": { 00:19:17.560 "read": true, 00:19:17.560 "write": true, 00:19:17.560 "unmap": true, 00:19:17.560 "flush": true, 00:19:17.560 "reset": true, 00:19:17.560 "nvme_admin": false, 00:19:17.560 "nvme_io": false, 00:19:17.560 "nvme_io_md": false, 00:19:17.560 "write_zeroes": true, 00:19:17.560 "zcopy": true, 00:19:17.560 "get_zone_info": false, 00:19:17.560 "zone_management": false, 00:19:17.560 "zone_append": false, 00:19:17.560 "compare": false, 00:19:17.560 "compare_and_write": false, 00:19:17.560 "abort": true, 00:19:17.560 "seek_hole": false, 00:19:17.560 "seek_data": false, 00:19:17.560 "copy": true, 00:19:17.560 "nvme_iov_md": false 00:19:17.560 }, 00:19:17.560 "memory_domains": [ 00:19:17.560 { 00:19:17.560 "dma_device_id": "system", 00:19:17.560 "dma_device_type": 1 00:19:17.560 }, 00:19:17.560 { 00:19:17.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.560 "dma_device_type": 2 00:19:17.560 } 00:19:17.560 ], 00:19:17.560 "driver_specific": {} 00:19:17.560 }' 00:19:17.560 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:17.560 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:17.560 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:17.560 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:17.560 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:17.560 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:17.560 15:13:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:17.561 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:17.561 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:17.820 15:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:17.820 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:17.820 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:17.820 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:17.820 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:17.820 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:17.820 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:17.820 "name": "BaseBdev3", 00:19:17.820 "aliases": [ 00:19:17.820 "0f57450f-d85b-4adc-99fd-ff06c0d081f2" 00:19:17.820 ], 00:19:17.820 "product_name": "Malloc disk", 00:19:17.820 "block_size": 512, 00:19:17.820 "num_blocks": 65536, 00:19:17.820 "uuid": "0f57450f-d85b-4adc-99fd-ff06c0d081f2", 00:19:17.820 "assigned_rate_limits": { 00:19:17.820 "rw_ios_per_sec": 0, 00:19:17.820 "rw_mbytes_per_sec": 0, 00:19:17.820 "r_mbytes_per_sec": 0, 00:19:17.820 "w_mbytes_per_sec": 0 00:19:17.820 }, 00:19:17.820 "claimed": true, 00:19:17.820 "claim_type": "exclusive_write", 00:19:17.820 "zoned": false, 00:19:17.820 "supported_io_types": { 00:19:17.820 "read": true, 00:19:17.820 "write": true, 00:19:17.820 "unmap": true, 00:19:17.820 "flush": true, 00:19:17.820 "reset": true, 00:19:17.820 "nvme_admin": false, 00:19:17.820 "nvme_io": false, 00:19:17.820 "nvme_io_md": false, 00:19:17.820 "write_zeroes": true, 00:19:17.820 "zcopy": true, 00:19:17.820 "get_zone_info": false, 00:19:17.820 "zone_management": false, 00:19:17.820 "zone_append": false, 00:19:17.820 "compare": false, 00:19:17.820 "compare_and_write": false, 00:19:17.820 "abort": true, 00:19:17.820 "seek_hole": false, 00:19:17.820 "seek_data": false, 00:19:17.820 "copy": true, 00:19:17.820 "nvme_iov_md": false 00:19:17.820 }, 00:19:17.820 "memory_domains": [ 00:19:17.820 { 00:19:17.820 "dma_device_id": "system", 00:19:17.820 "dma_device_type": 1 00:19:17.820 }, 00:19:17.820 { 00:19:17.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.820 "dma_device_type": 2 00:19:17.820 } 00:19:17.820 ], 00:19:17.820 "driver_specific": {} 00:19:17.820 }' 00:19:17.820 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:17.820 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:17.820 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:17.820 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:17.820 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:17.820 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:17.820 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:18.080 15:13:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:18.080 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:18.080 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:18.080 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:18.080 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:18.080 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:18.339 [2024-07-23 15:13:13.524473] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:18.339 [2024-07-23 15:13:13.524518] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:18.339 [2024-07-23 15:13:13.524601] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:18.339 [2024-07-23 15:13:13.524885] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:18.339 [2024-07-23 15:13:13.524900] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007880 name Existed_Raid, state offline 00:19:18.339 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 97050 00:19:18.339 15:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 97050 ']' 00:19:18.339 15:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 97050 00:19:18.339 15:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:19:18.339 15:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:18.339 15:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97050 00:19:18.339 killing process with pid 97050 00:19:18.339 15:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:18.339 15:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:18.339 15:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97050' 00:19:18.339 15:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 97050 00:19:18.339 [2024-07-23 15:13:13.583711] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:18.339 15:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 97050 00:19:18.339 [2024-07-23 15:13:13.619937] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:18.598 ************************************ 00:19:18.598 END TEST raid_state_function_test_sb 00:19:18.598 ************************************ 00:19:18.598 15:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:18.598 00:19:18.598 real 0m20.507s 00:19:18.598 user 0m35.764s 00:19:18.599 sys 0m4.475s 00:19:18.599 15:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:18.599 15:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.599 15:13:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:18.599 15:13:13 bdev_raid -- 
bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:19:18.599 15:13:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:18.599 15:13:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:18.599 15:13:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:18.599 ************************************ 00:19:18.599 START TEST raid_superblock_test 00:19:18.599 ************************************ 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=97898 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 97898 /var/tmp/spdk-raid.sock 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 97898 ']' 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.599 15:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.599 [2024-07-23 15:13:14.006663] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
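The bdev_svc app is coming up here on the /var/tmp/spdk-raid.sock RPC socket, and raid_superblock_test drives it entirely through scripts/rpc.py. The entries that follow build the fixture step by step: three malloc bdevs, a passthru bdev with a fixed UUID on top of each, and finally a raid1 bdev assembled from the passthru devices with an on-disk superblock. Condensed into a plain shell sketch (paths, sizes, names and UUIDs copied from the trace; illustrative only, not the test script itself):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3; do
        # 32 MiB backing store with 512-byte blocks -> the 65536-block bdevs seen in the dumps
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # -s writes an on-disk superblock; the re-assembly steps later in the trace depend on it
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

The passthru layer is what the superblock identifies the members by: further down, the test deletes the pt bdevs, re-creates them one at a time on the same malloc devices, and the raid module re-claims each from the superblock it finds ("raid superblock found on bdev pt1", and so on).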
00:19:18.599 [2024-07-23 15:13:14.006930] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97898 ] 00:19:18.858 [2024-07-23 15:13:14.160418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.858 [2024-07-23 15:13:14.208780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.858 [2024-07-23 15:13:14.254515] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.795 15:13:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.795 15:13:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:19:19.795 15:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:19.795 15:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:19.795 15:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:19.795 15:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:19.795 15:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:19.795 15:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:19.795 15:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:19.795 15:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:19.795 15:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:19.795 malloc1 00:19:19.795 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:20.054 [2024-07-23 15:13:15.430543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:20.054 [2024-07-23 15:13:15.430782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.054 [2024-07-23 15:13:15.430836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:19:20.054 [2024-07-23 15:13:15.430856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.055 [2024-07-23 15:13:15.433371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.055 [2024-07-23 15:13:15.433421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:20.055 pt1 00:19:20.055 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:20.055 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:20.055 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:20.055 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:20.055 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:20.055 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:19:20.055 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:20.055 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:20.055 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:20.314 malloc2 00:19:20.314 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:20.573 [2024-07-23 15:13:15.792263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:20.573 [2024-07-23 15:13:15.792553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.573 [2024-07-23 15:13:15.792586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:19:20.573 [2024-07-23 15:13:15.792604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.573 [2024-07-23 15:13:15.795129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.573 [2024-07-23 15:13:15.795173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:20.573 pt2 00:19:20.573 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:20.573 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:20.573 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:19:20.573 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:19:20.573 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:20.573 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:20.573 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:20.573 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:20.573 15:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:20.832 malloc3 00:19:20.832 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:20.832 [2024-07-23 15:13:16.215619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:20.832 [2024-07-23 15:13:16.215708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.832 [2024-07-23 15:13:16.215734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:19:20.832 [2024-07-23 15:13:16.215749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.832 [2024-07-23 15:13:16.218201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.832 pt3 00:19:20.832 [2024-07-23 15:13:16.218371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:20.832 
15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:20.832 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:20.832 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:21.091 [2024-07-23 15:13:16.395676] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:21.091 [2024-07-23 15:13:16.398030] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:21.091 [2024-07-23 15:13:16.398240] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:21.091 [2024-07-23 15:13:16.398471] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007880 00:19:21.091 [2024-07-23 15:13:16.398592] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:21.091 [2024-07-23 15:13:16.398784] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:19:21.091 [2024-07-23 15:13:16.399249] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007880 00:19:21.091 [2024-07-23 15:13:16.399373] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007880 00:19:21.091 [2024-07-23 15:13:16.399681] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.091 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:21.091 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:21.091 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:21.091 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:21.091 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:21.091 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:21.091 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:21.091 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:21.091 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:21.091 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:21.091 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.091 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.351 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.351 "name": "raid_bdev1", 00:19:21.351 "uuid": "68b0b029-a470-4b76-b1d2-00115875e584", 00:19:21.351 "strip_size_kb": 0, 00:19:21.351 "state": "online", 00:19:21.351 "raid_level": "raid1", 00:19:21.351 "superblock": true, 00:19:21.351 "num_base_bdevs": 3, 00:19:21.351 "num_base_bdevs_discovered": 3, 00:19:21.351 "num_base_bdevs_operational": 3, 00:19:21.351 "base_bdevs_list": [ 00:19:21.351 { 00:19:21.351 "name": "pt1", 00:19:21.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:21.351 
"is_configured": true, 00:19:21.351 "data_offset": 2048, 00:19:21.351 "data_size": 63488 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "name": "pt2", 00:19:21.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.351 "is_configured": true, 00:19:21.351 "data_offset": 2048, 00:19:21.351 "data_size": 63488 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "name": "pt3", 00:19:21.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:21.351 "is_configured": true, 00:19:21.351 "data_offset": 2048, 00:19:21.351 "data_size": 63488 00:19:21.351 } 00:19:21.351 ] 00:19:21.351 }' 00:19:21.351 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.351 15:13:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.610 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:19:21.610 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:21.610 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:21.610 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:21.610 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:21.610 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:21.610 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:21.610 15:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:21.869 [2024-07-23 15:13:17.092102] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.869 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:21.869 "name": "raid_bdev1", 00:19:21.869 "aliases": [ 00:19:21.869 "68b0b029-a470-4b76-b1d2-00115875e584" 00:19:21.869 ], 00:19:21.869 "product_name": "Raid Volume", 00:19:21.869 "block_size": 512, 00:19:21.869 "num_blocks": 63488, 00:19:21.869 "uuid": "68b0b029-a470-4b76-b1d2-00115875e584", 00:19:21.869 "assigned_rate_limits": { 00:19:21.869 "rw_ios_per_sec": 0, 00:19:21.869 "rw_mbytes_per_sec": 0, 00:19:21.869 "r_mbytes_per_sec": 0, 00:19:21.869 "w_mbytes_per_sec": 0 00:19:21.870 }, 00:19:21.870 "claimed": false, 00:19:21.870 "zoned": false, 00:19:21.870 "supported_io_types": { 00:19:21.870 "read": true, 00:19:21.870 "write": true, 00:19:21.870 "unmap": false, 00:19:21.870 "flush": false, 00:19:21.870 "reset": true, 00:19:21.870 "nvme_admin": false, 00:19:21.870 "nvme_io": false, 00:19:21.870 "nvme_io_md": false, 00:19:21.870 "write_zeroes": true, 00:19:21.870 "zcopy": false, 00:19:21.870 "get_zone_info": false, 00:19:21.870 "zone_management": false, 00:19:21.870 "zone_append": false, 00:19:21.870 "compare": false, 00:19:21.870 "compare_and_write": false, 00:19:21.870 "abort": false, 00:19:21.870 "seek_hole": false, 00:19:21.870 "seek_data": false, 00:19:21.870 "copy": false, 00:19:21.870 "nvme_iov_md": false 00:19:21.870 }, 00:19:21.870 "memory_domains": [ 00:19:21.870 { 00:19:21.870 "dma_device_id": "system", 00:19:21.870 "dma_device_type": 1 00:19:21.870 }, 00:19:21.870 { 00:19:21.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.870 "dma_device_type": 2 00:19:21.870 }, 00:19:21.870 { 00:19:21.870 "dma_device_id": "system", 00:19:21.870 "dma_device_type": 1 00:19:21.870 }, 00:19:21.870 { 
00:19:21.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.870 "dma_device_type": 2 00:19:21.870 }, 00:19:21.870 { 00:19:21.870 "dma_device_id": "system", 00:19:21.870 "dma_device_type": 1 00:19:21.870 }, 00:19:21.870 { 00:19:21.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.870 "dma_device_type": 2 00:19:21.870 } 00:19:21.870 ], 00:19:21.870 "driver_specific": { 00:19:21.870 "raid": { 00:19:21.870 "uuid": "68b0b029-a470-4b76-b1d2-00115875e584", 00:19:21.870 "strip_size_kb": 0, 00:19:21.870 "state": "online", 00:19:21.870 "raid_level": "raid1", 00:19:21.870 "superblock": true, 00:19:21.870 "num_base_bdevs": 3, 00:19:21.870 "num_base_bdevs_discovered": 3, 00:19:21.870 "num_base_bdevs_operational": 3, 00:19:21.870 "base_bdevs_list": [ 00:19:21.870 { 00:19:21.870 "name": "pt1", 00:19:21.870 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:21.870 "is_configured": true, 00:19:21.870 "data_offset": 2048, 00:19:21.870 "data_size": 63488 00:19:21.870 }, 00:19:21.870 { 00:19:21.870 "name": "pt2", 00:19:21.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.870 "is_configured": true, 00:19:21.870 "data_offset": 2048, 00:19:21.870 "data_size": 63488 00:19:21.870 }, 00:19:21.870 { 00:19:21.870 "name": "pt3", 00:19:21.870 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:21.870 "is_configured": true, 00:19:21.870 "data_offset": 2048, 00:19:21.870 "data_size": 63488 00:19:21.870 } 00:19:21.870 ] 00:19:21.870 } 00:19:21.870 } 00:19:21.870 }' 00:19:21.870 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:21.870 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:21.870 pt2 00:19:21.870 pt3' 00:19:21.870 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:21.870 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:21.870 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:22.129 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:22.129 "name": "pt1", 00:19:22.129 "aliases": [ 00:19:22.129 "00000000-0000-0000-0000-000000000001" 00:19:22.129 ], 00:19:22.129 "product_name": "passthru", 00:19:22.129 "block_size": 512, 00:19:22.129 "num_blocks": 65536, 00:19:22.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:22.129 "assigned_rate_limits": { 00:19:22.129 "rw_ios_per_sec": 0, 00:19:22.129 "rw_mbytes_per_sec": 0, 00:19:22.129 "r_mbytes_per_sec": 0, 00:19:22.129 "w_mbytes_per_sec": 0 00:19:22.129 }, 00:19:22.129 "claimed": true, 00:19:22.129 "claim_type": "exclusive_write", 00:19:22.129 "zoned": false, 00:19:22.129 "supported_io_types": { 00:19:22.129 "read": true, 00:19:22.129 "write": true, 00:19:22.129 "unmap": true, 00:19:22.129 "flush": true, 00:19:22.129 "reset": true, 00:19:22.129 "nvme_admin": false, 00:19:22.129 "nvme_io": false, 00:19:22.129 "nvme_io_md": false, 00:19:22.129 "write_zeroes": true, 00:19:22.129 "zcopy": true, 00:19:22.129 "get_zone_info": false, 00:19:22.129 "zone_management": false, 00:19:22.129 "zone_append": false, 00:19:22.129 "compare": false, 00:19:22.129 "compare_and_write": false, 00:19:22.129 "abort": true, 00:19:22.129 "seek_hole": false, 00:19:22.129 "seek_data": false, 00:19:22.129 "copy": true, 00:19:22.129 "nvme_iov_md": false 00:19:22.130 }, 
00:19:22.130 "memory_domains": [ 00:19:22.130 { 00:19:22.130 "dma_device_id": "system", 00:19:22.130 "dma_device_type": 1 00:19:22.130 }, 00:19:22.130 { 00:19:22.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.130 "dma_device_type": 2 00:19:22.130 } 00:19:22.130 ], 00:19:22.130 "driver_specific": { 00:19:22.130 "passthru": { 00:19:22.130 "name": "pt1", 00:19:22.130 "base_bdev_name": "malloc1" 00:19:22.130 } 00:19:22.130 } 00:19:22.130 }' 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:22.130 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:22.389 "name": "pt2", 00:19:22.389 "aliases": [ 00:19:22.389 "00000000-0000-0000-0000-000000000002" 00:19:22.389 ], 00:19:22.389 "product_name": "passthru", 00:19:22.389 "block_size": 512, 00:19:22.389 "num_blocks": 65536, 00:19:22.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.389 "assigned_rate_limits": { 00:19:22.389 "rw_ios_per_sec": 0, 00:19:22.389 "rw_mbytes_per_sec": 0, 00:19:22.389 "r_mbytes_per_sec": 0, 00:19:22.389 "w_mbytes_per_sec": 0 00:19:22.389 }, 00:19:22.389 "claimed": true, 00:19:22.389 "claim_type": "exclusive_write", 00:19:22.389 "zoned": false, 00:19:22.389 "supported_io_types": { 00:19:22.389 "read": true, 00:19:22.389 "write": true, 00:19:22.389 "unmap": true, 00:19:22.389 "flush": true, 00:19:22.389 "reset": true, 00:19:22.389 "nvme_admin": false, 00:19:22.389 "nvme_io": false, 00:19:22.389 "nvme_io_md": false, 00:19:22.389 "write_zeroes": true, 00:19:22.389 "zcopy": true, 00:19:22.389 "get_zone_info": false, 00:19:22.389 "zone_management": false, 00:19:22.389 "zone_append": false, 00:19:22.389 "compare": false, 00:19:22.389 "compare_and_write": false, 00:19:22.389 "abort": true, 00:19:22.389 "seek_hole": false, 00:19:22.389 "seek_data": false, 00:19:22.389 "copy": true, 00:19:22.389 "nvme_iov_md": false 00:19:22.389 }, 00:19:22.389 "memory_domains": [ 00:19:22.389 { 00:19:22.389 "dma_device_id": "system", 00:19:22.389 "dma_device_type": 1 00:19:22.389 }, 00:19:22.389 { 
00:19:22.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.389 "dma_device_type": 2 00:19:22.389 } 00:19:22.389 ], 00:19:22.389 "driver_specific": { 00:19:22.389 "passthru": { 00:19:22.389 "name": "pt2", 00:19:22.389 "base_bdev_name": "malloc2" 00:19:22.389 } 00:19:22.389 } 00:19:22.389 }' 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:22.389 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:22.648 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:22.648 "name": "pt3", 00:19:22.648 "aliases": [ 00:19:22.648 "00000000-0000-0000-0000-000000000003" 00:19:22.648 ], 00:19:22.648 "product_name": "passthru", 00:19:22.648 "block_size": 512, 00:19:22.648 "num_blocks": 65536, 00:19:22.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:22.648 "assigned_rate_limits": { 00:19:22.648 "rw_ios_per_sec": 0, 00:19:22.648 "rw_mbytes_per_sec": 0, 00:19:22.648 "r_mbytes_per_sec": 0, 00:19:22.648 "w_mbytes_per_sec": 0 00:19:22.648 }, 00:19:22.648 "claimed": true, 00:19:22.648 "claim_type": "exclusive_write", 00:19:22.648 "zoned": false, 00:19:22.648 "supported_io_types": { 00:19:22.648 "read": true, 00:19:22.648 "write": true, 00:19:22.648 "unmap": true, 00:19:22.648 "flush": true, 00:19:22.648 "reset": true, 00:19:22.648 "nvme_admin": false, 00:19:22.648 "nvme_io": false, 00:19:22.648 "nvme_io_md": false, 00:19:22.648 "write_zeroes": true, 00:19:22.648 "zcopy": true, 00:19:22.648 "get_zone_info": false, 00:19:22.648 "zone_management": false, 00:19:22.648 "zone_append": false, 00:19:22.648 "compare": false, 00:19:22.648 "compare_and_write": false, 00:19:22.648 "abort": true, 00:19:22.648 "seek_hole": false, 00:19:22.648 "seek_data": false, 00:19:22.648 "copy": true, 00:19:22.648 "nvme_iov_md": false 00:19:22.648 }, 00:19:22.648 "memory_domains": [ 00:19:22.648 { 00:19:22.648 "dma_device_id": "system", 00:19:22.648 "dma_device_type": 1 00:19:22.648 }, 00:19:22.648 { 00:19:22.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.648 "dma_device_type": 2 00:19:22.648 } 00:19:22.648 ], 00:19:22.648 "driver_specific": { 
00:19:22.648 "passthru": { 00:19:22.648 "name": "pt3", 00:19:22.648 "base_bdev_name": "malloc3" 00:19:22.648 } 00:19:22.648 } 00:19:22.648 }' 00:19:22.648 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.648 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.648 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:22.648 15:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.648 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.648 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:22.648 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.648 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.648 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:22.648 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.648 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.648 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:22.648 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:22.648 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:19:22.907 [2024-07-23 15:13:18.316344] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:23.167 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=68b0b029-a470-4b76-b1d2-00115875e584 00:19:23.167 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 68b0b029-a470-4b76-b1d2-00115875e584 ']' 00:19:23.167 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:23.167 [2024-07-23 15:13:18.588096] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.167 [2024-07-23 15:13:18.588138] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:23.167 [2024-07-23 15:13:18.588262] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.167 [2024-07-23 15:13:18.588346] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.167 [2024-07-23 15:13:18.588368] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007880 name raid_bdev1, state offline 00:19:23.426 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:23.426 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.686 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:23.686 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:23.686 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:23.686 15:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:23.686 15:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:23.686 15:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:23.944 15:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:23.944 15:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:24.202 15:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:24.202 15:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:24.462 15:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:19:24.462 15:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:24.462 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:24.462 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:24.462 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.462 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.462 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.462 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.462 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.462 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.462 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.462 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:24.462 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:24.721 [2024-07-23 15:13:19.976389] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:24.721 [2024-07-23 15:13:19.978556] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:24.721 [2024-07-23 15:13:19.978616] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:24.721 [2024-07-23 15:13:19.978671] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:24.721 [2024-07-23 15:13:19.978730] 
bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:24.721 [2024-07-23 15:13:19.978776] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:24.721 [2024-07-23 15:13:19.978794] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:24.721 [2024-07-23 15:13:19.978821] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state configuring 00:19:24.721 request: 00:19:24.721 { 00:19:24.721 "name": "raid_bdev1", 00:19:24.721 "raid_level": "raid1", 00:19:24.721 "base_bdevs": [ 00:19:24.721 "malloc1", 00:19:24.721 "malloc2", 00:19:24.721 "malloc3" 00:19:24.721 ], 00:19:24.721 "superblock": false, 00:19:24.721 "method": "bdev_raid_create", 00:19:24.721 "req_id": 1 00:19:24.721 } 00:19:24.721 Got JSON-RPC error response 00:19:24.721 response: 00:19:24.721 { 00:19:24.721 "code": -17, 00:19:24.721 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:24.721 } 00:19:24.721 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:19:24.721 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:24.721 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:24.721 15:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:24.721 15:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:19:24.721 15:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.980 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:19:24.980 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:19:24.980 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:24.980 [2024-07-23 15:13:20.400363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:24.980 [2024-07-23 15:13:20.400437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.980 [2024-07-23 15:13:20.400459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:19:24.980 [2024-07-23 15:13:20.400473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.980 [2024-07-23 15:13:20.402935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.980 [2024-07-23 15:13:20.402977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:24.980 [2024-07-23 15:13:20.403055] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:24.980 [2024-07-23 15:13:20.403100] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:24.980 pt1 00:19:25.239 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:25.239 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:25.240 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:19:25.240 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:25.240 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:25.240 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:25.240 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:25.240 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:25.240 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:25.240 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:25.240 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.240 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.240 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:25.240 "name": "raid_bdev1", 00:19:25.240 "uuid": "68b0b029-a470-4b76-b1d2-00115875e584", 00:19:25.240 "strip_size_kb": 0, 00:19:25.240 "state": "configuring", 00:19:25.240 "raid_level": "raid1", 00:19:25.240 "superblock": true, 00:19:25.240 "num_base_bdevs": 3, 00:19:25.240 "num_base_bdevs_discovered": 1, 00:19:25.240 "num_base_bdevs_operational": 3, 00:19:25.240 "base_bdevs_list": [ 00:19:25.240 { 00:19:25.240 "name": "pt1", 00:19:25.240 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:25.240 "is_configured": true, 00:19:25.240 "data_offset": 2048, 00:19:25.240 "data_size": 63488 00:19:25.240 }, 00:19:25.240 { 00:19:25.240 "name": null, 00:19:25.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:25.240 "is_configured": false, 00:19:25.240 "data_offset": 2048, 00:19:25.240 "data_size": 63488 00:19:25.240 }, 00:19:25.240 { 00:19:25.240 "name": null, 00:19:25.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:25.240 "is_configured": false, 00:19:25.240 "data_offset": 2048, 00:19:25.240 "data_size": 63488 00:19:25.240 } 00:19:25.240 ] 00:19:25.240 }' 00:19:25.240 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:25.240 15:13:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.498 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:19:25.498 15:13:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:25.757 [2024-07-23 15:13:21.072514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:25.757 [2024-07-23 15:13:21.072599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.757 [2024-07-23 15:13:21.072624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:19:25.757 [2024-07-23 15:13:21.072639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.757 [2024-07-23 15:13:21.073078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.757 [2024-07-23 15:13:21.073113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:25.757 [2024-07-23 
15:13:21.073187] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:25.757 [2024-07-23 15:13:21.073213] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:25.757 pt2 00:19:25.757 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:26.017 [2024-07-23 15:13:21.240625] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:26.017 "name": "raid_bdev1", 00:19:26.017 "uuid": "68b0b029-a470-4b76-b1d2-00115875e584", 00:19:26.017 "strip_size_kb": 0, 00:19:26.017 "state": "configuring", 00:19:26.017 "raid_level": "raid1", 00:19:26.017 "superblock": true, 00:19:26.017 "num_base_bdevs": 3, 00:19:26.017 "num_base_bdevs_discovered": 1, 00:19:26.017 "num_base_bdevs_operational": 3, 00:19:26.017 "base_bdevs_list": [ 00:19:26.017 { 00:19:26.017 "name": "pt1", 00:19:26.017 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:26.017 "is_configured": true, 00:19:26.017 "data_offset": 2048, 00:19:26.017 "data_size": 63488 00:19:26.017 }, 00:19:26.017 { 00:19:26.017 "name": null, 00:19:26.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:26.017 "is_configured": false, 00:19:26.017 "data_offset": 2048, 00:19:26.017 "data_size": 63488 00:19:26.017 }, 00:19:26.017 { 00:19:26.017 "name": null, 00:19:26.017 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:26.017 "is_configured": false, 00:19:26.017 "data_offset": 2048, 00:19:26.017 "data_size": 63488 00:19:26.017 } 00:19:26.017 ] 00:19:26.017 }' 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:26.017 15:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.276 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:19:26.276 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:26.276 15:13:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:26.535 [2024-07-23 15:13:21.936706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:26.535 [2024-07-23 15:13:21.936798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.535 [2024-07-23 15:13:21.936825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:19:26.535 [2024-07-23 15:13:21.936837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.535 [2024-07-23 15:13:21.937260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.535 [2024-07-23 15:13:21.937288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:26.535 [2024-07-23 15:13:21.937366] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:26.535 [2024-07-23 15:13:21.937388] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:26.535 pt2 00:19:26.535 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:26.535 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:26.535 15:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:26.794 [2024-07-23 15:13:22.220748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:26.794 [2024-07-23 15:13:22.220837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.794 [2024-07-23 15:13:22.220863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:19:26.794 [2024-07-23 15:13:22.220875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.794 [2024-07-23 15:13:22.221298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.794 [2024-07-23 15:13:22.221327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:26.794 [2024-07-23 15:13:22.221407] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:26.794 [2024-07-23 15:13:22.221430] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:26.794 [2024-07-23 15:13:22.221548] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:19:26.794 [2024-07-23 15:13:22.221558] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:26.794 [2024-07-23 15:13:22.221627] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:19:26.794 [2024-07-23 15:13:22.221950] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:19:26.794 [2024-07-23 15:13:22.221976] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008a80 00:19:26.794 [2024-07-23 15:13:22.222102] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.794 pt3 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.053 "name": "raid_bdev1", 00:19:27.053 "uuid": "68b0b029-a470-4b76-b1d2-00115875e584", 00:19:27.053 "strip_size_kb": 0, 00:19:27.053 "state": "online", 00:19:27.053 "raid_level": "raid1", 00:19:27.053 "superblock": true, 00:19:27.053 "num_base_bdevs": 3, 00:19:27.053 "num_base_bdevs_discovered": 3, 00:19:27.053 "num_base_bdevs_operational": 3, 00:19:27.053 "base_bdevs_list": [ 00:19:27.053 { 00:19:27.053 "name": "pt1", 00:19:27.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:27.053 "is_configured": true, 00:19:27.053 "data_offset": 2048, 00:19:27.053 "data_size": 63488 00:19:27.053 }, 00:19:27.053 { 00:19:27.053 "name": "pt2", 00:19:27.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:27.053 "is_configured": true, 00:19:27.053 "data_offset": 2048, 00:19:27.053 "data_size": 63488 00:19:27.053 }, 00:19:27.053 { 00:19:27.053 "name": "pt3", 00:19:27.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:27.053 "is_configured": true, 00:19:27.053 "data_offset": 2048, 00:19:27.053 "data_size": 63488 00:19:27.053 } 00:19:27.053 ] 00:19:27.053 }' 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.053 15:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.312 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:19:27.312 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:27.312 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:27.312 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:27.312 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:27.312 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:27.312 15:13:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:27.312 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:27.571 [2024-07-23 15:13:22.933177] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:27.571 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:27.571 "name": "raid_bdev1", 00:19:27.571 "aliases": [ 00:19:27.571 "68b0b029-a470-4b76-b1d2-00115875e584" 00:19:27.571 ], 00:19:27.571 "product_name": "Raid Volume", 00:19:27.571 "block_size": 512, 00:19:27.571 "num_blocks": 63488, 00:19:27.571 "uuid": "68b0b029-a470-4b76-b1d2-00115875e584", 00:19:27.571 "assigned_rate_limits": { 00:19:27.571 "rw_ios_per_sec": 0, 00:19:27.571 "rw_mbytes_per_sec": 0, 00:19:27.571 "r_mbytes_per_sec": 0, 00:19:27.571 "w_mbytes_per_sec": 0 00:19:27.571 }, 00:19:27.571 "claimed": false, 00:19:27.571 "zoned": false, 00:19:27.571 "supported_io_types": { 00:19:27.571 "read": true, 00:19:27.571 "write": true, 00:19:27.571 "unmap": false, 00:19:27.571 "flush": false, 00:19:27.571 "reset": true, 00:19:27.571 "nvme_admin": false, 00:19:27.571 "nvme_io": false, 00:19:27.571 "nvme_io_md": false, 00:19:27.571 "write_zeroes": true, 00:19:27.571 "zcopy": false, 00:19:27.571 "get_zone_info": false, 00:19:27.571 "zone_management": false, 00:19:27.571 "zone_append": false, 00:19:27.571 "compare": false, 00:19:27.571 "compare_and_write": false, 00:19:27.571 "abort": false, 00:19:27.571 "seek_hole": false, 00:19:27.571 "seek_data": false, 00:19:27.571 "copy": false, 00:19:27.571 "nvme_iov_md": false 00:19:27.571 }, 00:19:27.571 "memory_domains": [ 00:19:27.571 { 00:19:27.571 "dma_device_id": "system", 00:19:27.571 "dma_device_type": 1 00:19:27.571 }, 00:19:27.571 { 00:19:27.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.571 "dma_device_type": 2 00:19:27.571 }, 00:19:27.571 { 00:19:27.571 "dma_device_id": "system", 00:19:27.571 "dma_device_type": 1 00:19:27.571 }, 00:19:27.571 { 00:19:27.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.571 "dma_device_type": 2 00:19:27.571 }, 00:19:27.571 { 00:19:27.571 "dma_device_id": "system", 00:19:27.571 "dma_device_type": 1 00:19:27.571 }, 00:19:27.571 { 00:19:27.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.571 "dma_device_type": 2 00:19:27.571 } 00:19:27.571 ], 00:19:27.571 "driver_specific": { 00:19:27.571 "raid": { 00:19:27.571 "uuid": "68b0b029-a470-4b76-b1d2-00115875e584", 00:19:27.571 "strip_size_kb": 0, 00:19:27.571 "state": "online", 00:19:27.571 "raid_level": "raid1", 00:19:27.571 "superblock": true, 00:19:27.571 "num_base_bdevs": 3, 00:19:27.571 "num_base_bdevs_discovered": 3, 00:19:27.571 "num_base_bdevs_operational": 3, 00:19:27.571 "base_bdevs_list": [ 00:19:27.571 { 00:19:27.571 "name": "pt1", 00:19:27.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:27.571 "is_configured": true, 00:19:27.571 "data_offset": 2048, 00:19:27.571 "data_size": 63488 00:19:27.571 }, 00:19:27.571 { 00:19:27.571 "name": "pt2", 00:19:27.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:27.571 "is_configured": true, 00:19:27.571 "data_offset": 2048, 00:19:27.571 "data_size": 63488 00:19:27.571 }, 00:19:27.571 { 00:19:27.571 "name": "pt3", 00:19:27.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:27.571 "is_configured": true, 00:19:27.571 "data_offset": 2048, 00:19:27.571 "data_size": 63488 00:19:27.571 } 00:19:27.571 ] 00:19:27.571 } 00:19:27.571 } 00:19:27.571 }' 
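The dump above is what verify_raid_bdev_properties works from: the bdev_raid.sh@201 step extracts the configured base bdev names from driver_specific.raid.base_bdevs_list, and the @203-@208 loop that follows re-queries each base bdev and compares block_size, md_size, md_interleave and dif_type against the expected values, repeated here after the array has been re-assembled from its superblocks. Roughly, as a shell sketch (jq filters copied from the trace; assumes the same RPC socket and that the harness runs under errexit, so any failed check aborts the test):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    names=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r \
            '.[] | .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
    for name in $names; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<< "$info") == 512  ]]  # same logical block size as the array
        [[ $(jq .md_size       <<< "$info") == null ]]  # no separate metadata region
        [[ $(jq .md_interleave <<< "$info") == null ]]
        [[ $(jq .dif_type      <<< "$info") == null ]]  # DIF not enabled on the base bdevs
    done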
00:19:27.571 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:27.571 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:27.571 pt2 00:19:27.571 pt3' 00:19:27.571 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:27.571 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:27.571 15:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:27.830 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:27.830 "name": "pt1", 00:19:27.830 "aliases": [ 00:19:27.830 "00000000-0000-0000-0000-000000000001" 00:19:27.830 ], 00:19:27.830 "product_name": "passthru", 00:19:27.830 "block_size": 512, 00:19:27.830 "num_blocks": 65536, 00:19:27.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:27.830 "assigned_rate_limits": { 00:19:27.830 "rw_ios_per_sec": 0, 00:19:27.830 "rw_mbytes_per_sec": 0, 00:19:27.830 "r_mbytes_per_sec": 0, 00:19:27.830 "w_mbytes_per_sec": 0 00:19:27.830 }, 00:19:27.830 "claimed": true, 00:19:27.830 "claim_type": "exclusive_write", 00:19:27.830 "zoned": false, 00:19:27.830 "supported_io_types": { 00:19:27.830 "read": true, 00:19:27.830 "write": true, 00:19:27.830 "unmap": true, 00:19:27.830 "flush": true, 00:19:27.830 "reset": true, 00:19:27.830 "nvme_admin": false, 00:19:27.830 "nvme_io": false, 00:19:27.830 "nvme_io_md": false, 00:19:27.830 "write_zeroes": true, 00:19:27.830 "zcopy": true, 00:19:27.830 "get_zone_info": false, 00:19:27.830 "zone_management": false, 00:19:27.830 "zone_append": false, 00:19:27.830 "compare": false, 00:19:27.830 "compare_and_write": false, 00:19:27.830 "abort": true, 00:19:27.830 "seek_hole": false, 00:19:27.830 "seek_data": false, 00:19:27.830 "copy": true, 00:19:27.830 "nvme_iov_md": false 00:19:27.830 }, 00:19:27.830 "memory_domains": [ 00:19:27.830 { 00:19:27.830 "dma_device_id": "system", 00:19:27.830 "dma_device_type": 1 00:19:27.830 }, 00:19:27.830 { 00:19:27.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.830 "dma_device_type": 2 00:19:27.830 } 00:19:27.830 ], 00:19:27.830 "driver_specific": { 00:19:27.830 "passthru": { 00:19:27.830 "name": "pt1", 00:19:27.830 "base_bdev_name": "malloc1" 00:19:27.830 } 00:19:27.830 } 00:19:27.830 }' 00:19:27.830 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:27.830 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:27.830 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:27.830 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:27.830 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:28.089 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:28.089 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:28.089 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:28.090 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:28.090 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:28.090 15:13:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:28.090 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:28.090 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:28.090 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:28.090 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:28.090 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:28.090 "name": "pt2", 00:19:28.090 "aliases": [ 00:19:28.090 "00000000-0000-0000-0000-000000000002" 00:19:28.090 ], 00:19:28.090 "product_name": "passthru", 00:19:28.090 "block_size": 512, 00:19:28.090 "num_blocks": 65536, 00:19:28.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:28.090 "assigned_rate_limits": { 00:19:28.090 "rw_ios_per_sec": 0, 00:19:28.090 "rw_mbytes_per_sec": 0, 00:19:28.090 "r_mbytes_per_sec": 0, 00:19:28.090 "w_mbytes_per_sec": 0 00:19:28.090 }, 00:19:28.090 "claimed": true, 00:19:28.090 "claim_type": "exclusive_write", 00:19:28.090 "zoned": false, 00:19:28.090 "supported_io_types": { 00:19:28.090 "read": true, 00:19:28.090 "write": true, 00:19:28.090 "unmap": true, 00:19:28.090 "flush": true, 00:19:28.090 "reset": true, 00:19:28.090 "nvme_admin": false, 00:19:28.090 "nvme_io": false, 00:19:28.090 "nvme_io_md": false, 00:19:28.090 "write_zeroes": true, 00:19:28.090 "zcopy": true, 00:19:28.090 "get_zone_info": false, 00:19:28.090 "zone_management": false, 00:19:28.090 "zone_append": false, 00:19:28.090 "compare": false, 00:19:28.090 "compare_and_write": false, 00:19:28.090 "abort": true, 00:19:28.090 "seek_hole": false, 00:19:28.090 "seek_data": false, 00:19:28.090 "copy": true, 00:19:28.090 "nvme_iov_md": false 00:19:28.090 }, 00:19:28.090 "memory_domains": [ 00:19:28.090 { 00:19:28.090 "dma_device_id": "system", 00:19:28.090 "dma_device_type": 1 00:19:28.090 }, 00:19:28.090 { 00:19:28.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.090 "dma_device_type": 2 00:19:28.090 } 00:19:28.090 ], 00:19:28.090 "driver_specific": { 00:19:28.090 "passthru": { 00:19:28.090 "name": "pt2", 00:19:28.090 "base_bdev_name": "malloc2" 00:19:28.090 } 00:19:28.090 } 00:19:28.090 }' 00:19:28.090 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:28.090 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:28.090 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:28.090 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# [[ null == null ]] 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:28.350 "name": "pt3", 00:19:28.350 "aliases": [ 00:19:28.350 "00000000-0000-0000-0000-000000000003" 00:19:28.350 ], 00:19:28.350 "product_name": "passthru", 00:19:28.350 "block_size": 512, 00:19:28.350 "num_blocks": 65536, 00:19:28.350 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:28.350 "assigned_rate_limits": { 00:19:28.350 "rw_ios_per_sec": 0, 00:19:28.350 "rw_mbytes_per_sec": 0, 00:19:28.350 "r_mbytes_per_sec": 0, 00:19:28.350 "w_mbytes_per_sec": 0 00:19:28.350 }, 00:19:28.350 "claimed": true, 00:19:28.350 "claim_type": "exclusive_write", 00:19:28.350 "zoned": false, 00:19:28.350 "supported_io_types": { 00:19:28.350 "read": true, 00:19:28.350 "write": true, 00:19:28.350 "unmap": true, 00:19:28.350 "flush": true, 00:19:28.350 "reset": true, 00:19:28.350 "nvme_admin": false, 00:19:28.350 "nvme_io": false, 00:19:28.350 "nvme_io_md": false, 00:19:28.350 "write_zeroes": true, 00:19:28.350 "zcopy": true, 00:19:28.350 "get_zone_info": false, 00:19:28.350 "zone_management": false, 00:19:28.350 "zone_append": false, 00:19:28.350 "compare": false, 00:19:28.350 "compare_and_write": false, 00:19:28.350 "abort": true, 00:19:28.350 "seek_hole": false, 00:19:28.350 "seek_data": false, 00:19:28.350 "copy": true, 00:19:28.350 "nvme_iov_md": false 00:19:28.350 }, 00:19:28.350 "memory_domains": [ 00:19:28.350 { 00:19:28.350 "dma_device_id": "system", 00:19:28.350 "dma_device_type": 1 00:19:28.350 }, 00:19:28.350 { 00:19:28.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.350 "dma_device_type": 2 00:19:28.350 } 00:19:28.350 ], 00:19:28.350 "driver_specific": { 00:19:28.350 "passthru": { 00:19:28.350 "name": "pt3", 00:19:28.350 "base_bdev_name": "malloc3" 00:19:28.350 } 00:19:28.350 } 00:19:28.350 }' 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:28.350 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:28.609 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:28.609 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:28.609 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:28.609 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:28.609 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:28.609 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:28.610 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:28.610 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:28.610 15:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:19:28.610 15:13:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:28.869 [2024-07-23 15:13:24.097419] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.869 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 68b0b029-a470-4b76-b1d2-00115875e584 '!=' 68b0b029-a470-4b76-b1d2-00115875e584 ']' 00:19:28.869 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:19:28.869 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:28.869 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:28.869 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:28.869 [2024-07-23 15:13:24.289234] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:29.128 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:29.128 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:29.128 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:29.128 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:29.128 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:29.128 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:29.128 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:29.128 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:29.128 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:29.128 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:29.128 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.128 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.399 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.399 "name": "raid_bdev1", 00:19:29.399 "uuid": "68b0b029-a470-4b76-b1d2-00115875e584", 00:19:29.399 "strip_size_kb": 0, 00:19:29.399 "state": "online", 00:19:29.399 "raid_level": "raid1", 00:19:29.399 "superblock": true, 00:19:29.399 "num_base_bdevs": 3, 00:19:29.399 "num_base_bdevs_discovered": 2, 00:19:29.399 "num_base_bdevs_operational": 2, 00:19:29.399 "base_bdevs_list": [ 00:19:29.399 { 00:19:29.399 "name": null, 00:19:29.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.399 "is_configured": false, 00:19:29.399 "data_offset": 2048, 00:19:29.399 "data_size": 63488 00:19:29.399 }, 00:19:29.399 { 00:19:29.399 "name": "pt2", 00:19:29.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:29.399 "is_configured": true, 00:19:29.399 "data_offset": 2048, 00:19:29.399 "data_size": 63488 00:19:29.399 }, 00:19:29.399 { 00:19:29.399 "name": "pt3", 00:19:29.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:29.399 "is_configured": true, 00:19:29.399 "data_offset": 2048, 00:19:29.399 
"data_size": 63488 00:19:29.399 } 00:19:29.399 ] 00:19:29.399 }' 00:19:29.399 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.399 15:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.688 15:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:29.947 [2024-07-23 15:13:25.145326] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.947 [2024-07-23 15:13:25.145373] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:29.947 [2024-07-23 15:13:25.145449] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.947 [2024-07-23 15:13:25.145516] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:29.947 [2024-07-23 15:13:25.145528] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state offline 00:19:29.947 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.947 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:19:30.206 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:19:30.206 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:19:30.206 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:19:30.206 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:19:30.206 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:30.206 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:19:30.206 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:19:30.206 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:30.465 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:19:30.465 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:19:30.465 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:19:30.465 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:19:30.465 15:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:30.724 [2024-07-23 15:13:25.989483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:30.724 [2024-07-23 15:13:25.989565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.724 [2024-07-23 15:13:25.989589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680 00:19:30.724 [2024-07-23 15:13:25.989602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.724 [2024-07-23 15:13:25.992058] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:19:30.724 [2024-07-23 15:13:25.992100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:30.724 [2024-07-23 15:13:25.992178] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:30.724 [2024-07-23 15:13:25.992209] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:30.724 pt2 00:19:30.724 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:30.724 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:30.724 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:30.724 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:30.724 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:30.724 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:30.724 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.724 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:30.724 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.724 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.724 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.724 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.984 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:30.984 "name": "raid_bdev1", 00:19:30.984 "uuid": "68b0b029-a470-4b76-b1d2-00115875e584", 00:19:30.984 "strip_size_kb": 0, 00:19:30.984 "state": "configuring", 00:19:30.984 "raid_level": "raid1", 00:19:30.984 "superblock": true, 00:19:30.984 "num_base_bdevs": 3, 00:19:30.984 "num_base_bdevs_discovered": 1, 00:19:30.984 "num_base_bdevs_operational": 2, 00:19:30.984 "base_bdevs_list": [ 00:19:30.984 { 00:19:30.984 "name": null, 00:19:30.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.984 "is_configured": false, 00:19:30.984 "data_offset": 2048, 00:19:30.984 "data_size": 63488 00:19:30.984 }, 00:19:30.984 { 00:19:30.984 "name": "pt2", 00:19:30.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:30.984 "is_configured": true, 00:19:30.984 "data_offset": 2048, 00:19:30.984 "data_size": 63488 00:19:30.984 }, 00:19:30.984 { 00:19:30.984 "name": null, 00:19:30.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:30.984 "is_configured": false, 00:19:30.984 "data_offset": 2048, 00:19:30.984 "data_size": 63488 00:19:30.984 } 00:19:30.984 ] 00:19:30.984 }' 00:19:30.984 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:30.984 15:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.243 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:19:31.243 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:19:31.243 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:19:31.243 15:13:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:31.502 [2024-07-23 15:13:26.733721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:31.502 [2024-07-23 15:13:26.733814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.502 [2024-07-23 15:13:26.733842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:19:31.502 [2024-07-23 15:13:26.733854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.502 [2024-07-23 15:13:26.734267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.502 [2024-07-23 15:13:26.734312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:31.502 [2024-07-23 15:13:26.734394] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:31.502 [2024-07-23 15:13:26.734424] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:31.502 [2024-07-23 15:13:26.734533] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009c80 00:19:31.502 [2024-07-23 15:13:26.734544] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:31.502 [2024-07-23 15:13:26.734616] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:19:31.503 [2024-07-23 15:13:26.734965] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009c80 00:19:31.503 [2024-07-23 15:13:26.734991] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009c80 00:19:31.503 [2024-07-23 15:13:26.735090] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.503 pt3 00:19:31.503 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:31.503 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:31.503 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:31.503 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:31.503 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:31.503 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:31.503 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:31.503 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:31.503 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:31.503 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:31.503 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.503 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.762 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:31.762 "name": "raid_bdev1", 00:19:31.762 "uuid": 
"68b0b029-a470-4b76-b1d2-00115875e584", 00:19:31.762 "strip_size_kb": 0, 00:19:31.762 "state": "online", 00:19:31.762 "raid_level": "raid1", 00:19:31.762 "superblock": true, 00:19:31.762 "num_base_bdevs": 3, 00:19:31.762 "num_base_bdevs_discovered": 2, 00:19:31.762 "num_base_bdevs_operational": 2, 00:19:31.762 "base_bdevs_list": [ 00:19:31.762 { 00:19:31.762 "name": null, 00:19:31.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.762 "is_configured": false, 00:19:31.762 "data_offset": 2048, 00:19:31.762 "data_size": 63488 00:19:31.762 }, 00:19:31.762 { 00:19:31.762 "name": "pt2", 00:19:31.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:31.762 "is_configured": true, 00:19:31.762 "data_offset": 2048, 00:19:31.762 "data_size": 63488 00:19:31.762 }, 00:19:31.762 { 00:19:31.762 "name": "pt3", 00:19:31.762 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:31.762 "is_configured": true, 00:19:31.762 "data_offset": 2048, 00:19:31.762 "data_size": 63488 00:19:31.762 } 00:19:31.762 ] 00:19:31.762 }' 00:19:31.762 15:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:31.762 15:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.021 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:32.021 [2024-07-23 15:13:27.361718] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:32.021 [2024-07-23 15:13:27.361764] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:32.021 [2024-07-23 15:13:27.361856] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.021 [2024-07-23 15:13:27.361914] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:32.021 [2024-07-23 15:13:27.361929] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state offline 00:19:32.021 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.021 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:19:32.280 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:19:32.280 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:19:32.280 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:19:32.280 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:19:32.280 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:32.539 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:32.539 [2024-07-23 15:13:27.893829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:32.539 [2024-07-23 15:13:27.893909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.539 [2024-07-23 15:13:27.893931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:19:32.539 [2024-07-23 15:13:27.893950] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.539 [2024-07-23 15:13:27.896401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.539 [2024-07-23 15:13:27.896450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:32.539 [2024-07-23 15:13:27.896527] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:32.539 [2024-07-23 15:13:27.896562] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:32.539 [2024-07-23 15:13:27.896673] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:32.539 [2024-07-23 15:13:27.896700] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:32.539 [2024-07-23 15:13:27.896723] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state configuring 00:19:32.539 [2024-07-23 15:13:27.896760] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:32.539 pt1 00:19:32.539 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:19:32.539 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:32.539 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:32.539 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:32.539 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:32.539 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:32.539 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:32.539 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:32.539 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:32.539 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:32.539 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:32.540 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.540 15:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.799 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.799 "name": "raid_bdev1", 00:19:32.799 "uuid": "68b0b029-a470-4b76-b1d2-00115875e584", 00:19:32.799 "strip_size_kb": 0, 00:19:32.799 "state": "configuring", 00:19:32.799 "raid_level": "raid1", 00:19:32.799 "superblock": true, 00:19:32.799 "num_base_bdevs": 3, 00:19:32.799 "num_base_bdevs_discovered": 1, 00:19:32.799 "num_base_bdevs_operational": 2, 00:19:32.799 "base_bdevs_list": [ 00:19:32.799 { 00:19:32.799 "name": null, 00:19:32.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.799 "is_configured": false, 00:19:32.799 "data_offset": 2048, 00:19:32.799 "data_size": 63488 00:19:32.799 }, 00:19:32.799 { 00:19:32.799 "name": "pt2", 00:19:32.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:32.799 "is_configured": true, 00:19:32.799 "data_offset": 2048, 
00:19:32.799 "data_size": 63488 00:19:32.799 }, 00:19:32.799 { 00:19:32.799 "name": null, 00:19:32.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:32.799 "is_configured": false, 00:19:32.799 "data_offset": 2048, 00:19:32.799 "data_size": 63488 00:19:32.799 } 00:19:32.799 ] 00:19:32.799 }' 00:19:32.799 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.799 15:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.059 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:19:33.059 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:33.317 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:19:33.317 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:33.577 [2024-07-23 15:13:28.826043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:33.577 [2024-07-23 15:13:28.826271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.577 [2024-07-23 15:13:28.826328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:19:33.577 [2024-07-23 15:13:28.826419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.577 [2024-07-23 15:13:28.826898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.577 [2024-07-23 15:13:28.826927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:33.577 [2024-07-23 15:13:28.827003] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:33.577 [2024-07-23 15:13:28.827033] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:33.577 [2024-07-23 15:13:28.827138] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:19:33.577 [2024-07-23 15:13:28.827151] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:33.577 [2024-07-23 15:13:28.827215] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:19:33.577 [2024-07-23 15:13:28.827506] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:19:33.577 [2024-07-23 15:13:28.827518] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:19:33.577 [2024-07-23 15:13:28.827614] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.577 pt3 00:19:33.577 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:33.577 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:33.577 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:33.577 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:33.577 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:33.577 15:13:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:33.577 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.577 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.577 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.577 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.577 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.577 15:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.835 15:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:33.835 "name": "raid_bdev1", 00:19:33.835 "uuid": "68b0b029-a470-4b76-b1d2-00115875e584", 00:19:33.835 "strip_size_kb": 0, 00:19:33.835 "state": "online", 00:19:33.835 "raid_level": "raid1", 00:19:33.835 "superblock": true, 00:19:33.835 "num_base_bdevs": 3, 00:19:33.835 "num_base_bdevs_discovered": 2, 00:19:33.835 "num_base_bdevs_operational": 2, 00:19:33.835 "base_bdevs_list": [ 00:19:33.835 { 00:19:33.835 "name": null, 00:19:33.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.835 "is_configured": false, 00:19:33.835 "data_offset": 2048, 00:19:33.835 "data_size": 63488 00:19:33.835 }, 00:19:33.835 { 00:19:33.835 "name": "pt2", 00:19:33.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:33.835 "is_configured": true, 00:19:33.835 "data_offset": 2048, 00:19:33.835 "data_size": 63488 00:19:33.835 }, 00:19:33.835 { 00:19:33.835 "name": "pt3", 00:19:33.835 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:33.835 "is_configured": true, 00:19:33.835 "data_offset": 2048, 00:19:33.835 "data_size": 63488 00:19:33.835 } 00:19:33.835 ] 00:19:33.835 }' 00:19:33.835 15:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:33.835 15:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.095 15:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:34.095 15:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:19:34.354 15:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:19:34.354 15:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:19:34.354 15:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:34.354 [2024-07-23 15:13:29.698465] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.354 15:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 68b0b029-a470-4b76-b1d2-00115875e584 '!=' 68b0b029-a470-4b76-b1d2-00115875e584 ']' 00:19:34.354 15:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 97898 00:19:34.354 15:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 97898 ']' 00:19:34.354 15:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 97898 00:19:34.354 15:13:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@953 -- # uname 00:19:34.354 15:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:34.354 15:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97898 00:19:34.354 killing process with pid 97898 00:19:34.354 15:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:34.354 15:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:34.355 15:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97898' 00:19:34.355 15:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 97898 00:19:34.355 [2024-07-23 15:13:29.752137] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:34.355 15:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 97898 00:19:34.355 [2024-07-23 15:13:29.752232] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.355 [2024-07-23 15:13:29.752294] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.355 [2024-07-23 15:13:29.752305] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:19:34.614 [2024-07-23 15:13:29.787913] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:34.614 15:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:19:34.614 00:19:34.614 real 0m16.097s 00:19:34.614 user 0m27.838s 00:19:34.614 sys 0m3.517s 00:19:34.614 ************************************ 00:19:34.614 END TEST raid_superblock_test 00:19:34.614 ************************************ 00:19:34.614 15:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:34.614 15:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.872 15:13:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:34.872 15:13:30 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:19:34.872 15:13:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:34.872 15:13:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.872 15:13:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.872 ************************************ 00:19:34.872 START TEST raid_read_error_test 00:19:34.872 ************************************ 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ianLjAcpYv 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=98548 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 98548 /var/tmp/spdk-raid.sock 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 98548 ']' 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:34.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.873 15:13:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:34.873 [2024-07-23 15:13:30.180143] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:19:34.873 [2024-07-23 15:13:30.180357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98548 ] 00:19:35.132 [2024-07-23 15:13:30.332672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.132 [2024-07-23 15:13:30.388713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.132 [2024-07-23 15:13:30.441866] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.699 15:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.699 15:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:35.699 15:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:35.699 15:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:36.006 BaseBdev1_malloc 00:19:36.006 15:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:36.271 true 00:19:36.272 15:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:36.272 [2024-07-23 15:13:31.648489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:36.272 [2024-07-23 15:13:31.648580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.272 [2024-07-23 15:13:31.648612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:19:36.272 [2024-07-23 15:13:31.648625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.272 [2024-07-23 15:13:31.651261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.272 [2024-07-23 15:13:31.651304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:36.272 BaseBdev1 00:19:36.272 15:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:36.272 15:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:36.530 BaseBdev2_malloc 00:19:36.530 15:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:36.789 true 00:19:36.789 15:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:37.048 [2024-07-23 15:13:32.234334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:37.048 [2024-07-23 15:13:32.234417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.048 [2024-07-23 15:13:32.234449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:19:37.048 [2024-07-23 15:13:32.234461] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.048 [2024-07-23 15:13:32.236968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.048 [2024-07-23 15:13:32.237009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:37.048 BaseBdev2 00:19:37.048 15:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:37.048 15:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:37.048 BaseBdev3_malloc 00:19:37.048 15:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:37.306 true 00:19:37.306 15:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:37.565 [2024-07-23 15:13:32.804563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:37.565 [2024-07-23 15:13:32.804620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.565 [2024-07-23 15:13:32.804643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:19:37.565 [2024-07-23 15:13:32.804655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.565 [2024-07-23 15:13:32.807098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.565 [2024-07-23 15:13:32.807140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:37.565 BaseBdev3 00:19:37.565 15:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:37.823 [2024-07-23 15:13:33.036702] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:37.823 [2024-07-23 15:13:33.039102] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:37.823 [2024-07-23 15:13:33.039192] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:37.823 [2024-07-23 15:13:33.039390] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:19:37.823 [2024-07-23 15:13:33.039417] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:37.823 [2024-07-23 15:13:33.039521] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:19:37.823 [2024-07-23 15:13:33.039888] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:19:37.823 [2024-07-23 15:13:33.039911] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:19:37.823 [2024-07-23 15:13:33.040042] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.823 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:37.823 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:37.823 15:13:33 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:37.823 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:37.823 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:37.823 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:37.824 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:37.824 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:37.824 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:37.824 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:37.824 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.824 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.824 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:37.824 "name": "raid_bdev1", 00:19:37.824 "uuid": "36317427-3754-4d7f-8159-3c711b30f7ee", 00:19:37.824 "strip_size_kb": 0, 00:19:37.824 "state": "online", 00:19:37.824 "raid_level": "raid1", 00:19:37.824 "superblock": true, 00:19:37.824 "num_base_bdevs": 3, 00:19:37.824 "num_base_bdevs_discovered": 3, 00:19:37.824 "num_base_bdevs_operational": 3, 00:19:37.824 "base_bdevs_list": [ 00:19:37.824 { 00:19:37.824 "name": "BaseBdev1", 00:19:37.824 "uuid": "14029b6f-7f1a-53ff-a885-35799cfacff1", 00:19:37.824 "is_configured": true, 00:19:37.824 "data_offset": 2048, 00:19:37.824 "data_size": 63488 00:19:37.824 }, 00:19:37.824 { 00:19:37.824 "name": "BaseBdev2", 00:19:37.824 "uuid": "0d0a1897-3df3-5ace-b7c6-c6c0afb54f94", 00:19:37.824 "is_configured": true, 00:19:37.824 "data_offset": 2048, 00:19:37.824 "data_size": 63488 00:19:37.824 }, 00:19:37.824 { 00:19:37.824 "name": "BaseBdev3", 00:19:37.824 "uuid": "1cc7aa22-0f8e-556b-a1f8-2861350a0ce6", 00:19:37.824 "is_configured": true, 00:19:37.824 "data_offset": 2048, 00:19:37.824 "data_size": 63488 00:19:37.824 } 00:19:37.824 ] 00:19:37.824 }' 00:19:37.824 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:37.824 15:13:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.082 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:38.082 15:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:38.341 [2024-07-23 15:13:33.581235] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:19:39.277 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:39.535 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # 
expected_num_base_bdevs=3 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.536 "name": "raid_bdev1", 00:19:39.536 "uuid": "36317427-3754-4d7f-8159-3c711b30f7ee", 00:19:39.536 "strip_size_kb": 0, 00:19:39.536 "state": "online", 00:19:39.536 "raid_level": "raid1", 00:19:39.536 "superblock": true, 00:19:39.536 "num_base_bdevs": 3, 00:19:39.536 "num_base_bdevs_discovered": 3, 00:19:39.536 "num_base_bdevs_operational": 3, 00:19:39.536 "base_bdevs_list": [ 00:19:39.536 { 00:19:39.536 "name": "BaseBdev1", 00:19:39.536 "uuid": "14029b6f-7f1a-53ff-a885-35799cfacff1", 00:19:39.536 "is_configured": true, 00:19:39.536 "data_offset": 2048, 00:19:39.536 "data_size": 63488 00:19:39.536 }, 00:19:39.536 { 00:19:39.536 "name": "BaseBdev2", 00:19:39.536 "uuid": "0d0a1897-3df3-5ace-b7c6-c6c0afb54f94", 00:19:39.536 "is_configured": true, 00:19:39.536 "data_offset": 2048, 00:19:39.536 "data_size": 63488 00:19:39.536 }, 00:19:39.536 { 00:19:39.536 "name": "BaseBdev3", 00:19:39.536 "uuid": "1cc7aa22-0f8e-556b-a1f8-2861350a0ce6", 00:19:39.536 "is_configured": true, 00:19:39.536 "data_offset": 2048, 00:19:39.536 "data_size": 63488 00:19:39.536 } 00:19:39.536 ] 00:19:39.536 }' 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.536 15:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.104 15:13:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:40.104 [2024-07-23 15:13:35.415215] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.104 [2024-07-23 15:13:35.415270] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.104 [2024-07-23 15:13:35.417760] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.104 [2024-07-23 15:13:35.417824] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.104 [2024-07-23 15:13:35.417936] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.104 [2024-07-23 15:13:35.417952] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:19:40.104 0 00:19:40.104 15:13:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 98548 00:19:40.104 15:13:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 98548 ']' 00:19:40.104 15:13:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 98548 00:19:40.104 15:13:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:19:40.104 15:13:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.104 15:13:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98548 00:19:40.104 killing process with pid 98548 00:19:40.104 15:13:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:40.104 15:13:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:40.104 15:13:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98548' 00:19:40.104 15:13:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 98548 00:19:40.104 [2024-07-23 15:13:35.463659] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:40.104 15:13:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 98548 00:19:40.104 [2024-07-23 15:13:35.489160] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:40.362 15:13:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ianLjAcpYv 00:19:40.362 15:13:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:40.362 15:13:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:40.362 15:13:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:19:40.362 15:13:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:19:40.362 15:13:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:40.362 15:13:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:40.362 15:13:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:40.362 00:19:40.362 real 0m5.632s 00:19:40.362 user 0m8.379s 00:19:40.362 sys 0m1.054s 00:19:40.362 15:13:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:40.362 ************************************ 00:19:40.362 END TEST raid_read_error_test 00:19:40.362 15:13:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.362 ************************************ 00:19:40.362 15:13:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:40.362 15:13:35 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:19:40.362 15:13:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:40.362 15:13:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:40.362 15:13:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.621 ************************************ 00:19:40.621 START TEST raid_write_error_test 00:19:40.621 
************************************ 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:40.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
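The raid_write_error_test run that follows drives bdevperf against a raid1 bdev whose base devices are wrapped in error-injection bdevs. Below is a minimal sketch of the setup sequence implied by this trace, using only RPCs that appear in the log itself; the socket path, malloc sizes, and bdev names are taken from this environment and are not general defaults, and the ordering is an approximation of what the script does rather than its literal code.

    # Shorthand for the RPC client used throughout this trace (assumption: same paths as above).
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Build one injectable base bdev: malloc -> error bdev (exposed as EE_*) -> passthru (BaseBdev1).
    $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $RPC bdev_error_create BaseBdev1_malloc
    $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # ...repeated for BaseBdev2 and BaseBdev3, then assembled with a superblock (-s).
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

    # Inject a write failure on the first base bdev, then kick off the bdevperf workload.
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests

In the actual run, bdevperf is started first in wait mode (the -z flag visible in the command line below) so that the RPC socket exists before these calls are made.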
00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.atyZlfucuC 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=98710 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 98710 /var/tmp/spdk-raid.sock 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 98710 ']' 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.621 15:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.621 [2024-07-23 15:13:35.874389] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:19:40.622 [2024-07-23 15:13:35.874765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98710 ] 00:19:40.622 [2024-07-23 15:13:36.027004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.881 [2024-07-23 15:13:36.073767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.881 [2024-07-23 15:13:36.119519] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.447 15:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.447 15:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:41.447 15:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:41.447 15:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:41.704 BaseBdev1_malloc 00:19:41.704 15:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:41.704 true 00:19:41.704 15:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:41.963 [2024-07-23 15:13:37.263380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:41.963 [2024-07-23 15:13:37.263467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.963 [2024-07-23 15:13:37.263500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:19:41.963 [2024-07-23 15:13:37.263521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:19:41.963 [2024-07-23 15:13:37.266150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.963 [2024-07-23 15:13:37.266193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:41.963 BaseBdev1 00:19:41.963 15:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:41.963 15:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:42.222 BaseBdev2_malloc 00:19:42.222 15:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:42.222 true 00:19:42.222 15:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:42.480 [2024-07-23 15:13:37.789015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:42.480 [2024-07-23 15:13:37.789273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.480 [2024-07-23 15:13:37.789338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:19:42.480 [2024-07-23 15:13:37.789422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.480 [2024-07-23 15:13:37.791984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.480 [2024-07-23 15:13:37.792130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:42.480 BaseBdev2 00:19:42.480 15:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:42.480 15:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:42.739 BaseBdev3_malloc 00:19:42.739 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:42.997 true 00:19:42.997 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:42.997 [2024-07-23 15:13:38.340326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:42.997 [2024-07-23 15:13:38.340403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.997 [2024-07-23 15:13:38.340433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:19:42.997 [2024-07-23 15:13:38.340445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.997 [2024-07-23 15:13:38.343122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.997 [2024-07-23 15:13:38.343164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:42.997 BaseBdev3 00:19:42.997 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n 
raid_bdev1 -s 00:19:43.257 [2024-07-23 15:13:38.512418] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:43.257 [2024-07-23 15:13:38.514642] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:43.257 [2024-07-23 15:13:38.514725] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:43.257 [2024-07-23 15:13:38.514948] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:19:43.257 [2024-07-23 15:13:38.514969] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:43.257 [2024-07-23 15:13:38.515071] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:19:43.257 [2024-07-23 15:13:38.515434] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:19:43.257 [2024-07-23 15:13:38.515447] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:19:43.257 [2024-07-23 15:13:38.515578] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.257 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:43.257 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:43.257 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:43.257 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:43.257 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:43.257 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:43.257 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.257 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.257 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.257 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.257 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.257 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.516 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:43.516 "name": "raid_bdev1", 00:19:43.516 "uuid": "cd8ff870-ded5-40b6-8aaa-5caef8751b4f", 00:19:43.516 "strip_size_kb": 0, 00:19:43.516 "state": "online", 00:19:43.516 "raid_level": "raid1", 00:19:43.516 "superblock": true, 00:19:43.516 "num_base_bdevs": 3, 00:19:43.516 "num_base_bdevs_discovered": 3, 00:19:43.516 "num_base_bdevs_operational": 3, 00:19:43.516 "base_bdevs_list": [ 00:19:43.516 { 00:19:43.516 "name": "BaseBdev1", 00:19:43.516 "uuid": "e9a7cc60-0a7e-53ec-98b7-5df4c0a9eb48", 00:19:43.516 "is_configured": true, 00:19:43.516 "data_offset": 2048, 00:19:43.516 "data_size": 63488 00:19:43.516 }, 00:19:43.516 { 00:19:43.516 "name": "BaseBdev2", 00:19:43.516 "uuid": "21ec0cd6-3db2-5d4f-ad24-a86f2985d32c", 00:19:43.516 "is_configured": true, 00:19:43.516 "data_offset": 2048, 00:19:43.516 "data_size": 63488 00:19:43.516 }, 00:19:43.516 { 
00:19:43.516 "name": "BaseBdev3", 00:19:43.516 "uuid": "1a29b101-f34d-5bbf-8381-aef4787c0951", 00:19:43.516 "is_configured": true, 00:19:43.516 "data_offset": 2048, 00:19:43.516 "data_size": 63488 00:19:43.516 } 00:19:43.516 ] 00:19:43.516 }' 00:19:43.516 15:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:43.516 15:13:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.804 15:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:43.804 15:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:43.804 [2024-07-23 15:13:39.213015] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:19:44.762 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:45.021 [2024-07-23 15:13:40.316632] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:19:45.021 [2024-07-23 15:13:40.316717] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.021 [2024-07-23 15:13:40.316950] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d0000021f0 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.021 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.281 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:45.281 "name": "raid_bdev1", 00:19:45.281 "uuid": "cd8ff870-ded5-40b6-8aaa-5caef8751b4f", 00:19:45.281 "strip_size_kb": 
0, 00:19:45.281 "state": "online", 00:19:45.281 "raid_level": "raid1", 00:19:45.281 "superblock": true, 00:19:45.281 "num_base_bdevs": 3, 00:19:45.281 "num_base_bdevs_discovered": 2, 00:19:45.281 "num_base_bdevs_operational": 2, 00:19:45.281 "base_bdevs_list": [ 00:19:45.281 { 00:19:45.281 "name": null, 00:19:45.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.281 "is_configured": false, 00:19:45.281 "data_offset": 2048, 00:19:45.281 "data_size": 63488 00:19:45.281 }, 00:19:45.281 { 00:19:45.281 "name": "BaseBdev2", 00:19:45.281 "uuid": "21ec0cd6-3db2-5d4f-ad24-a86f2985d32c", 00:19:45.281 "is_configured": true, 00:19:45.281 "data_offset": 2048, 00:19:45.281 "data_size": 63488 00:19:45.281 }, 00:19:45.281 { 00:19:45.281 "name": "BaseBdev3", 00:19:45.281 "uuid": "1a29b101-f34d-5bbf-8381-aef4787c0951", 00:19:45.281 "is_configured": true, 00:19:45.281 "data_offset": 2048, 00:19:45.281 "data_size": 63488 00:19:45.281 } 00:19:45.281 ] 00:19:45.281 }' 00:19:45.281 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:45.281 15:13:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.540 15:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:45.800 [2024-07-23 15:13:41.041669] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:45.800 [2024-07-23 15:13:41.041958] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:45.800 [2024-07-23 15:13:41.044451] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:45.800 [2024-07-23 15:13:41.044624] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.800 [2024-07-23 15:13:41.044747] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:45.800 [2024-07-23 15:13:41.044875] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:19:45.800 0 00:19:45.800 15:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 98710 00:19:45.800 15:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 98710 ']' 00:19:45.800 15:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 98710 00:19:45.800 15:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:19:45.800 15:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.800 15:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98710 00:19:45.800 killing process with pid 98710 00:19:45.800 15:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:45.800 15:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:45.800 15:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98710' 00:19:45.800 15:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 98710 00:19:45.800 15:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 98710 00:19:45.800 [2024-07-23 15:13:41.090693] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:45.800 [2024-07-23 
15:13:41.115970] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:46.058 15:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:46.058 15:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.atyZlfucuC 00:19:46.058 15:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:46.058 15:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:19:46.058 15:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:19:46.058 15:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:46.058 15:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:46.058 15:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:46.058 00:19:46.058 real 0m5.577s 00:19:46.058 user 0m8.316s 00:19:46.058 sys 0m1.004s 00:19:46.058 15:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:46.058 ************************************ 00:19:46.058 END TEST raid_write_error_test 00:19:46.058 ************************************ 00:19:46.058 15:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.058 15:13:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:46.058 15:13:41 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:19:46.058 15:13:41 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:19:46.058 15:13:41 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:19:46.058 15:13:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:46.058 15:13:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.059 15:13:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:46.059 ************************************ 00:19:46.059 START TEST raid_state_function_test 00:19:46.059 ************************************ 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:46.059 15:13:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=98873 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:46.059 Process raid pid: 98873 00:19:46.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 98873' 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 98873 /var/tmp/spdk-raid.sock 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 98873 ']' 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.059 15:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.318 [2024-07-23 15:13:41.491906] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:19:46.318 [2024-07-23 15:13:41.492047] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.318 [2024-07-23 15:13:41.631964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.318 [2024-07-23 15:13:41.680948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.318 [2024-07-23 15:13:41.726765] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.577 15:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.577 15:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:19:46.578 15:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:46.837 [2024-07-23 15:13:42.017024] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:46.837 [2024-07-23 15:13:42.017275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:46.837 [2024-07-23 15:13:42.017374] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:46.837 [2024-07-23 15:13:42.017420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:46.837 [2024-07-23 15:13:42.017454] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:46.837 [2024-07-23 15:13:42.017487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:46.837 [2024-07-23 15:13:42.017569] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:46.837 [2024-07-23 15:13:42.017614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:46.837 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:46.837 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:46.837 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:46.837 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:46.837 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:46.837 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:46.837 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.837 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.837 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.837 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.837 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.837 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.096 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:47.096 "name": "Existed_Raid", 00:19:47.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.096 "strip_size_kb": 64, 00:19:47.096 "state": "configuring", 00:19:47.096 "raid_level": "raid0", 00:19:47.096 "superblock": false, 00:19:47.096 "num_base_bdevs": 4, 00:19:47.096 "num_base_bdevs_discovered": 0, 00:19:47.096 "num_base_bdevs_operational": 4, 00:19:47.096 "base_bdevs_list": [ 00:19:47.096 { 00:19:47.096 "name": "BaseBdev1", 00:19:47.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.096 "is_configured": false, 00:19:47.096 "data_offset": 0, 00:19:47.096 "data_size": 0 00:19:47.096 }, 00:19:47.096 { 00:19:47.096 "name": "BaseBdev2", 00:19:47.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.096 "is_configured": false, 00:19:47.096 "data_offset": 0, 00:19:47.096 "data_size": 0 00:19:47.096 }, 00:19:47.096 { 00:19:47.096 "name": "BaseBdev3", 00:19:47.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.096 "is_configured": false, 00:19:47.096 "data_offset": 0, 00:19:47.096 "data_size": 0 00:19:47.096 }, 00:19:47.096 { 00:19:47.096 "name": "BaseBdev4", 00:19:47.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.096 "is_configured": false, 00:19:47.096 "data_offset": 0, 00:19:47.096 "data_size": 0 00:19:47.096 } 00:19:47.096 ] 00:19:47.096 }' 00:19:47.096 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:47.096 15:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.355 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:47.613 [2024-07-23 15:13:42.885051] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:47.613 [2024-07-23 15:13:42.885107] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:19:47.613 15:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:47.872 [2024-07-23 15:13:43.153140] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:47.872 [2024-07-23 15:13:43.153350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:47.872 [2024-07-23 15:13:43.153457] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:47.872 [2024-07-23 15:13:43.153503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:47.872 [2024-07-23 15:13:43.153581] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:47.872 [2024-07-23 15:13:43.153623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:47.872 [2024-07-23 15:13:43.153633] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:47.872 [2024-07-23 15:13:43.153646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:47.872 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:48.133 [2024-07-23 15:13:43.338911] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:48.133 BaseBdev1 00:19:48.133 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:48.133 15:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:48.133 15:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:48.133 15:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:48.133 15:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:48.133 15:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:48.133 15:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:48.133 15:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:48.393 [ 00:19:48.393 { 00:19:48.393 "name": "BaseBdev1", 00:19:48.393 "aliases": [ 00:19:48.393 "dd7854a6-8caf-46c0-9926-22b3492df199" 00:19:48.393 ], 00:19:48.393 "product_name": "Malloc disk", 00:19:48.393 "block_size": 512, 00:19:48.393 "num_blocks": 65536, 00:19:48.393 "uuid": "dd7854a6-8caf-46c0-9926-22b3492df199", 00:19:48.393 "assigned_rate_limits": { 00:19:48.393 "rw_ios_per_sec": 0, 00:19:48.393 "rw_mbytes_per_sec": 0, 00:19:48.393 "r_mbytes_per_sec": 0, 00:19:48.393 "w_mbytes_per_sec": 0 00:19:48.393 }, 00:19:48.393 "claimed": true, 00:19:48.393 "claim_type": "exclusive_write", 00:19:48.393 "zoned": false, 00:19:48.393 "supported_io_types": { 00:19:48.393 "read": true, 00:19:48.393 "write": true, 00:19:48.393 "unmap": true, 00:19:48.393 "flush": true, 00:19:48.393 "reset": true, 00:19:48.393 "nvme_admin": false, 00:19:48.393 "nvme_io": false, 00:19:48.393 "nvme_io_md": false, 00:19:48.393 "write_zeroes": true, 00:19:48.393 "zcopy": true, 00:19:48.393 "get_zone_info": false, 00:19:48.393 "zone_management": false, 00:19:48.393 "zone_append": false, 00:19:48.393 "compare": false, 00:19:48.393 "compare_and_write": false, 00:19:48.393 "abort": true, 00:19:48.393 "seek_hole": false, 00:19:48.393 "seek_data": false, 00:19:48.393 "copy": true, 00:19:48.393 "nvme_iov_md": false 00:19:48.393 }, 00:19:48.393 "memory_domains": [ 00:19:48.393 { 00:19:48.393 "dma_device_id": "system", 00:19:48.393 "dma_device_type": 1 00:19:48.393 }, 00:19:48.393 { 00:19:48.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.393 "dma_device_type": 2 00:19:48.393 } 00:19:48.393 ], 00:19:48.393 "driver_specific": {} 00:19:48.393 } 00:19:48.393 ] 00:19:48.393 15:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:48.393 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:48.393 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:48.393 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:48.393 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:19:48.393 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:48.393 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:48.393 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.393 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.393 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.393 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.393 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.393 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.652 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.652 "name": "Existed_Raid", 00:19:48.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.652 "strip_size_kb": 64, 00:19:48.652 "state": "configuring", 00:19:48.652 "raid_level": "raid0", 00:19:48.652 "superblock": false, 00:19:48.652 "num_base_bdevs": 4, 00:19:48.652 "num_base_bdevs_discovered": 1, 00:19:48.652 "num_base_bdevs_operational": 4, 00:19:48.652 "base_bdevs_list": [ 00:19:48.652 { 00:19:48.652 "name": "BaseBdev1", 00:19:48.652 "uuid": "dd7854a6-8caf-46c0-9926-22b3492df199", 00:19:48.652 "is_configured": true, 00:19:48.652 "data_offset": 0, 00:19:48.652 "data_size": 65536 00:19:48.652 }, 00:19:48.652 { 00:19:48.652 "name": "BaseBdev2", 00:19:48.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.652 "is_configured": false, 00:19:48.652 "data_offset": 0, 00:19:48.652 "data_size": 0 00:19:48.652 }, 00:19:48.652 { 00:19:48.652 "name": "BaseBdev3", 00:19:48.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.652 "is_configured": false, 00:19:48.652 "data_offset": 0, 00:19:48.652 "data_size": 0 00:19:48.652 }, 00:19:48.652 { 00:19:48.652 "name": "BaseBdev4", 00:19:48.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.652 "is_configured": false, 00:19:48.652 "data_offset": 0, 00:19:48.652 "data_size": 0 00:19:48.652 } 00:19:48.652 ] 00:19:48.652 }' 00:19:48.652 15:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.652 15:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.911 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:49.170 [2024-07-23 15:13:44.415213] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:49.170 [2024-07-23 15:13:44.415469] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:19:49.170 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:49.170 [2024-07-23 15:13:44.595330] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:49.170 [2024-07-23 15:13:44.597714] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:19:49.170 [2024-07-23 15:13:44.597909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:49.170 [2024-07-23 15:13:44.598015] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:49.170 [2024-07-23 15:13:44.598059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:49.170 [2024-07-23 15:13:44.598130] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:49.170 [2024-07-23 15:13:44.598152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:49.430 "name": "Existed_Raid", 00:19:49.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.430 "strip_size_kb": 64, 00:19:49.430 "state": "configuring", 00:19:49.430 "raid_level": "raid0", 00:19:49.430 "superblock": false, 00:19:49.430 "num_base_bdevs": 4, 00:19:49.430 "num_base_bdevs_discovered": 1, 00:19:49.430 "num_base_bdevs_operational": 4, 00:19:49.430 "base_bdevs_list": [ 00:19:49.430 { 00:19:49.430 "name": "BaseBdev1", 00:19:49.430 "uuid": "dd7854a6-8caf-46c0-9926-22b3492df199", 00:19:49.430 "is_configured": true, 00:19:49.430 "data_offset": 0, 00:19:49.430 "data_size": 65536 00:19:49.430 }, 00:19:49.430 { 00:19:49.430 "name": "BaseBdev2", 00:19:49.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.430 "is_configured": false, 00:19:49.430 "data_offset": 0, 00:19:49.430 "data_size": 0 00:19:49.430 }, 00:19:49.430 { 00:19:49.430 "name": "BaseBdev3", 00:19:49.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.430 "is_configured": false, 00:19:49.430 "data_offset": 0, 00:19:49.430 "data_size": 0 00:19:49.430 }, 
00:19:49.430 { 00:19:49.430 "name": "BaseBdev4", 00:19:49.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.430 "is_configured": false, 00:19:49.430 "data_offset": 0, 00:19:49.430 "data_size": 0 00:19:49.430 } 00:19:49.430 ] 00:19:49.430 }' 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:49.430 15:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.998 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:49.998 [2024-07-23 15:13:45.309719] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:49.998 BaseBdev2 00:19:49.998 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:49.998 15:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:49.998 15:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:49.998 15:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:49.998 15:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:49.998 15:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:49.998 15:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:50.258 15:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:50.516 [ 00:19:50.516 { 00:19:50.516 "name": "BaseBdev2", 00:19:50.516 "aliases": [ 00:19:50.516 "c0d710be-7e70-43dd-b60e-a18c18eb9f9e" 00:19:50.516 ], 00:19:50.516 "product_name": "Malloc disk", 00:19:50.516 "block_size": 512, 00:19:50.516 "num_blocks": 65536, 00:19:50.516 "uuid": "c0d710be-7e70-43dd-b60e-a18c18eb9f9e", 00:19:50.516 "assigned_rate_limits": { 00:19:50.516 "rw_ios_per_sec": 0, 00:19:50.516 "rw_mbytes_per_sec": 0, 00:19:50.516 "r_mbytes_per_sec": 0, 00:19:50.516 "w_mbytes_per_sec": 0 00:19:50.516 }, 00:19:50.516 "claimed": true, 00:19:50.516 "claim_type": "exclusive_write", 00:19:50.516 "zoned": false, 00:19:50.516 "supported_io_types": { 00:19:50.516 "read": true, 00:19:50.516 "write": true, 00:19:50.516 "unmap": true, 00:19:50.516 "flush": true, 00:19:50.516 "reset": true, 00:19:50.516 "nvme_admin": false, 00:19:50.516 "nvme_io": false, 00:19:50.516 "nvme_io_md": false, 00:19:50.516 "write_zeroes": true, 00:19:50.516 "zcopy": true, 00:19:50.516 "get_zone_info": false, 00:19:50.516 "zone_management": false, 00:19:50.516 "zone_append": false, 00:19:50.516 "compare": false, 00:19:50.516 "compare_and_write": false, 00:19:50.516 "abort": true, 00:19:50.516 "seek_hole": false, 00:19:50.516 "seek_data": false, 00:19:50.516 "copy": true, 00:19:50.516 "nvme_iov_md": false 00:19:50.516 }, 00:19:50.516 "memory_domains": [ 00:19:50.516 { 00:19:50.516 "dma_device_id": "system", 00:19:50.516 "dma_device_type": 1 00:19:50.516 }, 00:19:50.516 { 00:19:50.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.516 "dma_device_type": 2 00:19:50.516 } 00:19:50.516 ], 00:19:50.516 "driver_specific": {} 00:19:50.516 } 00:19:50.516 ] 00:19:50.516 15:13:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:50.516 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:50.516 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:50.516 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:50.516 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:50.516 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:50.516 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:50.516 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:50.516 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:50.516 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:50.516 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:50.516 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:50.516 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:50.516 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.516 15:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.775 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:50.775 "name": "Existed_Raid", 00:19:50.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.775 "strip_size_kb": 64, 00:19:50.775 "state": "configuring", 00:19:50.775 "raid_level": "raid0", 00:19:50.775 "superblock": false, 00:19:50.775 "num_base_bdevs": 4, 00:19:50.775 "num_base_bdevs_discovered": 2, 00:19:50.775 "num_base_bdevs_operational": 4, 00:19:50.775 "base_bdevs_list": [ 00:19:50.775 { 00:19:50.775 "name": "BaseBdev1", 00:19:50.775 "uuid": "dd7854a6-8caf-46c0-9926-22b3492df199", 00:19:50.775 "is_configured": true, 00:19:50.775 "data_offset": 0, 00:19:50.775 "data_size": 65536 00:19:50.775 }, 00:19:50.775 { 00:19:50.775 "name": "BaseBdev2", 00:19:50.775 "uuid": "c0d710be-7e70-43dd-b60e-a18c18eb9f9e", 00:19:50.775 "is_configured": true, 00:19:50.775 "data_offset": 0, 00:19:50.775 "data_size": 65536 00:19:50.775 }, 00:19:50.775 { 00:19:50.775 "name": "BaseBdev3", 00:19:50.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.775 "is_configured": false, 00:19:50.775 "data_offset": 0, 00:19:50.775 "data_size": 0 00:19:50.775 }, 00:19:50.775 { 00:19:50.775 "name": "BaseBdev4", 00:19:50.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.775 "is_configured": false, 00:19:50.775 "data_offset": 0, 00:19:50.775 "data_size": 0 00:19:50.775 } 00:19:50.775 ] 00:19:50.775 }' 00:19:50.775 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:50.775 15:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.036 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:19:51.296 [2024-07-23 15:13:46.521531] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:51.296 BaseBdev3 00:19:51.296 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:51.296 15:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:51.296 15:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:51.296 15:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:51.296 15:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:51.296 15:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:51.296 15:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:51.296 15:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:51.553 [ 00:19:51.553 { 00:19:51.553 "name": "BaseBdev3", 00:19:51.553 "aliases": [ 00:19:51.553 "3c416657-426d-45cd-85e7-8c2f3dfc7112" 00:19:51.553 ], 00:19:51.553 "product_name": "Malloc disk", 00:19:51.553 "block_size": 512, 00:19:51.553 "num_blocks": 65536, 00:19:51.553 "uuid": "3c416657-426d-45cd-85e7-8c2f3dfc7112", 00:19:51.554 "assigned_rate_limits": { 00:19:51.554 "rw_ios_per_sec": 0, 00:19:51.554 "rw_mbytes_per_sec": 0, 00:19:51.554 "r_mbytes_per_sec": 0, 00:19:51.554 "w_mbytes_per_sec": 0 00:19:51.554 }, 00:19:51.554 "claimed": true, 00:19:51.554 "claim_type": "exclusive_write", 00:19:51.554 "zoned": false, 00:19:51.554 "supported_io_types": { 00:19:51.554 "read": true, 00:19:51.554 "write": true, 00:19:51.554 "unmap": true, 00:19:51.554 "flush": true, 00:19:51.554 "reset": true, 00:19:51.554 "nvme_admin": false, 00:19:51.554 "nvme_io": false, 00:19:51.554 "nvme_io_md": false, 00:19:51.554 "write_zeroes": true, 00:19:51.554 "zcopy": true, 00:19:51.554 "get_zone_info": false, 00:19:51.554 "zone_management": false, 00:19:51.554 "zone_append": false, 00:19:51.554 "compare": false, 00:19:51.554 "compare_and_write": false, 00:19:51.554 "abort": true, 00:19:51.554 "seek_hole": false, 00:19:51.554 "seek_data": false, 00:19:51.554 "copy": true, 00:19:51.554 "nvme_iov_md": false 00:19:51.554 }, 00:19:51.554 "memory_domains": [ 00:19:51.554 { 00:19:51.554 "dma_device_id": "system", 00:19:51.554 "dma_device_type": 1 00:19:51.554 }, 00:19:51.554 { 00:19:51.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.554 "dma_device_type": 2 00:19:51.554 } 00:19:51.554 ], 00:19:51.554 "driver_specific": {} 00:19:51.554 } 00:19:51.554 ] 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.554 15:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.812 15:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.812 "name": "Existed_Raid", 00:19:51.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.812 "strip_size_kb": 64, 00:19:51.812 "state": "configuring", 00:19:51.812 "raid_level": "raid0", 00:19:51.812 "superblock": false, 00:19:51.812 "num_base_bdevs": 4, 00:19:51.812 "num_base_bdevs_discovered": 3, 00:19:51.812 "num_base_bdevs_operational": 4, 00:19:51.812 "base_bdevs_list": [ 00:19:51.812 { 00:19:51.812 "name": "BaseBdev1", 00:19:51.812 "uuid": "dd7854a6-8caf-46c0-9926-22b3492df199", 00:19:51.812 "is_configured": true, 00:19:51.812 "data_offset": 0, 00:19:51.812 "data_size": 65536 00:19:51.812 }, 00:19:51.812 { 00:19:51.812 "name": "BaseBdev2", 00:19:51.812 "uuid": "c0d710be-7e70-43dd-b60e-a18c18eb9f9e", 00:19:51.812 "is_configured": true, 00:19:51.812 "data_offset": 0, 00:19:51.812 "data_size": 65536 00:19:51.812 }, 00:19:51.812 { 00:19:51.812 "name": "BaseBdev3", 00:19:51.812 "uuid": "3c416657-426d-45cd-85e7-8c2f3dfc7112", 00:19:51.812 "is_configured": true, 00:19:51.812 "data_offset": 0, 00:19:51.812 "data_size": 65536 00:19:51.812 }, 00:19:51.812 { 00:19:51.812 "name": "BaseBdev4", 00:19:51.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.812 "is_configured": false, 00:19:51.812 "data_offset": 0, 00:19:51.812 "data_size": 0 00:19:51.812 } 00:19:51.812 ] 00:19:51.812 }' 00:19:51.812 15:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.812 15:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.071 15:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:52.330 [2024-07-23 15:13:47.553264] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:52.330 [2024-07-23 15:13:47.553474] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:19:52.330 [2024-07-23 15:13:47.553520] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:52.330 [2024-07-23 15:13:47.553725] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:19:52.330 [2024-07-23 15:13:47.554181] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:19:52.330 [2024-07-23 15:13:47.554349] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:19:52.330 [2024-07-23 15:13:47.554661] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.330 BaseBdev4 00:19:52.330 15:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:19:52.330 15:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:19:52.330 15:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:52.330 15:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:52.330 15:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:52.330 15:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:52.330 15:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:52.589 15:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:52.589 [ 00:19:52.589 { 00:19:52.589 "name": "BaseBdev4", 00:19:52.589 "aliases": [ 00:19:52.589 "74fcfe57-5111-4aa6-bad0-42a6f5e9a619" 00:19:52.589 ], 00:19:52.589 "product_name": "Malloc disk", 00:19:52.589 "block_size": 512, 00:19:52.589 "num_blocks": 65536, 00:19:52.589 "uuid": "74fcfe57-5111-4aa6-bad0-42a6f5e9a619", 00:19:52.589 "assigned_rate_limits": { 00:19:52.589 "rw_ios_per_sec": 0, 00:19:52.589 "rw_mbytes_per_sec": 0, 00:19:52.589 "r_mbytes_per_sec": 0, 00:19:52.589 "w_mbytes_per_sec": 0 00:19:52.589 }, 00:19:52.589 "claimed": true, 00:19:52.589 "claim_type": "exclusive_write", 00:19:52.589 "zoned": false, 00:19:52.589 "supported_io_types": { 00:19:52.589 "read": true, 00:19:52.589 "write": true, 00:19:52.589 "unmap": true, 00:19:52.589 "flush": true, 00:19:52.589 "reset": true, 00:19:52.589 "nvme_admin": false, 00:19:52.589 "nvme_io": false, 00:19:52.589 "nvme_io_md": false, 00:19:52.589 "write_zeroes": true, 00:19:52.590 "zcopy": true, 00:19:52.590 "get_zone_info": false, 00:19:52.590 "zone_management": false, 00:19:52.590 "zone_append": false, 00:19:52.590 "compare": false, 00:19:52.590 "compare_and_write": false, 00:19:52.590 "abort": true, 00:19:52.590 "seek_hole": false, 00:19:52.590 "seek_data": false, 00:19:52.590 "copy": true, 00:19:52.590 "nvme_iov_md": false 00:19:52.590 }, 00:19:52.590 "memory_domains": [ 00:19:52.590 { 00:19:52.590 "dma_device_id": "system", 00:19:52.590 "dma_device_type": 1 00:19:52.590 }, 00:19:52.590 { 00:19:52.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.590 "dma_device_type": 2 00:19:52.590 } 00:19:52.590 ], 00:19:52.590 "driver_specific": {} 00:19:52.590 } 00:19:52.590 ] 00:19:52.590 15:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:52.590 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:52.590 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:52.590 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 
4 00:19:52.590 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:52.590 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:52.590 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:52.590 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:52.590 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:52.590 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:52.590 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:52.590 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:52.590 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:52.849 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.849 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.849 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:52.849 "name": "Existed_Raid", 00:19:52.849 "uuid": "d70e4be0-4dcd-4bc4-8cf0-5f0ea6afa74c", 00:19:52.849 "strip_size_kb": 64, 00:19:52.849 "state": "online", 00:19:52.849 "raid_level": "raid0", 00:19:52.849 "superblock": false, 00:19:52.849 "num_base_bdevs": 4, 00:19:52.849 "num_base_bdevs_discovered": 4, 00:19:52.849 "num_base_bdevs_operational": 4, 00:19:52.849 "base_bdevs_list": [ 00:19:52.849 { 00:19:52.849 "name": "BaseBdev1", 00:19:52.849 "uuid": "dd7854a6-8caf-46c0-9926-22b3492df199", 00:19:52.849 "is_configured": true, 00:19:52.849 "data_offset": 0, 00:19:52.849 "data_size": 65536 00:19:52.849 }, 00:19:52.849 { 00:19:52.849 "name": "BaseBdev2", 00:19:52.849 "uuid": "c0d710be-7e70-43dd-b60e-a18c18eb9f9e", 00:19:52.849 "is_configured": true, 00:19:52.849 "data_offset": 0, 00:19:52.849 "data_size": 65536 00:19:52.849 }, 00:19:52.849 { 00:19:52.849 "name": "BaseBdev3", 00:19:52.849 "uuid": "3c416657-426d-45cd-85e7-8c2f3dfc7112", 00:19:52.849 "is_configured": true, 00:19:52.849 "data_offset": 0, 00:19:52.849 "data_size": 65536 00:19:52.849 }, 00:19:52.849 { 00:19:52.849 "name": "BaseBdev4", 00:19:52.849 "uuid": "74fcfe57-5111-4aa6-bad0-42a6f5e9a619", 00:19:52.849 "is_configured": true, 00:19:52.849 "data_offset": 0, 00:19:52.849 "data_size": 65536 00:19:52.849 } 00:19:52.849 ] 00:19:52.849 }' 00:19:52.849 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:52.849 15:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.415 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:53.415 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:53.416 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:53.416 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:53.416 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:53.416 15:13:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:53.416 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:53.416 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:53.416 [2024-07-23 15:13:48.773986] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:53.416 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:53.416 "name": "Existed_Raid", 00:19:53.416 "aliases": [ 00:19:53.416 "d70e4be0-4dcd-4bc4-8cf0-5f0ea6afa74c" 00:19:53.416 ], 00:19:53.416 "product_name": "Raid Volume", 00:19:53.416 "block_size": 512, 00:19:53.416 "num_blocks": 262144, 00:19:53.416 "uuid": "d70e4be0-4dcd-4bc4-8cf0-5f0ea6afa74c", 00:19:53.416 "assigned_rate_limits": { 00:19:53.416 "rw_ios_per_sec": 0, 00:19:53.416 "rw_mbytes_per_sec": 0, 00:19:53.416 "r_mbytes_per_sec": 0, 00:19:53.416 "w_mbytes_per_sec": 0 00:19:53.416 }, 00:19:53.416 "claimed": false, 00:19:53.416 "zoned": false, 00:19:53.416 "supported_io_types": { 00:19:53.416 "read": true, 00:19:53.416 "write": true, 00:19:53.416 "unmap": true, 00:19:53.416 "flush": true, 00:19:53.416 "reset": true, 00:19:53.416 "nvme_admin": false, 00:19:53.416 "nvme_io": false, 00:19:53.416 "nvme_io_md": false, 00:19:53.416 "write_zeroes": true, 00:19:53.416 "zcopy": false, 00:19:53.416 "get_zone_info": false, 00:19:53.416 "zone_management": false, 00:19:53.416 "zone_append": false, 00:19:53.416 "compare": false, 00:19:53.416 "compare_and_write": false, 00:19:53.416 "abort": false, 00:19:53.416 "seek_hole": false, 00:19:53.416 "seek_data": false, 00:19:53.416 "copy": false, 00:19:53.416 "nvme_iov_md": false 00:19:53.416 }, 00:19:53.416 "memory_domains": [ 00:19:53.416 { 00:19:53.416 "dma_device_id": "system", 00:19:53.416 "dma_device_type": 1 00:19:53.416 }, 00:19:53.416 { 00:19:53.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.416 "dma_device_type": 2 00:19:53.416 }, 00:19:53.416 { 00:19:53.416 "dma_device_id": "system", 00:19:53.416 "dma_device_type": 1 00:19:53.416 }, 00:19:53.416 { 00:19:53.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.416 "dma_device_type": 2 00:19:53.416 }, 00:19:53.416 { 00:19:53.416 "dma_device_id": "system", 00:19:53.416 "dma_device_type": 1 00:19:53.416 }, 00:19:53.416 { 00:19:53.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.416 "dma_device_type": 2 00:19:53.416 }, 00:19:53.416 { 00:19:53.416 "dma_device_id": "system", 00:19:53.416 "dma_device_type": 1 00:19:53.416 }, 00:19:53.416 { 00:19:53.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.416 "dma_device_type": 2 00:19:53.416 } 00:19:53.416 ], 00:19:53.416 "driver_specific": { 00:19:53.416 "raid": { 00:19:53.416 "uuid": "d70e4be0-4dcd-4bc4-8cf0-5f0ea6afa74c", 00:19:53.416 "strip_size_kb": 64, 00:19:53.416 "state": "online", 00:19:53.416 "raid_level": "raid0", 00:19:53.416 "superblock": false, 00:19:53.416 "num_base_bdevs": 4, 00:19:53.416 "num_base_bdevs_discovered": 4, 00:19:53.416 "num_base_bdevs_operational": 4, 00:19:53.416 "base_bdevs_list": [ 00:19:53.416 { 00:19:53.416 "name": "BaseBdev1", 00:19:53.416 "uuid": "dd7854a6-8caf-46c0-9926-22b3492df199", 00:19:53.416 "is_configured": true, 00:19:53.416 "data_offset": 0, 00:19:53.416 "data_size": 65536 00:19:53.416 }, 00:19:53.416 { 00:19:53.416 "name": "BaseBdev2", 00:19:53.416 "uuid": "c0d710be-7e70-43dd-b60e-a18c18eb9f9e", 00:19:53.416 
"is_configured": true, 00:19:53.416 "data_offset": 0, 00:19:53.416 "data_size": 65536 00:19:53.416 }, 00:19:53.416 { 00:19:53.416 "name": "BaseBdev3", 00:19:53.416 "uuid": "3c416657-426d-45cd-85e7-8c2f3dfc7112", 00:19:53.416 "is_configured": true, 00:19:53.416 "data_offset": 0, 00:19:53.416 "data_size": 65536 00:19:53.416 }, 00:19:53.416 { 00:19:53.416 "name": "BaseBdev4", 00:19:53.416 "uuid": "74fcfe57-5111-4aa6-bad0-42a6f5e9a619", 00:19:53.416 "is_configured": true, 00:19:53.416 "data_offset": 0, 00:19:53.416 "data_size": 65536 00:19:53.416 } 00:19:53.416 ] 00:19:53.416 } 00:19:53.416 } 00:19:53.416 }' 00:19:53.416 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:53.416 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:53.416 BaseBdev2 00:19:53.416 BaseBdev3 00:19:53.416 BaseBdev4' 00:19:53.416 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:53.416 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:53.416 15:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:53.674 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:53.674 "name": "BaseBdev1", 00:19:53.674 "aliases": [ 00:19:53.674 "dd7854a6-8caf-46c0-9926-22b3492df199" 00:19:53.674 ], 00:19:53.674 "product_name": "Malloc disk", 00:19:53.674 "block_size": 512, 00:19:53.674 "num_blocks": 65536, 00:19:53.674 "uuid": "dd7854a6-8caf-46c0-9926-22b3492df199", 00:19:53.674 "assigned_rate_limits": { 00:19:53.674 "rw_ios_per_sec": 0, 00:19:53.674 "rw_mbytes_per_sec": 0, 00:19:53.674 "r_mbytes_per_sec": 0, 00:19:53.674 "w_mbytes_per_sec": 0 00:19:53.674 }, 00:19:53.674 "claimed": true, 00:19:53.674 "claim_type": "exclusive_write", 00:19:53.674 "zoned": false, 00:19:53.674 "supported_io_types": { 00:19:53.674 "read": true, 00:19:53.674 "write": true, 00:19:53.674 "unmap": true, 00:19:53.674 "flush": true, 00:19:53.675 "reset": true, 00:19:53.675 "nvme_admin": false, 00:19:53.675 "nvme_io": false, 00:19:53.675 "nvme_io_md": false, 00:19:53.675 "write_zeroes": true, 00:19:53.675 "zcopy": true, 00:19:53.675 "get_zone_info": false, 00:19:53.675 "zone_management": false, 00:19:53.675 "zone_append": false, 00:19:53.675 "compare": false, 00:19:53.675 "compare_and_write": false, 00:19:53.675 "abort": true, 00:19:53.675 "seek_hole": false, 00:19:53.675 "seek_data": false, 00:19:53.675 "copy": true, 00:19:53.675 "nvme_iov_md": false 00:19:53.675 }, 00:19:53.675 "memory_domains": [ 00:19:53.675 { 00:19:53.675 "dma_device_id": "system", 00:19:53.675 "dma_device_type": 1 00:19:53.675 }, 00:19:53.675 { 00:19:53.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.675 "dma_device_type": 2 00:19:53.675 } 00:19:53.675 ], 00:19:53.675 "driver_specific": {} 00:19:53.675 }' 00:19:53.675 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:53.675 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:53.675 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:53.933 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:53.933 15:13:49 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:53.933 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:53.933 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:53.933 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:53.933 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:53.933 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:53.933 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:53.933 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:53.933 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:53.933 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:53.933 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:54.192 "name": "BaseBdev2", 00:19:54.192 "aliases": [ 00:19:54.192 "c0d710be-7e70-43dd-b60e-a18c18eb9f9e" 00:19:54.192 ], 00:19:54.192 "product_name": "Malloc disk", 00:19:54.192 "block_size": 512, 00:19:54.192 "num_blocks": 65536, 00:19:54.192 "uuid": "c0d710be-7e70-43dd-b60e-a18c18eb9f9e", 00:19:54.192 "assigned_rate_limits": { 00:19:54.192 "rw_ios_per_sec": 0, 00:19:54.192 "rw_mbytes_per_sec": 0, 00:19:54.192 "r_mbytes_per_sec": 0, 00:19:54.192 "w_mbytes_per_sec": 0 00:19:54.192 }, 00:19:54.192 "claimed": true, 00:19:54.192 "claim_type": "exclusive_write", 00:19:54.192 "zoned": false, 00:19:54.192 "supported_io_types": { 00:19:54.192 "read": true, 00:19:54.192 "write": true, 00:19:54.192 "unmap": true, 00:19:54.192 "flush": true, 00:19:54.192 "reset": true, 00:19:54.192 "nvme_admin": false, 00:19:54.192 "nvme_io": false, 00:19:54.192 "nvme_io_md": false, 00:19:54.192 "write_zeroes": true, 00:19:54.192 "zcopy": true, 00:19:54.192 "get_zone_info": false, 00:19:54.192 "zone_management": false, 00:19:54.192 "zone_append": false, 00:19:54.192 "compare": false, 00:19:54.192 "compare_and_write": false, 00:19:54.192 "abort": true, 00:19:54.192 "seek_hole": false, 00:19:54.192 "seek_data": false, 00:19:54.192 "copy": true, 00:19:54.192 "nvme_iov_md": false 00:19:54.192 }, 00:19:54.192 "memory_domains": [ 00:19:54.192 { 00:19:54.192 "dma_device_id": "system", 00:19:54.192 "dma_device_type": 1 00:19:54.192 }, 00:19:54.192 { 00:19:54.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.192 "dma_device_type": 2 00:19:54.192 } 00:19:54.192 ], 00:19:54.192 "driver_specific": {} 00:19:54.192 }' 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:54.192 15:13:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:54.192 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:54.451 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:54.451 "name": "BaseBdev3", 00:19:54.451 "aliases": [ 00:19:54.451 "3c416657-426d-45cd-85e7-8c2f3dfc7112" 00:19:54.451 ], 00:19:54.451 "product_name": "Malloc disk", 00:19:54.451 "block_size": 512, 00:19:54.451 "num_blocks": 65536, 00:19:54.451 "uuid": "3c416657-426d-45cd-85e7-8c2f3dfc7112", 00:19:54.451 "assigned_rate_limits": { 00:19:54.451 "rw_ios_per_sec": 0, 00:19:54.451 "rw_mbytes_per_sec": 0, 00:19:54.451 "r_mbytes_per_sec": 0, 00:19:54.451 "w_mbytes_per_sec": 0 00:19:54.451 }, 00:19:54.451 "claimed": true, 00:19:54.451 "claim_type": "exclusive_write", 00:19:54.451 "zoned": false, 00:19:54.451 "supported_io_types": { 00:19:54.451 "read": true, 00:19:54.451 "write": true, 00:19:54.451 "unmap": true, 00:19:54.451 "flush": true, 00:19:54.451 "reset": true, 00:19:54.451 "nvme_admin": false, 00:19:54.451 "nvme_io": false, 00:19:54.451 "nvme_io_md": false, 00:19:54.451 "write_zeroes": true, 00:19:54.451 "zcopy": true, 00:19:54.451 "get_zone_info": false, 00:19:54.451 "zone_management": false, 00:19:54.451 "zone_append": false, 00:19:54.451 "compare": false, 00:19:54.451 "compare_and_write": false, 00:19:54.451 "abort": true, 00:19:54.451 "seek_hole": false, 00:19:54.451 "seek_data": false, 00:19:54.451 "copy": true, 00:19:54.451 "nvme_iov_md": false 00:19:54.451 }, 00:19:54.451 "memory_domains": [ 00:19:54.451 { 00:19:54.451 "dma_device_id": "system", 00:19:54.451 "dma_device_type": 1 00:19:54.451 }, 00:19:54.451 { 00:19:54.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.451 "dma_device_type": 2 00:19:54.451 } 00:19:54.451 ], 00:19:54.451 "driver_specific": {} 00:19:54.451 }' 00:19:54.451 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:54.451 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:54.451 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:54.451 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:54.452 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:54.452 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:54.452 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:54.452 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:54.452 
15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:54.452 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:54.452 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:54.452 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:54.452 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:54.452 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:54.452 15:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:54.710 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:54.710 "name": "BaseBdev4", 00:19:54.710 "aliases": [ 00:19:54.710 "74fcfe57-5111-4aa6-bad0-42a6f5e9a619" 00:19:54.710 ], 00:19:54.710 "product_name": "Malloc disk", 00:19:54.710 "block_size": 512, 00:19:54.710 "num_blocks": 65536, 00:19:54.710 "uuid": "74fcfe57-5111-4aa6-bad0-42a6f5e9a619", 00:19:54.710 "assigned_rate_limits": { 00:19:54.710 "rw_ios_per_sec": 0, 00:19:54.710 "rw_mbytes_per_sec": 0, 00:19:54.710 "r_mbytes_per_sec": 0, 00:19:54.710 "w_mbytes_per_sec": 0 00:19:54.710 }, 00:19:54.710 "claimed": true, 00:19:54.710 "claim_type": "exclusive_write", 00:19:54.710 "zoned": false, 00:19:54.710 "supported_io_types": { 00:19:54.710 "read": true, 00:19:54.710 "write": true, 00:19:54.710 "unmap": true, 00:19:54.710 "flush": true, 00:19:54.710 "reset": true, 00:19:54.710 "nvme_admin": false, 00:19:54.710 "nvme_io": false, 00:19:54.710 "nvme_io_md": false, 00:19:54.710 "write_zeroes": true, 00:19:54.711 "zcopy": true, 00:19:54.711 "get_zone_info": false, 00:19:54.711 "zone_management": false, 00:19:54.711 "zone_append": false, 00:19:54.711 "compare": false, 00:19:54.711 "compare_and_write": false, 00:19:54.711 "abort": true, 00:19:54.711 "seek_hole": false, 00:19:54.711 "seek_data": false, 00:19:54.711 "copy": true, 00:19:54.711 "nvme_iov_md": false 00:19:54.711 }, 00:19:54.711 "memory_domains": [ 00:19:54.711 { 00:19:54.711 "dma_device_id": "system", 00:19:54.711 "dma_device_type": 1 00:19:54.711 }, 00:19:54.711 { 00:19:54.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.711 "dma_device_type": 2 00:19:54.711 } 00:19:54.711 ], 00:19:54.711 "driver_specific": {} 00:19:54.711 }' 00:19:54.711 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:54.711 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:54.711 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:54.711 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:54.969 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:54.969 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:54.969 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:54.969 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:54.969 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:54.969 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:19:54.969 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:54.969 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:54.969 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:55.228 [2024-07-23 15:13:50.442100] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:55.228 [2024-07-23 15:13:50.442157] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:55.228 [2024-07-23 15:13:50.442226] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.228 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.487 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.487 "name": "Existed_Raid", 00:19:55.487 "uuid": "d70e4be0-4dcd-4bc4-8cf0-5f0ea6afa74c", 00:19:55.487 "strip_size_kb": 64, 00:19:55.487 "state": "offline", 00:19:55.487 "raid_level": "raid0", 00:19:55.487 "superblock": false, 00:19:55.487 "num_base_bdevs": 4, 00:19:55.487 "num_base_bdevs_discovered": 3, 00:19:55.487 "num_base_bdevs_operational": 3, 00:19:55.487 "base_bdevs_list": [ 00:19:55.487 { 00:19:55.487 "name": null, 00:19:55.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.487 "is_configured": false, 00:19:55.487 "data_offset": 0, 00:19:55.487 "data_size": 65536 00:19:55.487 }, 00:19:55.487 { 00:19:55.487 "name": "BaseBdev2", 00:19:55.487 "uuid": 
"c0d710be-7e70-43dd-b60e-a18c18eb9f9e", 00:19:55.487 "is_configured": true, 00:19:55.487 "data_offset": 0, 00:19:55.487 "data_size": 65536 00:19:55.487 }, 00:19:55.487 { 00:19:55.487 "name": "BaseBdev3", 00:19:55.487 "uuid": "3c416657-426d-45cd-85e7-8c2f3dfc7112", 00:19:55.487 "is_configured": true, 00:19:55.487 "data_offset": 0, 00:19:55.487 "data_size": 65536 00:19:55.487 }, 00:19:55.487 { 00:19:55.487 "name": "BaseBdev4", 00:19:55.487 "uuid": "74fcfe57-5111-4aa6-bad0-42a6f5e9a619", 00:19:55.487 "is_configured": true, 00:19:55.487 "data_offset": 0, 00:19:55.487 "data_size": 65536 00:19:55.487 } 00:19:55.487 ] 00:19:55.487 }' 00:19:55.487 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.487 15:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.745 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:55.745 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:55.745 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.746 15:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:55.746 15:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:55.746 15:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:55.746 15:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:56.004 [2024-07-23 15:13:51.326814] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:56.004 15:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:56.004 15:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:56.004 15:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.004 15:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:56.263 15:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:56.263 15:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:56.263 15:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:56.522 [2024-07-23 15:13:51.843341] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:56.522 15:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:56.522 15:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:56.522 15:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.522 15:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:56.781 15:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:56.781 15:13:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:56.781 15:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:56.781 [2024-07-23 15:13:52.204116] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:56.781 [2024-07-23 15:13:52.204382] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:19:57.040 15:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:57.040 15:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:57.040 15:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.040 15:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:57.299 15:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:57.299 15:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:57.299 15:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:19:57.299 15:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:57.299 15:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:57.299 15:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:57.558 BaseBdev2 00:19:57.558 15:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:57.558 15:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:57.558 15:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:57.558 15:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:57.558 15:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:57.558 15:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:57.558 15:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:57.558 15:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:57.818 [ 00:19:57.818 { 00:19:57.818 "name": "BaseBdev2", 00:19:57.818 "aliases": [ 00:19:57.818 "c58ef647-c8af-46b0-b680-7e5656151878" 00:19:57.818 ], 00:19:57.818 "product_name": "Malloc disk", 00:19:57.818 "block_size": 512, 00:19:57.818 "num_blocks": 65536, 00:19:57.818 "uuid": "c58ef647-c8af-46b0-b680-7e5656151878", 00:19:57.818 "assigned_rate_limits": { 00:19:57.818 "rw_ios_per_sec": 0, 00:19:57.818 "rw_mbytes_per_sec": 0, 00:19:57.818 "r_mbytes_per_sec": 0, 00:19:57.818 "w_mbytes_per_sec": 0 00:19:57.818 }, 00:19:57.818 "claimed": false, 00:19:57.818 "zoned": false, 00:19:57.819 "supported_io_types": { 00:19:57.819 "read": true, 00:19:57.819 "write": true, 00:19:57.819 "unmap": 
true, 00:19:57.819 "flush": true, 00:19:57.819 "reset": true, 00:19:57.819 "nvme_admin": false, 00:19:57.819 "nvme_io": false, 00:19:57.819 "nvme_io_md": false, 00:19:57.819 "write_zeroes": true, 00:19:57.819 "zcopy": true, 00:19:57.819 "get_zone_info": false, 00:19:57.819 "zone_management": false, 00:19:57.819 "zone_append": false, 00:19:57.819 "compare": false, 00:19:57.819 "compare_and_write": false, 00:19:57.819 "abort": true, 00:19:57.819 "seek_hole": false, 00:19:57.819 "seek_data": false, 00:19:57.819 "copy": true, 00:19:57.819 "nvme_iov_md": false 00:19:57.819 }, 00:19:57.819 "memory_domains": [ 00:19:57.819 { 00:19:57.819 "dma_device_id": "system", 00:19:57.819 "dma_device_type": 1 00:19:57.819 }, 00:19:57.819 { 00:19:57.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.819 "dma_device_type": 2 00:19:57.819 } 00:19:57.819 ], 00:19:57.819 "driver_specific": {} 00:19:57.819 } 00:19:57.819 ] 00:19:57.819 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:57.819 15:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:57.819 15:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:57.819 15:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:58.103 BaseBdev3 00:19:58.104 15:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:58.104 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:58.104 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:58.104 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:58.104 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:58.104 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:58.104 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:58.362 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:58.362 [ 00:19:58.362 { 00:19:58.362 "name": "BaseBdev3", 00:19:58.362 "aliases": [ 00:19:58.362 "f7ab731c-a1d2-4588-8477-4d9c01b21826" 00:19:58.362 ], 00:19:58.362 "product_name": "Malloc disk", 00:19:58.362 "block_size": 512, 00:19:58.362 "num_blocks": 65536, 00:19:58.362 "uuid": "f7ab731c-a1d2-4588-8477-4d9c01b21826", 00:19:58.362 "assigned_rate_limits": { 00:19:58.362 "rw_ios_per_sec": 0, 00:19:58.362 "rw_mbytes_per_sec": 0, 00:19:58.362 "r_mbytes_per_sec": 0, 00:19:58.362 "w_mbytes_per_sec": 0 00:19:58.362 }, 00:19:58.362 "claimed": false, 00:19:58.362 "zoned": false, 00:19:58.362 "supported_io_types": { 00:19:58.362 "read": true, 00:19:58.362 "write": true, 00:19:58.362 "unmap": true, 00:19:58.362 "flush": true, 00:19:58.362 "reset": true, 00:19:58.362 "nvme_admin": false, 00:19:58.362 "nvme_io": false, 00:19:58.362 "nvme_io_md": false, 00:19:58.362 "write_zeroes": true, 00:19:58.362 "zcopy": true, 00:19:58.362 "get_zone_info": false, 00:19:58.362 "zone_management": false, 00:19:58.362 "zone_append": false, 00:19:58.362 
"compare": false, 00:19:58.362 "compare_and_write": false, 00:19:58.362 "abort": true, 00:19:58.362 "seek_hole": false, 00:19:58.362 "seek_data": false, 00:19:58.362 "copy": true, 00:19:58.362 "nvme_iov_md": false 00:19:58.362 }, 00:19:58.362 "memory_domains": [ 00:19:58.362 { 00:19:58.362 "dma_device_id": "system", 00:19:58.362 "dma_device_type": 1 00:19:58.362 }, 00:19:58.362 { 00:19:58.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.362 "dma_device_type": 2 00:19:58.362 } 00:19:58.362 ], 00:19:58.362 "driver_specific": {} 00:19:58.362 } 00:19:58.362 ] 00:19:58.362 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:58.362 15:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:58.362 15:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:58.362 15:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:58.620 BaseBdev4 00:19:58.620 15:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:19:58.620 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:19:58.620 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:58.620 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:58.620 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:58.620 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:58.620 15:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:58.879 15:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:59.137 [ 00:19:59.137 { 00:19:59.138 "name": "BaseBdev4", 00:19:59.138 "aliases": [ 00:19:59.138 "0174f45f-9470-4c60-9862-243bdec1791e" 00:19:59.138 ], 00:19:59.138 "product_name": "Malloc disk", 00:19:59.138 "block_size": 512, 00:19:59.138 "num_blocks": 65536, 00:19:59.138 "uuid": "0174f45f-9470-4c60-9862-243bdec1791e", 00:19:59.138 "assigned_rate_limits": { 00:19:59.138 "rw_ios_per_sec": 0, 00:19:59.138 "rw_mbytes_per_sec": 0, 00:19:59.138 "r_mbytes_per_sec": 0, 00:19:59.138 "w_mbytes_per_sec": 0 00:19:59.138 }, 00:19:59.138 "claimed": false, 00:19:59.138 "zoned": false, 00:19:59.138 "supported_io_types": { 00:19:59.138 "read": true, 00:19:59.138 "write": true, 00:19:59.138 "unmap": true, 00:19:59.138 "flush": true, 00:19:59.138 "reset": true, 00:19:59.138 "nvme_admin": false, 00:19:59.138 "nvme_io": false, 00:19:59.138 "nvme_io_md": false, 00:19:59.138 "write_zeroes": true, 00:19:59.138 "zcopy": true, 00:19:59.138 "get_zone_info": false, 00:19:59.138 "zone_management": false, 00:19:59.138 "zone_append": false, 00:19:59.138 "compare": false, 00:19:59.138 "compare_and_write": false, 00:19:59.138 "abort": true, 00:19:59.138 "seek_hole": false, 00:19:59.138 "seek_data": false, 00:19:59.138 "copy": true, 00:19:59.138 "nvme_iov_md": false 00:19:59.138 }, 00:19:59.138 "memory_domains": [ 00:19:59.138 { 00:19:59.138 "dma_device_id": "system", 00:19:59.138 
"dma_device_type": 1 00:19:59.138 }, 00:19:59.138 { 00:19:59.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.138 "dma_device_type": 2 00:19:59.138 } 00:19:59.138 ], 00:19:59.138 "driver_specific": {} 00:19:59.138 } 00:19:59.138 ] 00:19:59.138 15:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:59.138 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:59.138 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:59.138 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:59.396 [2024-07-23 15:13:54.654384] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:59.396 [2024-07-23 15:13:54.654590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:59.396 [2024-07-23 15:13:54.654642] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:59.396 [2024-07-23 15:13:54.656756] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:59.396 [2024-07-23 15:13:54.656826] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:59.396 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:59.396 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:59.396 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:59.396 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:59.396 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:59.396 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:59.396 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.396 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.396 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.396 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.396 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.396 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.656 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.656 "name": "Existed_Raid", 00:19:59.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.656 "strip_size_kb": 64, 00:19:59.656 "state": "configuring", 00:19:59.656 "raid_level": "raid0", 00:19:59.656 "superblock": false, 00:19:59.656 "num_base_bdevs": 4, 00:19:59.656 "num_base_bdevs_discovered": 3, 00:19:59.656 "num_base_bdevs_operational": 4, 00:19:59.656 "base_bdevs_list": [ 00:19:59.656 { 00:19:59.656 "name": "BaseBdev1", 00:19:59.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.656 "is_configured": 
false, 00:19:59.656 "data_offset": 0, 00:19:59.656 "data_size": 0 00:19:59.656 }, 00:19:59.656 { 00:19:59.656 "name": "BaseBdev2", 00:19:59.656 "uuid": "c58ef647-c8af-46b0-b680-7e5656151878", 00:19:59.656 "is_configured": true, 00:19:59.656 "data_offset": 0, 00:19:59.656 "data_size": 65536 00:19:59.656 }, 00:19:59.656 { 00:19:59.656 "name": "BaseBdev3", 00:19:59.656 "uuid": "f7ab731c-a1d2-4588-8477-4d9c01b21826", 00:19:59.656 "is_configured": true, 00:19:59.656 "data_offset": 0, 00:19:59.656 "data_size": 65536 00:19:59.656 }, 00:19:59.656 { 00:19:59.656 "name": "BaseBdev4", 00:19:59.656 "uuid": "0174f45f-9470-4c60-9862-243bdec1791e", 00:19:59.656 "is_configured": true, 00:19:59.656 "data_offset": 0, 00:19:59.656 "data_size": 65536 00:19:59.656 } 00:19:59.656 ] 00:19:59.656 }' 00:19:59.656 15:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.656 15:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.915 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:59.915 [2024-07-23 15:13:55.270475] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:59.915 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:59.915 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:59.915 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:59.915 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:59.915 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:59.915 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:59.915 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.915 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.915 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.915 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.915 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.915 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.174 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:00.174 "name": "Existed_Raid", 00:20:00.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.174 "strip_size_kb": 64, 00:20:00.174 "state": "configuring", 00:20:00.174 "raid_level": "raid0", 00:20:00.174 "superblock": false, 00:20:00.174 "num_base_bdevs": 4, 00:20:00.174 "num_base_bdevs_discovered": 2, 00:20:00.174 "num_base_bdevs_operational": 4, 00:20:00.174 "base_bdevs_list": [ 00:20:00.174 { 00:20:00.174 "name": "BaseBdev1", 00:20:00.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.174 "is_configured": false, 00:20:00.174 "data_offset": 0, 00:20:00.174 "data_size": 0 00:20:00.174 }, 00:20:00.174 { 00:20:00.174 "name": null, 00:20:00.174 "uuid": 
"c58ef647-c8af-46b0-b680-7e5656151878", 00:20:00.174 "is_configured": false, 00:20:00.174 "data_offset": 0, 00:20:00.174 "data_size": 65536 00:20:00.174 }, 00:20:00.174 { 00:20:00.174 "name": "BaseBdev3", 00:20:00.174 "uuid": "f7ab731c-a1d2-4588-8477-4d9c01b21826", 00:20:00.174 "is_configured": true, 00:20:00.174 "data_offset": 0, 00:20:00.174 "data_size": 65536 00:20:00.174 }, 00:20:00.174 { 00:20:00.174 "name": "BaseBdev4", 00:20:00.174 "uuid": "0174f45f-9470-4c60-9862-243bdec1791e", 00:20:00.174 "is_configured": true, 00:20:00.174 "data_offset": 0, 00:20:00.174 "data_size": 65536 00:20:00.174 } 00:20:00.174 ] 00:20:00.174 }' 00:20:00.174 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:00.174 15:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.433 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.433 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:00.692 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:00.692 15:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:00.950 BaseBdev1 00:20:00.950 [2024-07-23 15:13:56.161921] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:00.950 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:00.950 15:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:00.950 15:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:00.950 15:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:00.950 15:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:00.950 15:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:00.950 15:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:01.209 15:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:01.209 [ 00:20:01.209 { 00:20:01.209 "name": "BaseBdev1", 00:20:01.209 "aliases": [ 00:20:01.209 "871c1c77-5c08-4ec9-91cb-5274313dbc51" 00:20:01.209 ], 00:20:01.209 "product_name": "Malloc disk", 00:20:01.209 "block_size": 512, 00:20:01.209 "num_blocks": 65536, 00:20:01.209 "uuid": "871c1c77-5c08-4ec9-91cb-5274313dbc51", 00:20:01.209 "assigned_rate_limits": { 00:20:01.209 "rw_ios_per_sec": 0, 00:20:01.209 "rw_mbytes_per_sec": 0, 00:20:01.209 "r_mbytes_per_sec": 0, 00:20:01.209 "w_mbytes_per_sec": 0 00:20:01.209 }, 00:20:01.209 "claimed": true, 00:20:01.209 "claim_type": "exclusive_write", 00:20:01.209 "zoned": false, 00:20:01.209 "supported_io_types": { 00:20:01.209 "read": true, 00:20:01.209 "write": true, 00:20:01.209 "unmap": true, 00:20:01.209 "flush": true, 00:20:01.209 "reset": true, 00:20:01.209 "nvme_admin": false, 00:20:01.209 "nvme_io": false, 00:20:01.209 
"nvme_io_md": false, 00:20:01.209 "write_zeroes": true, 00:20:01.209 "zcopy": true, 00:20:01.209 "get_zone_info": false, 00:20:01.209 "zone_management": false, 00:20:01.209 "zone_append": false, 00:20:01.209 "compare": false, 00:20:01.209 "compare_and_write": false, 00:20:01.209 "abort": true, 00:20:01.209 "seek_hole": false, 00:20:01.209 "seek_data": false, 00:20:01.209 "copy": true, 00:20:01.209 "nvme_iov_md": false 00:20:01.209 }, 00:20:01.209 "memory_domains": [ 00:20:01.209 { 00:20:01.209 "dma_device_id": "system", 00:20:01.209 "dma_device_type": 1 00:20:01.209 }, 00:20:01.209 { 00:20:01.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.209 "dma_device_type": 2 00:20:01.209 } 00:20:01.209 ], 00:20:01.209 "driver_specific": {} 00:20:01.209 } 00:20:01.209 ] 00:20:01.209 15:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:01.209 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:01.209 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:01.209 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:01.209 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:01.209 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:01.209 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:01.209 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:01.209 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:01.209 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:01.209 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:01.209 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.209 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.467 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:01.467 "name": "Existed_Raid", 00:20:01.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.467 "strip_size_kb": 64, 00:20:01.467 "state": "configuring", 00:20:01.467 "raid_level": "raid0", 00:20:01.467 "superblock": false, 00:20:01.467 "num_base_bdevs": 4, 00:20:01.467 "num_base_bdevs_discovered": 3, 00:20:01.467 "num_base_bdevs_operational": 4, 00:20:01.467 "base_bdevs_list": [ 00:20:01.467 { 00:20:01.467 "name": "BaseBdev1", 00:20:01.467 "uuid": "871c1c77-5c08-4ec9-91cb-5274313dbc51", 00:20:01.467 "is_configured": true, 00:20:01.467 "data_offset": 0, 00:20:01.467 "data_size": 65536 00:20:01.467 }, 00:20:01.467 { 00:20:01.467 "name": null, 00:20:01.467 "uuid": "c58ef647-c8af-46b0-b680-7e5656151878", 00:20:01.467 "is_configured": false, 00:20:01.467 "data_offset": 0, 00:20:01.467 "data_size": 65536 00:20:01.467 }, 00:20:01.467 { 00:20:01.467 "name": "BaseBdev3", 00:20:01.467 "uuid": "f7ab731c-a1d2-4588-8477-4d9c01b21826", 00:20:01.467 "is_configured": true, 00:20:01.467 "data_offset": 0, 00:20:01.467 "data_size": 65536 00:20:01.467 }, 00:20:01.467 { 00:20:01.467 
"name": "BaseBdev4", 00:20:01.467 "uuid": "0174f45f-9470-4c60-9862-243bdec1791e", 00:20:01.467 "is_configured": true, 00:20:01.467 "data_offset": 0, 00:20:01.467 "data_size": 65536 00:20:01.467 } 00:20:01.467 ] 00:20:01.467 }' 00:20:01.467 15:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:01.467 15:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.726 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.726 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:01.985 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:01.985 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:01.985 [2024-07-23 15:13:57.374275] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:01.985 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:01.985 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:01.985 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:01.985 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:01.985 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:01.985 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:01.985 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:01.985 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:01.985 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:01.985 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:01.985 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.985 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.244 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.244 "name": "Existed_Raid", 00:20:02.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.244 "strip_size_kb": 64, 00:20:02.244 "state": "configuring", 00:20:02.244 "raid_level": "raid0", 00:20:02.244 "superblock": false, 00:20:02.244 "num_base_bdevs": 4, 00:20:02.244 "num_base_bdevs_discovered": 2, 00:20:02.244 "num_base_bdevs_operational": 4, 00:20:02.244 "base_bdevs_list": [ 00:20:02.244 { 00:20:02.244 "name": "BaseBdev1", 00:20:02.244 "uuid": "871c1c77-5c08-4ec9-91cb-5274313dbc51", 00:20:02.244 "is_configured": true, 00:20:02.244 "data_offset": 0, 00:20:02.244 "data_size": 65536 00:20:02.244 }, 00:20:02.244 { 00:20:02.244 "name": null, 00:20:02.244 "uuid": "c58ef647-c8af-46b0-b680-7e5656151878", 00:20:02.244 "is_configured": false, 00:20:02.244 "data_offset": 0, 00:20:02.244 "data_size": 
65536 00:20:02.244 }, 00:20:02.244 { 00:20:02.244 "name": null, 00:20:02.244 "uuid": "f7ab731c-a1d2-4588-8477-4d9c01b21826", 00:20:02.244 "is_configured": false, 00:20:02.244 "data_offset": 0, 00:20:02.244 "data_size": 65536 00:20:02.244 }, 00:20:02.244 { 00:20:02.244 "name": "BaseBdev4", 00:20:02.244 "uuid": "0174f45f-9470-4c60-9862-243bdec1791e", 00:20:02.244 "is_configured": true, 00:20:02.244 "data_offset": 0, 00:20:02.244 "data_size": 65536 00:20:02.244 } 00:20:02.244 ] 00:20:02.244 }' 00:20:02.244 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.244 15:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.503 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.503 15:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:02.762 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:02.763 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:03.022 [2024-07-23 15:13:58.294472] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:03.022 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:03.022 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:03.022 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:03.022 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:03.022 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:03.022 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:03.022 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:03.022 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:03.022 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:03.022 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:03.022 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.022 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.281 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:03.281 "name": "Existed_Raid", 00:20:03.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.281 "strip_size_kb": 64, 00:20:03.281 "state": "configuring", 00:20:03.281 "raid_level": "raid0", 00:20:03.281 "superblock": false, 00:20:03.281 "num_base_bdevs": 4, 00:20:03.281 "num_base_bdevs_discovered": 3, 00:20:03.281 "num_base_bdevs_operational": 4, 00:20:03.281 "base_bdevs_list": [ 00:20:03.281 { 00:20:03.281 "name": "BaseBdev1", 00:20:03.281 "uuid": "871c1c77-5c08-4ec9-91cb-5274313dbc51", 00:20:03.281 
"is_configured": true, 00:20:03.281 "data_offset": 0, 00:20:03.281 "data_size": 65536 00:20:03.281 }, 00:20:03.281 { 00:20:03.281 "name": null, 00:20:03.281 "uuid": "c58ef647-c8af-46b0-b680-7e5656151878", 00:20:03.281 "is_configured": false, 00:20:03.281 "data_offset": 0, 00:20:03.281 "data_size": 65536 00:20:03.281 }, 00:20:03.281 { 00:20:03.281 "name": "BaseBdev3", 00:20:03.281 "uuid": "f7ab731c-a1d2-4588-8477-4d9c01b21826", 00:20:03.281 "is_configured": true, 00:20:03.281 "data_offset": 0, 00:20:03.281 "data_size": 65536 00:20:03.281 }, 00:20:03.281 { 00:20:03.281 "name": "BaseBdev4", 00:20:03.281 "uuid": "0174f45f-9470-4c60-9862-243bdec1791e", 00:20:03.281 "is_configured": true, 00:20:03.281 "data_offset": 0, 00:20:03.281 "data_size": 65536 00:20:03.281 } 00:20:03.281 ] 00:20:03.281 }' 00:20:03.281 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:03.281 15:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.540 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:03.540 15:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.799 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:03.799 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:04.058 [2024-07-23 15:13:59.378772] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:04.058 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:04.058 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:04.058 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:04.058 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:04.058 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:04.059 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:04.059 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.059 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.059 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.059 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.059 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.059 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.317 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.317 "name": "Existed_Raid", 00:20:04.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.317 "strip_size_kb": 64, 00:20:04.317 "state": "configuring", 00:20:04.317 "raid_level": "raid0", 00:20:04.317 "superblock": false, 00:20:04.318 
"num_base_bdevs": 4, 00:20:04.318 "num_base_bdevs_discovered": 2, 00:20:04.318 "num_base_bdevs_operational": 4, 00:20:04.318 "base_bdevs_list": [ 00:20:04.318 { 00:20:04.318 "name": null, 00:20:04.318 "uuid": "871c1c77-5c08-4ec9-91cb-5274313dbc51", 00:20:04.318 "is_configured": false, 00:20:04.318 "data_offset": 0, 00:20:04.318 "data_size": 65536 00:20:04.318 }, 00:20:04.318 { 00:20:04.318 "name": null, 00:20:04.318 "uuid": "c58ef647-c8af-46b0-b680-7e5656151878", 00:20:04.318 "is_configured": false, 00:20:04.318 "data_offset": 0, 00:20:04.318 "data_size": 65536 00:20:04.318 }, 00:20:04.318 { 00:20:04.318 "name": "BaseBdev3", 00:20:04.318 "uuid": "f7ab731c-a1d2-4588-8477-4d9c01b21826", 00:20:04.318 "is_configured": true, 00:20:04.318 "data_offset": 0, 00:20:04.318 "data_size": 65536 00:20:04.318 }, 00:20:04.318 { 00:20:04.318 "name": "BaseBdev4", 00:20:04.318 "uuid": "0174f45f-9470-4c60-9862-243bdec1791e", 00:20:04.318 "is_configured": true, 00:20:04.318 "data_offset": 0, 00:20:04.318 "data_size": 65536 00:20:04.318 } 00:20:04.318 ] 00:20:04.318 }' 00:20:04.318 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.318 15:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.577 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.577 15:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:04.835 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:04.835 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:05.094 [2024-07-23 15:14:00.327256] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:05.094 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:05.094 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:05.094 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:05.094 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:05.094 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:05.094 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:05.094 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:05.094 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:05.094 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:05.094 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:05.094 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.094 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.353 15:14:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:05.353 "name": "Existed_Raid", 00:20:05.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.353 "strip_size_kb": 64, 00:20:05.353 "state": "configuring", 00:20:05.353 "raid_level": "raid0", 00:20:05.353 "superblock": false, 00:20:05.353 "num_base_bdevs": 4, 00:20:05.353 "num_base_bdevs_discovered": 3, 00:20:05.353 "num_base_bdevs_operational": 4, 00:20:05.353 "base_bdevs_list": [ 00:20:05.353 { 00:20:05.353 "name": null, 00:20:05.353 "uuid": "871c1c77-5c08-4ec9-91cb-5274313dbc51", 00:20:05.353 "is_configured": false, 00:20:05.353 "data_offset": 0, 00:20:05.353 "data_size": 65536 00:20:05.353 }, 00:20:05.353 { 00:20:05.353 "name": "BaseBdev2", 00:20:05.353 "uuid": "c58ef647-c8af-46b0-b680-7e5656151878", 00:20:05.353 "is_configured": true, 00:20:05.353 "data_offset": 0, 00:20:05.353 "data_size": 65536 00:20:05.353 }, 00:20:05.353 { 00:20:05.353 "name": "BaseBdev3", 00:20:05.353 "uuid": "f7ab731c-a1d2-4588-8477-4d9c01b21826", 00:20:05.353 "is_configured": true, 00:20:05.353 "data_offset": 0, 00:20:05.353 "data_size": 65536 00:20:05.353 }, 00:20:05.353 { 00:20:05.353 "name": "BaseBdev4", 00:20:05.353 "uuid": "0174f45f-9470-4c60-9862-243bdec1791e", 00:20:05.353 "is_configured": true, 00:20:05.353 "data_offset": 0, 00:20:05.353 "data_size": 65536 00:20:05.353 } 00:20:05.353 ] 00:20:05.353 }' 00:20:05.353 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:05.353 15:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.612 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.612 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:05.612 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:05.612 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.612 15:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:05.871 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 871c1c77-5c08-4ec9-91cb-5274313dbc51 00:20:06.130 [2024-07-23 15:14:01.386713] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:06.130 [2024-07-23 15:14:01.386767] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:20:06.130 [2024-07-23 15:14:01.386776] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:20:06.130 [2024-07-23 15:14:01.386894] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002600 00:20:06.130 [2024-07-23 15:14:01.387174] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:20:06.130 [2024-07-23 15:14:01.387190] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008180 00:20:06.130 [2024-07-23 15:14:01.387367] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.130 NewBaseBdev 00:20:06.130 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:20:06.130 15:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:06.130 15:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:06.130 15:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:06.130 15:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:06.130 15:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:06.130 15:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:06.389 15:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:06.389 [ 00:20:06.389 { 00:20:06.389 "name": "NewBaseBdev", 00:20:06.389 "aliases": [ 00:20:06.389 "871c1c77-5c08-4ec9-91cb-5274313dbc51" 00:20:06.389 ], 00:20:06.389 "product_name": "Malloc disk", 00:20:06.389 "block_size": 512, 00:20:06.389 "num_blocks": 65536, 00:20:06.389 "uuid": "871c1c77-5c08-4ec9-91cb-5274313dbc51", 00:20:06.389 "assigned_rate_limits": { 00:20:06.389 "rw_ios_per_sec": 0, 00:20:06.389 "rw_mbytes_per_sec": 0, 00:20:06.389 "r_mbytes_per_sec": 0, 00:20:06.389 "w_mbytes_per_sec": 0 00:20:06.389 }, 00:20:06.389 "claimed": true, 00:20:06.389 "claim_type": "exclusive_write", 00:20:06.389 "zoned": false, 00:20:06.389 "supported_io_types": { 00:20:06.389 "read": true, 00:20:06.389 "write": true, 00:20:06.389 "unmap": true, 00:20:06.389 "flush": true, 00:20:06.389 "reset": true, 00:20:06.389 "nvme_admin": false, 00:20:06.389 "nvme_io": false, 00:20:06.389 "nvme_io_md": false, 00:20:06.389 "write_zeroes": true, 00:20:06.389 "zcopy": true, 00:20:06.389 "get_zone_info": false, 00:20:06.389 "zone_management": false, 00:20:06.389 "zone_append": false, 00:20:06.389 "compare": false, 00:20:06.389 "compare_and_write": false, 00:20:06.389 "abort": true, 00:20:06.389 "seek_hole": false, 00:20:06.389 "seek_data": false, 00:20:06.389 "copy": true, 00:20:06.389 "nvme_iov_md": false 00:20:06.389 }, 00:20:06.389 "memory_domains": [ 00:20:06.389 { 00:20:06.389 "dma_device_id": "system", 00:20:06.389 "dma_device_type": 1 00:20:06.389 }, 00:20:06.389 { 00:20:06.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.389 "dma_device_type": 2 00:20:06.389 } 00:20:06.389 ], 00:20:06.389 "driver_specific": {} 00:20:06.389 } 00:20:06.389 ] 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:06.648 "name": "Existed_Raid", 00:20:06.648 "uuid": "9ce3cc02-8a8b-481d-bbe4-a9c96951da12", 00:20:06.648 "strip_size_kb": 64, 00:20:06.648 "state": "online", 00:20:06.648 "raid_level": "raid0", 00:20:06.648 "superblock": false, 00:20:06.648 "num_base_bdevs": 4, 00:20:06.648 "num_base_bdevs_discovered": 4, 00:20:06.648 "num_base_bdevs_operational": 4, 00:20:06.648 "base_bdevs_list": [ 00:20:06.648 { 00:20:06.648 "name": "NewBaseBdev", 00:20:06.648 "uuid": "871c1c77-5c08-4ec9-91cb-5274313dbc51", 00:20:06.648 "is_configured": true, 00:20:06.648 "data_offset": 0, 00:20:06.648 "data_size": 65536 00:20:06.648 }, 00:20:06.648 { 00:20:06.648 "name": "BaseBdev2", 00:20:06.648 "uuid": "c58ef647-c8af-46b0-b680-7e5656151878", 00:20:06.648 "is_configured": true, 00:20:06.648 "data_offset": 0, 00:20:06.648 "data_size": 65536 00:20:06.648 }, 00:20:06.648 { 00:20:06.648 "name": "BaseBdev3", 00:20:06.648 "uuid": "f7ab731c-a1d2-4588-8477-4d9c01b21826", 00:20:06.648 "is_configured": true, 00:20:06.648 "data_offset": 0, 00:20:06.648 "data_size": 65536 00:20:06.648 }, 00:20:06.648 { 00:20:06.648 "name": "BaseBdev4", 00:20:06.648 "uuid": "0174f45f-9470-4c60-9862-243bdec1791e", 00:20:06.648 "is_configured": true, 00:20:06.648 "data_offset": 0, 00:20:06.648 "data_size": 65536 00:20:06.648 } 00:20:06.648 ] 00:20:06.648 }' 00:20:06.648 15:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:06.648 15:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.909 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:06.909 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:06.909 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:06.909 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:06.909 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:06.909 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:06.909 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:06.909 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:07.169 [2024-07-23 15:14:02.431398] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:07.169 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:07.169 "name": "Existed_Raid", 00:20:07.169 "aliases": [ 00:20:07.169 
"9ce3cc02-8a8b-481d-bbe4-a9c96951da12" 00:20:07.169 ], 00:20:07.169 "product_name": "Raid Volume", 00:20:07.169 "block_size": 512, 00:20:07.169 "num_blocks": 262144, 00:20:07.169 "uuid": "9ce3cc02-8a8b-481d-bbe4-a9c96951da12", 00:20:07.169 "assigned_rate_limits": { 00:20:07.169 "rw_ios_per_sec": 0, 00:20:07.169 "rw_mbytes_per_sec": 0, 00:20:07.169 "r_mbytes_per_sec": 0, 00:20:07.169 "w_mbytes_per_sec": 0 00:20:07.169 }, 00:20:07.169 "claimed": false, 00:20:07.169 "zoned": false, 00:20:07.169 "supported_io_types": { 00:20:07.169 "read": true, 00:20:07.169 "write": true, 00:20:07.169 "unmap": true, 00:20:07.169 "flush": true, 00:20:07.169 "reset": true, 00:20:07.169 "nvme_admin": false, 00:20:07.169 "nvme_io": false, 00:20:07.169 "nvme_io_md": false, 00:20:07.169 "write_zeroes": true, 00:20:07.169 "zcopy": false, 00:20:07.169 "get_zone_info": false, 00:20:07.169 "zone_management": false, 00:20:07.169 "zone_append": false, 00:20:07.169 "compare": false, 00:20:07.169 "compare_and_write": false, 00:20:07.169 "abort": false, 00:20:07.169 "seek_hole": false, 00:20:07.169 "seek_data": false, 00:20:07.169 "copy": false, 00:20:07.169 "nvme_iov_md": false 00:20:07.169 }, 00:20:07.169 "memory_domains": [ 00:20:07.169 { 00:20:07.169 "dma_device_id": "system", 00:20:07.169 "dma_device_type": 1 00:20:07.169 }, 00:20:07.169 { 00:20:07.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.169 "dma_device_type": 2 00:20:07.169 }, 00:20:07.169 { 00:20:07.169 "dma_device_id": "system", 00:20:07.169 "dma_device_type": 1 00:20:07.169 }, 00:20:07.169 { 00:20:07.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.169 "dma_device_type": 2 00:20:07.169 }, 00:20:07.169 { 00:20:07.169 "dma_device_id": "system", 00:20:07.169 "dma_device_type": 1 00:20:07.169 }, 00:20:07.169 { 00:20:07.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.169 "dma_device_type": 2 00:20:07.169 }, 00:20:07.169 { 00:20:07.169 "dma_device_id": "system", 00:20:07.169 "dma_device_type": 1 00:20:07.169 }, 00:20:07.169 { 00:20:07.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.169 "dma_device_type": 2 00:20:07.169 } 00:20:07.169 ], 00:20:07.169 "driver_specific": { 00:20:07.169 "raid": { 00:20:07.169 "uuid": "9ce3cc02-8a8b-481d-bbe4-a9c96951da12", 00:20:07.169 "strip_size_kb": 64, 00:20:07.169 "state": "online", 00:20:07.169 "raid_level": "raid0", 00:20:07.169 "superblock": false, 00:20:07.169 "num_base_bdevs": 4, 00:20:07.169 "num_base_bdevs_discovered": 4, 00:20:07.169 "num_base_bdevs_operational": 4, 00:20:07.169 "base_bdevs_list": [ 00:20:07.169 { 00:20:07.169 "name": "NewBaseBdev", 00:20:07.169 "uuid": "871c1c77-5c08-4ec9-91cb-5274313dbc51", 00:20:07.169 "is_configured": true, 00:20:07.169 "data_offset": 0, 00:20:07.169 "data_size": 65536 00:20:07.169 }, 00:20:07.169 { 00:20:07.169 "name": "BaseBdev2", 00:20:07.169 "uuid": "c58ef647-c8af-46b0-b680-7e5656151878", 00:20:07.169 "is_configured": true, 00:20:07.169 "data_offset": 0, 00:20:07.169 "data_size": 65536 00:20:07.169 }, 00:20:07.169 { 00:20:07.169 "name": "BaseBdev3", 00:20:07.169 "uuid": "f7ab731c-a1d2-4588-8477-4d9c01b21826", 00:20:07.169 "is_configured": true, 00:20:07.169 "data_offset": 0, 00:20:07.169 "data_size": 65536 00:20:07.169 }, 00:20:07.169 { 00:20:07.169 "name": "BaseBdev4", 00:20:07.169 "uuid": "0174f45f-9470-4c60-9862-243bdec1791e", 00:20:07.169 "is_configured": true, 00:20:07.169 "data_offset": 0, 00:20:07.169 "data_size": 65536 00:20:07.169 } 00:20:07.169 ] 00:20:07.169 } 00:20:07.169 } 00:20:07.169 }' 00:20:07.169 15:14:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:07.169 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:07.169 BaseBdev2 00:20:07.169 BaseBdev3 00:20:07.169 BaseBdev4' 00:20:07.169 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:07.169 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:07.169 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:07.428 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:07.428 "name": "NewBaseBdev", 00:20:07.428 "aliases": [ 00:20:07.428 "871c1c77-5c08-4ec9-91cb-5274313dbc51" 00:20:07.428 ], 00:20:07.428 "product_name": "Malloc disk", 00:20:07.428 "block_size": 512, 00:20:07.428 "num_blocks": 65536, 00:20:07.428 "uuid": "871c1c77-5c08-4ec9-91cb-5274313dbc51", 00:20:07.428 "assigned_rate_limits": { 00:20:07.428 "rw_ios_per_sec": 0, 00:20:07.428 "rw_mbytes_per_sec": 0, 00:20:07.428 "r_mbytes_per_sec": 0, 00:20:07.428 "w_mbytes_per_sec": 0 00:20:07.428 }, 00:20:07.428 "claimed": true, 00:20:07.428 "claim_type": "exclusive_write", 00:20:07.428 "zoned": false, 00:20:07.428 "supported_io_types": { 00:20:07.428 "read": true, 00:20:07.428 "write": true, 00:20:07.428 "unmap": true, 00:20:07.428 "flush": true, 00:20:07.428 "reset": true, 00:20:07.428 "nvme_admin": false, 00:20:07.428 "nvme_io": false, 00:20:07.428 "nvme_io_md": false, 00:20:07.428 "write_zeroes": true, 00:20:07.428 "zcopy": true, 00:20:07.428 "get_zone_info": false, 00:20:07.428 "zone_management": false, 00:20:07.428 "zone_append": false, 00:20:07.428 "compare": false, 00:20:07.428 "compare_and_write": false, 00:20:07.428 "abort": true, 00:20:07.428 "seek_hole": false, 00:20:07.428 "seek_data": false, 00:20:07.428 "copy": true, 00:20:07.428 "nvme_iov_md": false 00:20:07.428 }, 00:20:07.428 "memory_domains": [ 00:20:07.428 { 00:20:07.428 "dma_device_id": "system", 00:20:07.428 "dma_device_type": 1 00:20:07.428 }, 00:20:07.428 { 00:20:07.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.428 "dma_device_type": 2 00:20:07.428 } 00:20:07.428 ], 00:20:07.428 "driver_specific": {} 00:20:07.428 }' 00:20:07.428 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:07.428 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:07.428 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:07.428 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:07.428 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:07.428 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:07.428 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:07.428 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:07.428 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:07.428 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:07.428 15:14:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:07.428 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:07.428 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:07.429 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:07.429 15:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:07.688 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:07.688 "name": "BaseBdev2", 00:20:07.688 "aliases": [ 00:20:07.688 "c58ef647-c8af-46b0-b680-7e5656151878" 00:20:07.688 ], 00:20:07.688 "product_name": "Malloc disk", 00:20:07.688 "block_size": 512, 00:20:07.688 "num_blocks": 65536, 00:20:07.688 "uuid": "c58ef647-c8af-46b0-b680-7e5656151878", 00:20:07.688 "assigned_rate_limits": { 00:20:07.688 "rw_ios_per_sec": 0, 00:20:07.688 "rw_mbytes_per_sec": 0, 00:20:07.688 "r_mbytes_per_sec": 0, 00:20:07.688 "w_mbytes_per_sec": 0 00:20:07.688 }, 00:20:07.688 "claimed": true, 00:20:07.688 "claim_type": "exclusive_write", 00:20:07.688 "zoned": false, 00:20:07.688 "supported_io_types": { 00:20:07.688 "read": true, 00:20:07.688 "write": true, 00:20:07.688 "unmap": true, 00:20:07.688 "flush": true, 00:20:07.688 "reset": true, 00:20:07.688 "nvme_admin": false, 00:20:07.688 "nvme_io": false, 00:20:07.688 "nvme_io_md": false, 00:20:07.688 "write_zeroes": true, 00:20:07.688 "zcopy": true, 00:20:07.688 "get_zone_info": false, 00:20:07.688 "zone_management": false, 00:20:07.688 "zone_append": false, 00:20:07.688 "compare": false, 00:20:07.688 "compare_and_write": false, 00:20:07.688 "abort": true, 00:20:07.688 "seek_hole": false, 00:20:07.688 "seek_data": false, 00:20:07.688 "copy": true, 00:20:07.688 "nvme_iov_md": false 00:20:07.688 }, 00:20:07.688 "memory_domains": [ 00:20:07.688 { 00:20:07.688 "dma_device_id": "system", 00:20:07.688 "dma_device_type": 1 00:20:07.688 }, 00:20:07.688 { 00:20:07.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.688 "dma_device_type": 2 00:20:07.688 } 00:20:07.688 ], 00:20:07.688 "driver_specific": {} 00:20:07.688 }' 00:20:07.688 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:07.688 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:07.947 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:07.947 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:07.947 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:07.947 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:07.947 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:07.947 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:07.947 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:07.947 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:07.947 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:07.947 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:07.947 15:14:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:07.947 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:07.947 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:07.947 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:07.947 "name": "BaseBdev3", 00:20:07.947 "aliases": [ 00:20:07.947 "f7ab731c-a1d2-4588-8477-4d9c01b21826" 00:20:07.947 ], 00:20:07.947 "product_name": "Malloc disk", 00:20:07.947 "block_size": 512, 00:20:07.947 "num_blocks": 65536, 00:20:07.947 "uuid": "f7ab731c-a1d2-4588-8477-4d9c01b21826", 00:20:07.947 "assigned_rate_limits": { 00:20:07.947 "rw_ios_per_sec": 0, 00:20:07.947 "rw_mbytes_per_sec": 0, 00:20:07.947 "r_mbytes_per_sec": 0, 00:20:07.947 "w_mbytes_per_sec": 0 00:20:07.947 }, 00:20:07.947 "claimed": true, 00:20:07.947 "claim_type": "exclusive_write", 00:20:07.947 "zoned": false, 00:20:07.947 "supported_io_types": { 00:20:07.947 "read": true, 00:20:07.947 "write": true, 00:20:07.947 "unmap": true, 00:20:07.947 "flush": true, 00:20:07.947 "reset": true, 00:20:07.947 "nvme_admin": false, 00:20:07.947 "nvme_io": false, 00:20:07.947 "nvme_io_md": false, 00:20:07.947 "write_zeroes": true, 00:20:07.947 "zcopy": true, 00:20:07.947 "get_zone_info": false, 00:20:07.947 "zone_management": false, 00:20:07.947 "zone_append": false, 00:20:07.947 "compare": false, 00:20:07.947 "compare_and_write": false, 00:20:07.947 "abort": true, 00:20:07.947 "seek_hole": false, 00:20:07.947 "seek_data": false, 00:20:07.947 "copy": true, 00:20:07.947 "nvme_iov_md": false 00:20:07.947 }, 00:20:07.947 "memory_domains": [ 00:20:07.947 { 00:20:07.947 "dma_device_id": "system", 00:20:07.947 "dma_device_type": 1 00:20:07.947 }, 00:20:07.947 { 00:20:07.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.947 "dma_device_type": 2 00:20:07.947 } 00:20:07.947 ], 00:20:07.947 "driver_specific": {} 00:20:07.947 }' 00:20:07.947 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.206 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.206 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:08.206 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.206 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.206 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:08.206 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.206 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.206 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:08.206 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:08.206 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:08.206 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:08.206 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:08.206 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:08.206 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:08.465 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:08.465 "name": "BaseBdev4", 00:20:08.465 "aliases": [ 00:20:08.465 "0174f45f-9470-4c60-9862-243bdec1791e" 00:20:08.465 ], 00:20:08.465 "product_name": "Malloc disk", 00:20:08.465 "block_size": 512, 00:20:08.465 "num_blocks": 65536, 00:20:08.465 "uuid": "0174f45f-9470-4c60-9862-243bdec1791e", 00:20:08.465 "assigned_rate_limits": { 00:20:08.465 "rw_ios_per_sec": 0, 00:20:08.465 "rw_mbytes_per_sec": 0, 00:20:08.465 "r_mbytes_per_sec": 0, 00:20:08.466 "w_mbytes_per_sec": 0 00:20:08.466 }, 00:20:08.466 "claimed": true, 00:20:08.466 "claim_type": "exclusive_write", 00:20:08.466 "zoned": false, 00:20:08.466 "supported_io_types": { 00:20:08.466 "read": true, 00:20:08.466 "write": true, 00:20:08.466 "unmap": true, 00:20:08.466 "flush": true, 00:20:08.466 "reset": true, 00:20:08.466 "nvme_admin": false, 00:20:08.466 "nvme_io": false, 00:20:08.466 "nvme_io_md": false, 00:20:08.466 "write_zeroes": true, 00:20:08.466 "zcopy": true, 00:20:08.466 "get_zone_info": false, 00:20:08.466 "zone_management": false, 00:20:08.466 "zone_append": false, 00:20:08.466 "compare": false, 00:20:08.466 "compare_and_write": false, 00:20:08.466 "abort": true, 00:20:08.466 "seek_hole": false, 00:20:08.466 "seek_data": false, 00:20:08.466 "copy": true, 00:20:08.466 "nvme_iov_md": false 00:20:08.466 }, 00:20:08.466 "memory_domains": [ 00:20:08.466 { 00:20:08.466 "dma_device_id": "system", 00:20:08.466 "dma_device_type": 1 00:20:08.466 }, 00:20:08.466 { 00:20:08.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.466 "dma_device_type": 2 00:20:08.466 } 00:20:08.466 ], 00:20:08.466 "driver_specific": {} 00:20:08.466 }' 00:20:08.466 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.466 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.466 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:08.466 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.466 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.466 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:08.466 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.466 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:08.466 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:08.466 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:08.466 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:08.466 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:08.466 15:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:08.725 [2024-07-23 15:14:04.083406] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:08.725 [2024-07-23 15:14:04.083455] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:20:08.725 [2024-07-23 15:14:04.083536] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:08.725 [2024-07-23 15:14:04.083604] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:08.725 [2024-07-23 15:14:04.083617] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name Existed_Raid, state offline 00:20:08.725 15:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 98873 00:20:08.725 15:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 98873 ']' 00:20:08.725 15:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 98873 00:20:08.725 15:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:20:08.725 15:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:08.725 15:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98873 00:20:08.725 killing process with pid 98873 00:20:08.725 15:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:08.725 15:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:08.725 15:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98873' 00:20:08.725 15:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 98873 00:20:08.725 [2024-07-23 15:14:04.141775] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:08.725 15:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 98873 00:20:08.983 [2024-07-23 15:14:04.189142] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:09.242 ************************************ 00:20:09.242 END TEST raid_state_function_test 00:20:09.242 ************************************ 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:09.242 00:20:09.242 real 0m23.008s 00:20:09.242 user 0m40.488s 00:20:09.242 sys 0m5.088s 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.242 15:14:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:09.242 15:14:04 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:20:09.242 15:14:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:09.242 15:14:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.242 15:14:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:09.242 ************************************ 00:20:09.242 START TEST raid_state_function_test_sb 00:20:09.242 ************************************ 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:20:09.242 15:14:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=99820 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 99820' 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:09.242 Process raid pid: 99820 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 99820 
/var/tmp/spdk-raid.sock 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 99820 ']' 00:20:09.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.242 15:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.242 [2024-07-23 15:14:04.586952] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:20:09.242 [2024-07-23 15:14:04.587139] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.502 [2024-07-23 15:14:04.739335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.502 [2024-07-23 15:14:04.795641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.502 [2024-07-23 15:14:04.842701] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.438 15:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.438 15:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:20:10.439 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:10.439 [2024-07-23 15:14:05.700783] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:10.439 [2024-07-23 15:14:05.700864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:10.439 [2024-07-23 15:14:05.700876] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:10.439 [2024-07-23 15:14:05.700890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:10.439 [2024-07-23 15:14:05.700901] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:10.439 [2024-07-23 15:14:05.700916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:10.439 [2024-07-23 15:14:05.700924] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:10.439 [2024-07-23 15:14:05.700940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:10.439 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:10.439 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:10.439 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:20:10.439 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:10.439 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:10.439 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:10.439 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.439 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.439 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:10.439 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.439 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.439 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.698 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.698 "name": "Existed_Raid", 00:20:10.698 "uuid": "9a910f6c-0c7b-4b9d-9792-78b9db0292ea", 00:20:10.698 "strip_size_kb": 64, 00:20:10.698 "state": "configuring", 00:20:10.698 "raid_level": "raid0", 00:20:10.698 "superblock": true, 00:20:10.698 "num_base_bdevs": 4, 00:20:10.698 "num_base_bdevs_discovered": 0, 00:20:10.698 "num_base_bdevs_operational": 4, 00:20:10.698 "base_bdevs_list": [ 00:20:10.698 { 00:20:10.698 "name": "BaseBdev1", 00:20:10.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.698 "is_configured": false, 00:20:10.698 "data_offset": 0, 00:20:10.698 "data_size": 0 00:20:10.698 }, 00:20:10.698 { 00:20:10.698 "name": "BaseBdev2", 00:20:10.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.698 "is_configured": false, 00:20:10.698 "data_offset": 0, 00:20:10.698 "data_size": 0 00:20:10.698 }, 00:20:10.698 { 00:20:10.698 "name": "BaseBdev3", 00:20:10.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.698 "is_configured": false, 00:20:10.698 "data_offset": 0, 00:20:10.698 "data_size": 0 00:20:10.698 }, 00:20:10.698 { 00:20:10.698 "name": "BaseBdev4", 00:20:10.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.698 "is_configured": false, 00:20:10.698 "data_offset": 0, 00:20:10.698 "data_size": 0 00:20:10.698 } 00:20:10.698 ] 00:20:10.698 }' 00:20:10.698 15:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.698 15:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.957 15:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:11.216 [2024-07-23 15:14:06.444779] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:11.216 [2024-07-23 15:14:06.445018] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:20:11.216 15:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:11.475 [2024-07-23 15:14:06.704896] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:11.475 [2024-07-23 15:14:06.705091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:11.475 [2024-07-23 15:14:06.705112] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:11.475 [2024-07-23 15:14:06.705127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:11.475 [2024-07-23 15:14:06.705135] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:11.475 [2024-07-23 15:14:06.705148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:11.475 [2024-07-23 15:14:06.705155] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:11.475 [2024-07-23 15:14:06.705168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:11.475 15:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:11.475 BaseBdev1 00:20:11.475 [2024-07-23 15:14:06.894758] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:11.734 15:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:11.734 15:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:11.734 15:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:11.734 15:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:11.734 15:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:11.734 15:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:11.734 15:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:11.734 15:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:11.993 [ 00:20:11.993 { 00:20:11.993 "name": "BaseBdev1", 00:20:11.993 "aliases": [ 00:20:11.993 "d6ff91f0-40e0-4952-9e05-3ceee9bf0140" 00:20:11.993 ], 00:20:11.993 "product_name": "Malloc disk", 00:20:11.993 "block_size": 512, 00:20:11.993 "num_blocks": 65536, 00:20:11.993 "uuid": "d6ff91f0-40e0-4952-9e05-3ceee9bf0140", 00:20:11.993 "assigned_rate_limits": { 00:20:11.993 "rw_ios_per_sec": 0, 00:20:11.993 "rw_mbytes_per_sec": 0, 00:20:11.993 "r_mbytes_per_sec": 0, 00:20:11.993 "w_mbytes_per_sec": 0 00:20:11.994 }, 00:20:11.994 "claimed": true, 00:20:11.994 "claim_type": "exclusive_write", 00:20:11.994 "zoned": false, 00:20:11.994 "supported_io_types": { 00:20:11.994 "read": true, 00:20:11.994 "write": true, 00:20:11.994 "unmap": true, 00:20:11.994 "flush": true, 00:20:11.994 "reset": true, 00:20:11.994 "nvme_admin": false, 00:20:11.994 "nvme_io": false, 00:20:11.994 "nvme_io_md": false, 00:20:11.994 "write_zeroes": true, 00:20:11.994 "zcopy": true, 00:20:11.994 "get_zone_info": false, 00:20:11.994 "zone_management": false, 00:20:11.994 "zone_append": false, 00:20:11.994 "compare": false, 
00:20:11.994 "compare_and_write": false, 00:20:11.994 "abort": true, 00:20:11.994 "seek_hole": false, 00:20:11.994 "seek_data": false, 00:20:11.994 "copy": true, 00:20:11.994 "nvme_iov_md": false 00:20:11.994 }, 00:20:11.994 "memory_domains": [ 00:20:11.994 { 00:20:11.994 "dma_device_id": "system", 00:20:11.994 "dma_device_type": 1 00:20:11.994 }, 00:20:11.994 { 00:20:11.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.994 "dma_device_type": 2 00:20:11.994 } 00:20:11.994 ], 00:20:11.994 "driver_specific": {} 00:20:11.994 } 00:20:11.994 ] 00:20:11.994 15:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:11.994 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:11.994 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:11.994 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:11.994 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:11.994 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:11.994 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:11.994 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.994 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.994 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.994 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.994 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.994 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.253 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:12.253 "name": "Existed_Raid", 00:20:12.253 "uuid": "48b19725-cb8c-4cb7-a565-4cf1708cf516", 00:20:12.253 "strip_size_kb": 64, 00:20:12.253 "state": "configuring", 00:20:12.253 "raid_level": "raid0", 00:20:12.253 "superblock": true, 00:20:12.253 "num_base_bdevs": 4, 00:20:12.253 "num_base_bdevs_discovered": 1, 00:20:12.253 "num_base_bdevs_operational": 4, 00:20:12.253 "base_bdevs_list": [ 00:20:12.253 { 00:20:12.253 "name": "BaseBdev1", 00:20:12.253 "uuid": "d6ff91f0-40e0-4952-9e05-3ceee9bf0140", 00:20:12.253 "is_configured": true, 00:20:12.253 "data_offset": 2048, 00:20:12.253 "data_size": 63488 00:20:12.253 }, 00:20:12.253 { 00:20:12.253 "name": "BaseBdev2", 00:20:12.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.253 "is_configured": false, 00:20:12.253 "data_offset": 0, 00:20:12.253 "data_size": 0 00:20:12.253 }, 00:20:12.253 { 00:20:12.253 "name": "BaseBdev3", 00:20:12.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.253 "is_configured": false, 00:20:12.253 "data_offset": 0, 00:20:12.253 "data_size": 0 00:20:12.253 }, 00:20:12.253 { 00:20:12.253 "name": "BaseBdev4", 00:20:12.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.253 "is_configured": false, 00:20:12.253 "data_offset": 0, 00:20:12.253 "data_size": 0 
00:20:12.253 } 00:20:12.253 ] 00:20:12.253 }' 00:20:12.253 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:12.253 15:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.512 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:12.806 [2024-07-23 15:14:07.951120] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:12.806 [2024-07-23 15:14:07.951198] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:20:12.806 15:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:12.806 [2024-07-23 15:14:08.131232] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:12.806 [2024-07-23 15:14:08.133626] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:12.806 [2024-07-23 15:14:08.133691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:12.806 [2024-07-23 15:14:08.133705] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:12.806 [2024-07-23 15:14:08.133718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:12.806 [2024-07-23 15:14:08.133726] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:12.806 [2024-07-23 15:14:08.133739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:12.806 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:12.806 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:12.806 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:12.806 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:12.806 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:12.806 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:12.806 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:12.806 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:12.806 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:12.806 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:12.806 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:12.806 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:12.806 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.806 15:14:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.066 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.066 "name": "Existed_Raid", 00:20:13.066 "uuid": "307135a4-8b4d-48ca-b947-cf9ecd772933", 00:20:13.066 "strip_size_kb": 64, 00:20:13.066 "state": "configuring", 00:20:13.066 "raid_level": "raid0", 00:20:13.066 "superblock": true, 00:20:13.066 "num_base_bdevs": 4, 00:20:13.066 "num_base_bdevs_discovered": 1, 00:20:13.066 "num_base_bdevs_operational": 4, 00:20:13.066 "base_bdevs_list": [ 00:20:13.066 { 00:20:13.066 "name": "BaseBdev1", 00:20:13.066 "uuid": "d6ff91f0-40e0-4952-9e05-3ceee9bf0140", 00:20:13.066 "is_configured": true, 00:20:13.066 "data_offset": 2048, 00:20:13.066 "data_size": 63488 00:20:13.066 }, 00:20:13.066 { 00:20:13.066 "name": "BaseBdev2", 00:20:13.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.066 "is_configured": false, 00:20:13.066 "data_offset": 0, 00:20:13.066 "data_size": 0 00:20:13.066 }, 00:20:13.066 { 00:20:13.066 "name": "BaseBdev3", 00:20:13.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.066 "is_configured": false, 00:20:13.066 "data_offset": 0, 00:20:13.066 "data_size": 0 00:20:13.066 }, 00:20:13.066 { 00:20:13.066 "name": "BaseBdev4", 00:20:13.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.066 "is_configured": false, 00:20:13.066 "data_offset": 0, 00:20:13.066 "data_size": 0 00:20:13.066 } 00:20:13.066 ] 00:20:13.066 }' 00:20:13.066 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.066 15:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.325 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:13.585 [2024-07-23 15:14:08.838498] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:13.585 BaseBdev2 00:20:13.585 15:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:13.585 15:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:13.585 15:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:13.585 15:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:13.585 15:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:13.585 15:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:13.585 15:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:13.844 15:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:14.103 [ 00:20:14.103 { 00:20:14.103 "name": "BaseBdev2", 00:20:14.103 "aliases": [ 00:20:14.103 "f2009ab9-5334-44dd-a393-7961c4cd65e1" 00:20:14.103 ], 00:20:14.103 "product_name": "Malloc disk", 00:20:14.103 "block_size": 512, 00:20:14.103 "num_blocks": 65536, 00:20:14.103 "uuid": "f2009ab9-5334-44dd-a393-7961c4cd65e1", 00:20:14.103 "assigned_rate_limits": { 00:20:14.103 "rw_ios_per_sec": 0, 
00:20:14.103 "rw_mbytes_per_sec": 0, 00:20:14.103 "r_mbytes_per_sec": 0, 00:20:14.103 "w_mbytes_per_sec": 0 00:20:14.103 }, 00:20:14.103 "claimed": true, 00:20:14.103 "claim_type": "exclusive_write", 00:20:14.103 "zoned": false, 00:20:14.103 "supported_io_types": { 00:20:14.103 "read": true, 00:20:14.103 "write": true, 00:20:14.103 "unmap": true, 00:20:14.103 "flush": true, 00:20:14.103 "reset": true, 00:20:14.103 "nvme_admin": false, 00:20:14.103 "nvme_io": false, 00:20:14.103 "nvme_io_md": false, 00:20:14.103 "write_zeroes": true, 00:20:14.103 "zcopy": true, 00:20:14.103 "get_zone_info": false, 00:20:14.103 "zone_management": false, 00:20:14.103 "zone_append": false, 00:20:14.103 "compare": false, 00:20:14.103 "compare_and_write": false, 00:20:14.103 "abort": true, 00:20:14.103 "seek_hole": false, 00:20:14.103 "seek_data": false, 00:20:14.103 "copy": true, 00:20:14.103 "nvme_iov_md": false 00:20:14.103 }, 00:20:14.103 "memory_domains": [ 00:20:14.103 { 00:20:14.103 "dma_device_id": "system", 00:20:14.103 "dma_device_type": 1 00:20:14.103 }, 00:20:14.103 { 00:20:14.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.103 "dma_device_type": 2 00:20:14.103 } 00:20:14.103 ], 00:20:14.103 "driver_specific": {} 00:20:14.103 } 00:20:14.103 ] 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.103 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.362 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.362 "name": "Existed_Raid", 00:20:14.362 "uuid": "307135a4-8b4d-48ca-b947-cf9ecd772933", 00:20:14.362 "strip_size_kb": 64, 00:20:14.362 "state": "configuring", 00:20:14.362 "raid_level": "raid0", 00:20:14.362 "superblock": true, 00:20:14.362 "num_base_bdevs": 4, 00:20:14.362 "num_base_bdevs_discovered": 2, 00:20:14.362 
"num_base_bdevs_operational": 4, 00:20:14.362 "base_bdevs_list": [ 00:20:14.362 { 00:20:14.362 "name": "BaseBdev1", 00:20:14.362 "uuid": "d6ff91f0-40e0-4952-9e05-3ceee9bf0140", 00:20:14.362 "is_configured": true, 00:20:14.362 "data_offset": 2048, 00:20:14.362 "data_size": 63488 00:20:14.362 }, 00:20:14.362 { 00:20:14.362 "name": "BaseBdev2", 00:20:14.362 "uuid": "f2009ab9-5334-44dd-a393-7961c4cd65e1", 00:20:14.362 "is_configured": true, 00:20:14.362 "data_offset": 2048, 00:20:14.362 "data_size": 63488 00:20:14.362 }, 00:20:14.362 { 00:20:14.362 "name": "BaseBdev3", 00:20:14.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.362 "is_configured": false, 00:20:14.362 "data_offset": 0, 00:20:14.362 "data_size": 0 00:20:14.362 }, 00:20:14.362 { 00:20:14.362 "name": "BaseBdev4", 00:20:14.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.362 "is_configured": false, 00:20:14.362 "data_offset": 0, 00:20:14.362 "data_size": 0 00:20:14.362 } 00:20:14.362 ] 00:20:14.362 }' 00:20:14.362 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.362 15:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.621 15:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:14.621 [2024-07-23 15:14:10.038599] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:14.621 BaseBdev3 00:20:14.880 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:14.880 15:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:14.880 15:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:14.880 15:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:14.880 15:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:14.880 15:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:14.880 15:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:14.880 15:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:15.139 [ 00:20:15.139 { 00:20:15.139 "name": "BaseBdev3", 00:20:15.139 "aliases": [ 00:20:15.139 "f03e701e-defb-4e8e-a8f0-01f6b8b04f8a" 00:20:15.139 ], 00:20:15.139 "product_name": "Malloc disk", 00:20:15.139 "block_size": 512, 00:20:15.139 "num_blocks": 65536, 00:20:15.139 "uuid": "f03e701e-defb-4e8e-a8f0-01f6b8b04f8a", 00:20:15.139 "assigned_rate_limits": { 00:20:15.139 "rw_ios_per_sec": 0, 00:20:15.139 "rw_mbytes_per_sec": 0, 00:20:15.139 "r_mbytes_per_sec": 0, 00:20:15.139 "w_mbytes_per_sec": 0 00:20:15.139 }, 00:20:15.139 "claimed": true, 00:20:15.139 "claim_type": "exclusive_write", 00:20:15.139 "zoned": false, 00:20:15.139 "supported_io_types": { 00:20:15.139 "read": true, 00:20:15.139 "write": true, 00:20:15.139 "unmap": true, 00:20:15.139 "flush": true, 00:20:15.139 "reset": true, 00:20:15.139 "nvme_admin": false, 00:20:15.139 "nvme_io": false, 00:20:15.139 "nvme_io_md": false, 00:20:15.139 
"write_zeroes": true, 00:20:15.139 "zcopy": true, 00:20:15.139 "get_zone_info": false, 00:20:15.139 "zone_management": false, 00:20:15.139 "zone_append": false, 00:20:15.139 "compare": false, 00:20:15.139 "compare_and_write": false, 00:20:15.139 "abort": true, 00:20:15.139 "seek_hole": false, 00:20:15.139 "seek_data": false, 00:20:15.139 "copy": true, 00:20:15.139 "nvme_iov_md": false 00:20:15.139 }, 00:20:15.139 "memory_domains": [ 00:20:15.139 { 00:20:15.139 "dma_device_id": "system", 00:20:15.139 "dma_device_type": 1 00:20:15.139 }, 00:20:15.139 { 00:20:15.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.139 "dma_device_type": 2 00:20:15.139 } 00:20:15.139 ], 00:20:15.139 "driver_specific": {} 00:20:15.139 } 00:20:15.139 ] 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.139 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.399 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:15.399 "name": "Existed_Raid", 00:20:15.399 "uuid": "307135a4-8b4d-48ca-b947-cf9ecd772933", 00:20:15.399 "strip_size_kb": 64, 00:20:15.399 "state": "configuring", 00:20:15.399 "raid_level": "raid0", 00:20:15.399 "superblock": true, 00:20:15.399 "num_base_bdevs": 4, 00:20:15.399 "num_base_bdevs_discovered": 3, 00:20:15.399 "num_base_bdevs_operational": 4, 00:20:15.399 "base_bdevs_list": [ 00:20:15.399 { 00:20:15.399 "name": "BaseBdev1", 00:20:15.399 "uuid": "d6ff91f0-40e0-4952-9e05-3ceee9bf0140", 00:20:15.399 "is_configured": true, 00:20:15.399 "data_offset": 2048, 00:20:15.399 "data_size": 63488 00:20:15.399 }, 00:20:15.399 { 00:20:15.399 "name": "BaseBdev2", 00:20:15.399 "uuid": "f2009ab9-5334-44dd-a393-7961c4cd65e1", 00:20:15.399 "is_configured": true, 00:20:15.399 "data_offset": 2048, 00:20:15.399 "data_size": 63488 00:20:15.399 }, 00:20:15.399 { 
00:20:15.399 "name": "BaseBdev3", 00:20:15.399 "uuid": "f03e701e-defb-4e8e-a8f0-01f6b8b04f8a", 00:20:15.399 "is_configured": true, 00:20:15.399 "data_offset": 2048, 00:20:15.399 "data_size": 63488 00:20:15.399 }, 00:20:15.399 { 00:20:15.399 "name": "BaseBdev4", 00:20:15.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.399 "is_configured": false, 00:20:15.399 "data_offset": 0, 00:20:15.399 "data_size": 0 00:20:15.399 } 00:20:15.399 ] 00:20:15.399 }' 00:20:15.399 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:15.399 15:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.659 15:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:15.918 [2024-07-23 15:14:11.250460] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:15.918 [2024-07-23 15:14:11.250956] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:20:15.918 [2024-07-23 15:14:11.250978] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:15.918 [2024-07-23 15:14:11.251101] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:20:15.918 [2024-07-23 15:14:11.251469] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:20:15.918 [2024-07-23 15:14:11.251487] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:20:15.918 [2024-07-23 15:14:11.251600] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.918 BaseBdev4 00:20:15.918 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:20:15.918 15:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:20:15.918 15:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:15.918 15:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:15.918 15:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:15.918 15:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:15.918 15:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:16.178 15:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:16.437 [ 00:20:16.437 { 00:20:16.437 "name": "BaseBdev4", 00:20:16.437 "aliases": [ 00:20:16.437 "a7d6e110-3fac-4380-a53c-491365b02a5a" 00:20:16.437 ], 00:20:16.437 "product_name": "Malloc disk", 00:20:16.437 "block_size": 512, 00:20:16.437 "num_blocks": 65536, 00:20:16.437 "uuid": "a7d6e110-3fac-4380-a53c-491365b02a5a", 00:20:16.437 "assigned_rate_limits": { 00:20:16.437 "rw_ios_per_sec": 0, 00:20:16.437 "rw_mbytes_per_sec": 0, 00:20:16.437 "r_mbytes_per_sec": 0, 00:20:16.437 "w_mbytes_per_sec": 0 00:20:16.437 }, 00:20:16.437 "claimed": true, 00:20:16.437 "claim_type": "exclusive_write", 00:20:16.437 "zoned": false, 00:20:16.437 "supported_io_types": { 
00:20:16.437 "read": true, 00:20:16.437 "write": true, 00:20:16.437 "unmap": true, 00:20:16.437 "flush": true, 00:20:16.437 "reset": true, 00:20:16.437 "nvme_admin": false, 00:20:16.437 "nvme_io": false, 00:20:16.437 "nvme_io_md": false, 00:20:16.437 "write_zeroes": true, 00:20:16.437 "zcopy": true, 00:20:16.437 "get_zone_info": false, 00:20:16.437 "zone_management": false, 00:20:16.437 "zone_append": false, 00:20:16.437 "compare": false, 00:20:16.437 "compare_and_write": false, 00:20:16.437 "abort": true, 00:20:16.438 "seek_hole": false, 00:20:16.438 "seek_data": false, 00:20:16.438 "copy": true, 00:20:16.438 "nvme_iov_md": false 00:20:16.438 }, 00:20:16.438 "memory_domains": [ 00:20:16.438 { 00:20:16.438 "dma_device_id": "system", 00:20:16.438 "dma_device_type": 1 00:20:16.438 }, 00:20:16.438 { 00:20:16.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.438 "dma_device_type": 2 00:20:16.438 } 00:20:16.438 ], 00:20:16.438 "driver_specific": {} 00:20:16.438 } 00:20:16.438 ] 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.438 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.697 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.697 "name": "Existed_Raid", 00:20:16.697 "uuid": "307135a4-8b4d-48ca-b947-cf9ecd772933", 00:20:16.697 "strip_size_kb": 64, 00:20:16.697 "state": "online", 00:20:16.697 "raid_level": "raid0", 00:20:16.697 "superblock": true, 00:20:16.697 "num_base_bdevs": 4, 00:20:16.697 "num_base_bdevs_discovered": 4, 00:20:16.697 "num_base_bdevs_operational": 4, 00:20:16.697 "base_bdevs_list": [ 00:20:16.697 { 00:20:16.697 "name": "BaseBdev1", 00:20:16.697 "uuid": "d6ff91f0-40e0-4952-9e05-3ceee9bf0140", 00:20:16.697 "is_configured": true, 00:20:16.697 "data_offset": 2048, 00:20:16.697 "data_size": 63488 00:20:16.697 }, 00:20:16.697 
{ 00:20:16.697 "name": "BaseBdev2", 00:20:16.697 "uuid": "f2009ab9-5334-44dd-a393-7961c4cd65e1", 00:20:16.697 "is_configured": true, 00:20:16.697 "data_offset": 2048, 00:20:16.697 "data_size": 63488 00:20:16.697 }, 00:20:16.697 { 00:20:16.697 "name": "BaseBdev3", 00:20:16.697 "uuid": "f03e701e-defb-4e8e-a8f0-01f6b8b04f8a", 00:20:16.697 "is_configured": true, 00:20:16.697 "data_offset": 2048, 00:20:16.697 "data_size": 63488 00:20:16.697 }, 00:20:16.697 { 00:20:16.697 "name": "BaseBdev4", 00:20:16.697 "uuid": "a7d6e110-3fac-4380-a53c-491365b02a5a", 00:20:16.697 "is_configured": true, 00:20:16.697 "data_offset": 2048, 00:20:16.697 "data_size": 63488 00:20:16.697 } 00:20:16.697 ] 00:20:16.697 }' 00:20:16.697 15:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.697 15:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.957 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:16.957 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:16.957 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:16.957 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:16.957 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:16.957 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:16.957 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:16.957 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:16.957 [2024-07-23 15:14:12.315120] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:16.957 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:16.957 "name": "Existed_Raid", 00:20:16.957 "aliases": [ 00:20:16.957 "307135a4-8b4d-48ca-b947-cf9ecd772933" 00:20:16.957 ], 00:20:16.957 "product_name": "Raid Volume", 00:20:16.957 "block_size": 512, 00:20:16.957 "num_blocks": 253952, 00:20:16.957 "uuid": "307135a4-8b4d-48ca-b947-cf9ecd772933", 00:20:16.957 "assigned_rate_limits": { 00:20:16.957 "rw_ios_per_sec": 0, 00:20:16.957 "rw_mbytes_per_sec": 0, 00:20:16.957 "r_mbytes_per_sec": 0, 00:20:16.957 "w_mbytes_per_sec": 0 00:20:16.957 }, 00:20:16.957 "claimed": false, 00:20:16.957 "zoned": false, 00:20:16.957 "supported_io_types": { 00:20:16.957 "read": true, 00:20:16.957 "write": true, 00:20:16.957 "unmap": true, 00:20:16.957 "flush": true, 00:20:16.957 "reset": true, 00:20:16.957 "nvme_admin": false, 00:20:16.957 "nvme_io": false, 00:20:16.957 "nvme_io_md": false, 00:20:16.957 "write_zeroes": true, 00:20:16.957 "zcopy": false, 00:20:16.957 "get_zone_info": false, 00:20:16.957 "zone_management": false, 00:20:16.957 "zone_append": false, 00:20:16.957 "compare": false, 00:20:16.957 "compare_and_write": false, 00:20:16.957 "abort": false, 00:20:16.957 "seek_hole": false, 00:20:16.957 "seek_data": false, 00:20:16.957 "copy": false, 00:20:16.957 "nvme_iov_md": false 00:20:16.957 }, 00:20:16.957 "memory_domains": [ 00:20:16.957 { 00:20:16.957 "dma_device_id": "system", 00:20:16.957 "dma_device_type": 1 00:20:16.957 }, 00:20:16.957 { 00:20:16.957 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.957 "dma_device_type": 2 00:20:16.957 }, 00:20:16.957 { 00:20:16.957 "dma_device_id": "system", 00:20:16.957 "dma_device_type": 1 00:20:16.957 }, 00:20:16.957 { 00:20:16.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.957 "dma_device_type": 2 00:20:16.957 }, 00:20:16.957 { 00:20:16.957 "dma_device_id": "system", 00:20:16.957 "dma_device_type": 1 00:20:16.957 }, 00:20:16.957 { 00:20:16.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.957 "dma_device_type": 2 00:20:16.957 }, 00:20:16.957 { 00:20:16.957 "dma_device_id": "system", 00:20:16.957 "dma_device_type": 1 00:20:16.957 }, 00:20:16.957 { 00:20:16.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.957 "dma_device_type": 2 00:20:16.957 } 00:20:16.957 ], 00:20:16.957 "driver_specific": { 00:20:16.957 "raid": { 00:20:16.957 "uuid": "307135a4-8b4d-48ca-b947-cf9ecd772933", 00:20:16.957 "strip_size_kb": 64, 00:20:16.957 "state": "online", 00:20:16.957 "raid_level": "raid0", 00:20:16.957 "superblock": true, 00:20:16.957 "num_base_bdevs": 4, 00:20:16.957 "num_base_bdevs_discovered": 4, 00:20:16.957 "num_base_bdevs_operational": 4, 00:20:16.957 "base_bdevs_list": [ 00:20:16.957 { 00:20:16.957 "name": "BaseBdev1", 00:20:16.957 "uuid": "d6ff91f0-40e0-4952-9e05-3ceee9bf0140", 00:20:16.957 "is_configured": true, 00:20:16.957 "data_offset": 2048, 00:20:16.957 "data_size": 63488 00:20:16.957 }, 00:20:16.957 { 00:20:16.957 "name": "BaseBdev2", 00:20:16.957 "uuid": "f2009ab9-5334-44dd-a393-7961c4cd65e1", 00:20:16.957 "is_configured": true, 00:20:16.957 "data_offset": 2048, 00:20:16.957 "data_size": 63488 00:20:16.957 }, 00:20:16.957 { 00:20:16.957 "name": "BaseBdev3", 00:20:16.957 "uuid": "f03e701e-defb-4e8e-a8f0-01f6b8b04f8a", 00:20:16.957 "is_configured": true, 00:20:16.957 "data_offset": 2048, 00:20:16.957 "data_size": 63488 00:20:16.957 }, 00:20:16.957 { 00:20:16.957 "name": "BaseBdev4", 00:20:16.957 "uuid": "a7d6e110-3fac-4380-a53c-491365b02a5a", 00:20:16.957 "is_configured": true, 00:20:16.957 "data_offset": 2048, 00:20:16.957 "data_size": 63488 00:20:16.957 } 00:20:16.957 ] 00:20:16.957 } 00:20:16.957 } 00:20:16.957 }' 00:20:16.957 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:16.957 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:16.957 BaseBdev2 00:20:16.957 BaseBdev3 00:20:16.957 BaseBdev4' 00:20:16.957 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:16.957 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:16.957 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:17.217 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:17.217 "name": "BaseBdev1", 00:20:17.217 "aliases": [ 00:20:17.217 "d6ff91f0-40e0-4952-9e05-3ceee9bf0140" 00:20:17.217 ], 00:20:17.217 "product_name": "Malloc disk", 00:20:17.217 "block_size": 512, 00:20:17.217 "num_blocks": 65536, 00:20:17.217 "uuid": "d6ff91f0-40e0-4952-9e05-3ceee9bf0140", 00:20:17.217 "assigned_rate_limits": { 00:20:17.217 "rw_ios_per_sec": 0, 00:20:17.217 "rw_mbytes_per_sec": 0, 00:20:17.217 "r_mbytes_per_sec": 0, 00:20:17.217 "w_mbytes_per_sec": 0 00:20:17.217 }, 00:20:17.217 
"claimed": true, 00:20:17.217 "claim_type": "exclusive_write", 00:20:17.217 "zoned": false, 00:20:17.217 "supported_io_types": { 00:20:17.217 "read": true, 00:20:17.217 "write": true, 00:20:17.217 "unmap": true, 00:20:17.217 "flush": true, 00:20:17.217 "reset": true, 00:20:17.217 "nvme_admin": false, 00:20:17.217 "nvme_io": false, 00:20:17.217 "nvme_io_md": false, 00:20:17.217 "write_zeroes": true, 00:20:17.217 "zcopy": true, 00:20:17.217 "get_zone_info": false, 00:20:17.217 "zone_management": false, 00:20:17.217 "zone_append": false, 00:20:17.217 "compare": false, 00:20:17.217 "compare_and_write": false, 00:20:17.217 "abort": true, 00:20:17.217 "seek_hole": false, 00:20:17.217 "seek_data": false, 00:20:17.217 "copy": true, 00:20:17.217 "nvme_iov_md": false 00:20:17.217 }, 00:20:17.217 "memory_domains": [ 00:20:17.217 { 00:20:17.217 "dma_device_id": "system", 00:20:17.217 "dma_device_type": 1 00:20:17.217 }, 00:20:17.217 { 00:20:17.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.217 "dma_device_type": 2 00:20:17.217 } 00:20:17.217 ], 00:20:17.217 "driver_specific": {} 00:20:17.217 }' 00:20:17.217 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:17.217 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:17.217 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:17.217 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:17.477 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:17.477 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:17.477 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:17.477 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:17.477 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:17.477 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:17.477 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:17.477 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:17.477 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:17.477 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:17.477 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:17.735 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:17.735 "name": "BaseBdev2", 00:20:17.735 "aliases": [ 00:20:17.735 "f2009ab9-5334-44dd-a393-7961c4cd65e1" 00:20:17.735 ], 00:20:17.735 "product_name": "Malloc disk", 00:20:17.735 "block_size": 512, 00:20:17.736 "num_blocks": 65536, 00:20:17.736 "uuid": "f2009ab9-5334-44dd-a393-7961c4cd65e1", 00:20:17.736 "assigned_rate_limits": { 00:20:17.736 "rw_ios_per_sec": 0, 00:20:17.736 "rw_mbytes_per_sec": 0, 00:20:17.736 "r_mbytes_per_sec": 0, 00:20:17.736 "w_mbytes_per_sec": 0 00:20:17.736 }, 00:20:17.736 "claimed": true, 00:20:17.736 "claim_type": "exclusive_write", 00:20:17.736 "zoned": false, 00:20:17.736 "supported_io_types": { 00:20:17.736 "read": 
true, 00:20:17.736 "write": true, 00:20:17.736 "unmap": true, 00:20:17.736 "flush": true, 00:20:17.736 "reset": true, 00:20:17.736 "nvme_admin": false, 00:20:17.736 "nvme_io": false, 00:20:17.736 "nvme_io_md": false, 00:20:17.736 "write_zeroes": true, 00:20:17.736 "zcopy": true, 00:20:17.736 "get_zone_info": false, 00:20:17.736 "zone_management": false, 00:20:17.736 "zone_append": false, 00:20:17.736 "compare": false, 00:20:17.736 "compare_and_write": false, 00:20:17.736 "abort": true, 00:20:17.736 "seek_hole": false, 00:20:17.736 "seek_data": false, 00:20:17.736 "copy": true, 00:20:17.736 "nvme_iov_md": false 00:20:17.736 }, 00:20:17.736 "memory_domains": [ 00:20:17.736 { 00:20:17.736 "dma_device_id": "system", 00:20:17.736 "dma_device_type": 1 00:20:17.736 }, 00:20:17.736 { 00:20:17.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.736 "dma_device_type": 2 00:20:17.736 } 00:20:17.736 ], 00:20:17.736 "driver_specific": {} 00:20:17.736 }' 00:20:17.736 15:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:17.736 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:17.736 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:17.736 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:17.736 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:17.736 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:17.736 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:17.736 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:17.736 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:17.736 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:17.736 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:17.736 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:17.736 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:17.736 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:17.736 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:17.994 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:17.994 "name": "BaseBdev3", 00:20:17.994 "aliases": [ 00:20:17.994 "f03e701e-defb-4e8e-a8f0-01f6b8b04f8a" 00:20:17.994 ], 00:20:17.994 "product_name": "Malloc disk", 00:20:17.994 "block_size": 512, 00:20:17.994 "num_blocks": 65536, 00:20:17.994 "uuid": "f03e701e-defb-4e8e-a8f0-01f6b8b04f8a", 00:20:17.994 "assigned_rate_limits": { 00:20:17.994 "rw_ios_per_sec": 0, 00:20:17.994 "rw_mbytes_per_sec": 0, 00:20:17.994 "r_mbytes_per_sec": 0, 00:20:17.994 "w_mbytes_per_sec": 0 00:20:17.994 }, 00:20:17.994 "claimed": true, 00:20:17.994 "claim_type": "exclusive_write", 00:20:17.994 "zoned": false, 00:20:17.994 "supported_io_types": { 00:20:17.994 "read": true, 00:20:17.994 "write": true, 00:20:17.994 "unmap": true, 00:20:17.994 "flush": true, 00:20:17.994 "reset": true, 00:20:17.994 "nvme_admin": false, 
00:20:17.994 "nvme_io": false, 00:20:17.994 "nvme_io_md": false, 00:20:17.994 "write_zeroes": true, 00:20:17.994 "zcopy": true, 00:20:17.994 "get_zone_info": false, 00:20:17.994 "zone_management": false, 00:20:17.994 "zone_append": false, 00:20:17.994 "compare": false, 00:20:17.994 "compare_and_write": false, 00:20:17.994 "abort": true, 00:20:17.994 "seek_hole": false, 00:20:17.994 "seek_data": false, 00:20:17.994 "copy": true, 00:20:17.994 "nvme_iov_md": false 00:20:17.994 }, 00:20:17.994 "memory_domains": [ 00:20:17.994 { 00:20:17.994 "dma_device_id": "system", 00:20:17.994 "dma_device_type": 1 00:20:17.994 }, 00:20:17.994 { 00:20:17.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.994 "dma_device_type": 2 00:20:17.994 } 00:20:17.994 ], 00:20:17.994 "driver_specific": {} 00:20:17.994 }' 00:20:17.994 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:17.994 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:17.994 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:17.994 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:17.994 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.251 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:18.251 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.251 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.251 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:18.252 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.252 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.252 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:18.252 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:18.252 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:18.252 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:18.509 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:18.510 "name": "BaseBdev4", 00:20:18.510 "aliases": [ 00:20:18.510 "a7d6e110-3fac-4380-a53c-491365b02a5a" 00:20:18.510 ], 00:20:18.510 "product_name": "Malloc disk", 00:20:18.510 "block_size": 512, 00:20:18.510 "num_blocks": 65536, 00:20:18.510 "uuid": "a7d6e110-3fac-4380-a53c-491365b02a5a", 00:20:18.510 "assigned_rate_limits": { 00:20:18.510 "rw_ios_per_sec": 0, 00:20:18.510 "rw_mbytes_per_sec": 0, 00:20:18.510 "r_mbytes_per_sec": 0, 00:20:18.510 "w_mbytes_per_sec": 0 00:20:18.510 }, 00:20:18.510 "claimed": true, 00:20:18.510 "claim_type": "exclusive_write", 00:20:18.510 "zoned": false, 00:20:18.510 "supported_io_types": { 00:20:18.510 "read": true, 00:20:18.510 "write": true, 00:20:18.510 "unmap": true, 00:20:18.510 "flush": true, 00:20:18.510 "reset": true, 00:20:18.510 "nvme_admin": false, 00:20:18.510 "nvme_io": false, 00:20:18.510 "nvme_io_md": false, 00:20:18.510 "write_zeroes": true, 00:20:18.510 "zcopy": true, 00:20:18.510 
"get_zone_info": false, 00:20:18.510 "zone_management": false, 00:20:18.510 "zone_append": false, 00:20:18.510 "compare": false, 00:20:18.510 "compare_and_write": false, 00:20:18.510 "abort": true, 00:20:18.510 "seek_hole": false, 00:20:18.510 "seek_data": false, 00:20:18.510 "copy": true, 00:20:18.510 "nvme_iov_md": false 00:20:18.510 }, 00:20:18.510 "memory_domains": [ 00:20:18.510 { 00:20:18.510 "dma_device_id": "system", 00:20:18.510 "dma_device_type": 1 00:20:18.510 }, 00:20:18.510 { 00:20:18.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.510 "dma_device_type": 2 00:20:18.510 } 00:20:18.510 ], 00:20:18.510 "driver_specific": {} 00:20:18.510 }' 00:20:18.510 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.510 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.510 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:18.510 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.510 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.510 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:18.510 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.510 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.510 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:18.510 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.510 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.510 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:18.510 15:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:18.769 [2024-07-23 15:14:14.099252] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:18.769 [2024-07-23 15:14:14.099300] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:18.769 [2024-07-23 15:14:14.099379] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:18.769 15:14:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.769 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.028 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:19.028 "name": "Existed_Raid", 00:20:19.028 "uuid": "307135a4-8b4d-48ca-b947-cf9ecd772933", 00:20:19.028 "strip_size_kb": 64, 00:20:19.028 "state": "offline", 00:20:19.028 "raid_level": "raid0", 00:20:19.028 "superblock": true, 00:20:19.028 "num_base_bdevs": 4, 00:20:19.028 "num_base_bdevs_discovered": 3, 00:20:19.028 "num_base_bdevs_operational": 3, 00:20:19.028 "base_bdevs_list": [ 00:20:19.028 { 00:20:19.028 "name": null, 00:20:19.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.028 "is_configured": false, 00:20:19.028 "data_offset": 2048, 00:20:19.028 "data_size": 63488 00:20:19.028 }, 00:20:19.028 { 00:20:19.028 "name": "BaseBdev2", 00:20:19.028 "uuid": "f2009ab9-5334-44dd-a393-7961c4cd65e1", 00:20:19.028 "is_configured": true, 00:20:19.028 "data_offset": 2048, 00:20:19.028 "data_size": 63488 00:20:19.028 }, 00:20:19.028 { 00:20:19.028 "name": "BaseBdev3", 00:20:19.028 "uuid": "f03e701e-defb-4e8e-a8f0-01f6b8b04f8a", 00:20:19.028 "is_configured": true, 00:20:19.028 "data_offset": 2048, 00:20:19.028 "data_size": 63488 00:20:19.028 }, 00:20:19.028 { 00:20:19.028 "name": "BaseBdev4", 00:20:19.028 "uuid": "a7d6e110-3fac-4380-a53c-491365b02a5a", 00:20:19.028 "is_configured": true, 00:20:19.028 "data_offset": 2048, 00:20:19.028 "data_size": 63488 00:20:19.028 } 00:20:19.028 ] 00:20:19.028 }' 00:20:19.028 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:19.028 15:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.595 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:19.595 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:19.595 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:19.595 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.595 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:19.595 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:19.595 15:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:20:19.854 [2024-07-23 15:14:15.244306] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:19.854 15:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:19.854 15:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:19.854 15:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.854 15:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:20.121 15:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:20.121 15:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:20.121 15:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:20.390 [2024-07-23 15:14:15.620965] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:20.390 15:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:20.390 15:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:20.390 15:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:20.390 15:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.649 15:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:20.649 15:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:20.649 15:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:20.649 [2024-07-23 15:14:16.053466] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:20.649 [2024-07-23 15:14:16.053954] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:20:20.907 15:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:20.907 15:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:20.907 15:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:20.907 15:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.907 15:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:20.907 15:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:20.907 15:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:20:20.907 15:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:20.908 15:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:20.908 15:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:21.166 BaseBdev2 00:20:21.166 15:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:21.166 15:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:21.166 15:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:21.166 15:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:21.166 15:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:21.166 15:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:21.167 15:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:21.425 15:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:21.425 [ 00:20:21.425 { 00:20:21.425 "name": "BaseBdev2", 00:20:21.425 "aliases": [ 00:20:21.425 "557272bc-8482-4dfd-837e-e893f32fefa9" 00:20:21.425 ], 00:20:21.425 "product_name": "Malloc disk", 00:20:21.425 "block_size": 512, 00:20:21.425 "num_blocks": 65536, 00:20:21.425 "uuid": "557272bc-8482-4dfd-837e-e893f32fefa9", 00:20:21.425 "assigned_rate_limits": { 00:20:21.425 "rw_ios_per_sec": 0, 00:20:21.425 "rw_mbytes_per_sec": 0, 00:20:21.425 "r_mbytes_per_sec": 0, 00:20:21.425 "w_mbytes_per_sec": 0 00:20:21.425 }, 00:20:21.425 "claimed": false, 00:20:21.425 "zoned": false, 00:20:21.425 "supported_io_types": { 00:20:21.425 "read": true, 00:20:21.425 "write": true, 00:20:21.425 "unmap": true, 00:20:21.425 "flush": true, 00:20:21.425 "reset": true, 00:20:21.425 "nvme_admin": false, 00:20:21.425 "nvme_io": false, 00:20:21.425 "nvme_io_md": false, 00:20:21.425 "write_zeroes": true, 00:20:21.425 "zcopy": true, 00:20:21.425 "get_zone_info": false, 00:20:21.425 "zone_management": false, 00:20:21.425 "zone_append": false, 00:20:21.425 "compare": false, 00:20:21.425 "compare_and_write": false, 00:20:21.425 "abort": true, 00:20:21.425 "seek_hole": false, 00:20:21.425 "seek_data": false, 00:20:21.425 "copy": true, 00:20:21.425 "nvme_iov_md": false 00:20:21.425 }, 00:20:21.425 "memory_domains": [ 00:20:21.425 { 00:20:21.425 "dma_device_id": "system", 00:20:21.425 "dma_device_type": 1 00:20:21.425 }, 00:20:21.425 { 00:20:21.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.425 "dma_device_type": 2 00:20:21.425 } 00:20:21.425 ], 00:20:21.425 "driver_specific": {} 00:20:21.425 } 00:20:21.425 ] 00:20:21.684 15:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:21.684 15:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:21.684 15:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:21.684 15:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:21.684 BaseBdev3 00:20:21.684 15:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:21.684 15:14:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:21.684 15:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:21.684 15:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:21.684 15:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:21.684 15:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:21.684 15:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:21.942 15:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:22.201 [ 00:20:22.201 { 00:20:22.201 "name": "BaseBdev3", 00:20:22.201 "aliases": [ 00:20:22.201 "1baa2d4f-d548-4692-ae0b-69c1d92a0674" 00:20:22.201 ], 00:20:22.201 "product_name": "Malloc disk", 00:20:22.201 "block_size": 512, 00:20:22.201 "num_blocks": 65536, 00:20:22.201 "uuid": "1baa2d4f-d548-4692-ae0b-69c1d92a0674", 00:20:22.201 "assigned_rate_limits": { 00:20:22.201 "rw_ios_per_sec": 0, 00:20:22.201 "rw_mbytes_per_sec": 0, 00:20:22.201 "r_mbytes_per_sec": 0, 00:20:22.201 "w_mbytes_per_sec": 0 00:20:22.201 }, 00:20:22.201 "claimed": false, 00:20:22.201 "zoned": false, 00:20:22.201 "supported_io_types": { 00:20:22.201 "read": true, 00:20:22.201 "write": true, 00:20:22.201 "unmap": true, 00:20:22.201 "flush": true, 00:20:22.201 "reset": true, 00:20:22.201 "nvme_admin": false, 00:20:22.201 "nvme_io": false, 00:20:22.201 "nvme_io_md": false, 00:20:22.201 "write_zeroes": true, 00:20:22.201 "zcopy": true, 00:20:22.201 "get_zone_info": false, 00:20:22.201 "zone_management": false, 00:20:22.201 "zone_append": false, 00:20:22.201 "compare": false, 00:20:22.201 "compare_and_write": false, 00:20:22.201 "abort": true, 00:20:22.201 "seek_hole": false, 00:20:22.201 "seek_data": false, 00:20:22.201 "copy": true, 00:20:22.202 "nvme_iov_md": false 00:20:22.202 }, 00:20:22.202 "memory_domains": [ 00:20:22.202 { 00:20:22.202 "dma_device_id": "system", 00:20:22.202 "dma_device_type": 1 00:20:22.202 }, 00:20:22.202 { 00:20:22.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.202 "dma_device_type": 2 00:20:22.202 } 00:20:22.202 ], 00:20:22.202 "driver_specific": {} 00:20:22.202 } 00:20:22.202 ] 00:20:22.202 15:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:22.202 15:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:22.202 15:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:22.202 15:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:22.202 BaseBdev4 00:20:22.461 15:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:20:22.461 15:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:20:22.461 15:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:22.461 15:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:22.461 15:14:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:22.461 15:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:22.461 15:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:22.461 15:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:22.720 [ 00:20:22.720 { 00:20:22.720 "name": "BaseBdev4", 00:20:22.720 "aliases": [ 00:20:22.720 "ea23a379-eef8-43d0-8c54-edb9fe6a7e38" 00:20:22.720 ], 00:20:22.720 "product_name": "Malloc disk", 00:20:22.720 "block_size": 512, 00:20:22.720 "num_blocks": 65536, 00:20:22.720 "uuid": "ea23a379-eef8-43d0-8c54-edb9fe6a7e38", 00:20:22.720 "assigned_rate_limits": { 00:20:22.720 "rw_ios_per_sec": 0, 00:20:22.720 "rw_mbytes_per_sec": 0, 00:20:22.720 "r_mbytes_per_sec": 0, 00:20:22.720 "w_mbytes_per_sec": 0 00:20:22.720 }, 00:20:22.720 "claimed": false, 00:20:22.720 "zoned": false, 00:20:22.720 "supported_io_types": { 00:20:22.720 "read": true, 00:20:22.720 "write": true, 00:20:22.720 "unmap": true, 00:20:22.720 "flush": true, 00:20:22.720 "reset": true, 00:20:22.720 "nvme_admin": false, 00:20:22.720 "nvme_io": false, 00:20:22.720 "nvme_io_md": false, 00:20:22.720 "write_zeroes": true, 00:20:22.720 "zcopy": true, 00:20:22.720 "get_zone_info": false, 00:20:22.720 "zone_management": false, 00:20:22.720 "zone_append": false, 00:20:22.720 "compare": false, 00:20:22.720 "compare_and_write": false, 00:20:22.720 "abort": true, 00:20:22.720 "seek_hole": false, 00:20:22.720 "seek_data": false, 00:20:22.720 "copy": true, 00:20:22.720 "nvme_iov_md": false 00:20:22.720 }, 00:20:22.720 "memory_domains": [ 00:20:22.720 { 00:20:22.720 "dma_device_id": "system", 00:20:22.720 "dma_device_type": 1 00:20:22.720 }, 00:20:22.720 { 00:20:22.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.720 "dma_device_type": 2 00:20:22.720 } 00:20:22.720 ], 00:20:22.720 "driver_specific": {} 00:20:22.720 } 00:20:22.720 ] 00:20:22.720 15:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:22.720 15:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:22.720 15:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:22.720 15:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:22.980 [2024-07-23 15:14:18.175766] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:22.980 [2024-07-23 15:14:18.176568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:22.980 [2024-07-23 15:14:18.176630] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:22.980 [2024-07-23 15:14:18.179088] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:22.980 [2024-07-23 15:14:18.179420] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 4 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:22.980 "name": "Existed_Raid", 00:20:22.980 "uuid": "048730eb-015c-4674-a116-b9340de24a7d", 00:20:22.980 "strip_size_kb": 64, 00:20:22.980 "state": "configuring", 00:20:22.980 "raid_level": "raid0", 00:20:22.980 "superblock": true, 00:20:22.980 "num_base_bdevs": 4, 00:20:22.980 "num_base_bdevs_discovered": 3, 00:20:22.980 "num_base_bdevs_operational": 4, 00:20:22.980 "base_bdevs_list": [ 00:20:22.980 { 00:20:22.980 "name": "BaseBdev1", 00:20:22.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.980 "is_configured": false, 00:20:22.980 "data_offset": 0, 00:20:22.980 "data_size": 0 00:20:22.980 }, 00:20:22.980 { 00:20:22.980 "name": "BaseBdev2", 00:20:22.980 "uuid": "557272bc-8482-4dfd-837e-e893f32fefa9", 00:20:22.980 "is_configured": true, 00:20:22.980 "data_offset": 2048, 00:20:22.980 "data_size": 63488 00:20:22.980 }, 00:20:22.980 { 00:20:22.980 "name": "BaseBdev3", 00:20:22.980 "uuid": "1baa2d4f-d548-4692-ae0b-69c1d92a0674", 00:20:22.980 "is_configured": true, 00:20:22.980 "data_offset": 2048, 00:20:22.980 "data_size": 63488 00:20:22.980 }, 00:20:22.980 { 00:20:22.980 "name": "BaseBdev4", 00:20:22.980 "uuid": "ea23a379-eef8-43d0-8c54-edb9fe6a7e38", 00:20:22.980 "is_configured": true, 00:20:22.980 "data_offset": 2048, 00:20:22.980 "data_size": 63488 00:20:22.980 } 00:20:22.980 ] 00:20:22.980 }' 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:22.980 15:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.549 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:23.549 [2024-07-23 15:14:18.875971] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:23.549 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:23.549 15:14:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:23.549 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:23.549 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:23.549 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:23.549 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:23.549 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:23.549 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:23.549 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:23.549 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:23.549 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.549 15:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.808 15:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:23.808 "name": "Existed_Raid", 00:20:23.808 "uuid": "048730eb-015c-4674-a116-b9340de24a7d", 00:20:23.808 "strip_size_kb": 64, 00:20:23.808 "state": "configuring", 00:20:23.808 "raid_level": "raid0", 00:20:23.808 "superblock": true, 00:20:23.808 "num_base_bdevs": 4, 00:20:23.808 "num_base_bdevs_discovered": 2, 00:20:23.808 "num_base_bdevs_operational": 4, 00:20:23.808 "base_bdevs_list": [ 00:20:23.808 { 00:20:23.808 "name": "BaseBdev1", 00:20:23.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.808 "is_configured": false, 00:20:23.808 "data_offset": 0, 00:20:23.808 "data_size": 0 00:20:23.808 }, 00:20:23.808 { 00:20:23.808 "name": null, 00:20:23.808 "uuid": "557272bc-8482-4dfd-837e-e893f32fefa9", 00:20:23.808 "is_configured": false, 00:20:23.808 "data_offset": 2048, 00:20:23.808 "data_size": 63488 00:20:23.808 }, 00:20:23.808 { 00:20:23.808 "name": "BaseBdev3", 00:20:23.808 "uuid": "1baa2d4f-d548-4692-ae0b-69c1d92a0674", 00:20:23.808 "is_configured": true, 00:20:23.808 "data_offset": 2048, 00:20:23.808 "data_size": 63488 00:20:23.808 }, 00:20:23.808 { 00:20:23.808 "name": "BaseBdev4", 00:20:23.808 "uuid": "ea23a379-eef8-43d0-8c54-edb9fe6a7e38", 00:20:23.808 "is_configured": true, 00:20:23.808 "data_offset": 2048, 00:20:23.808 "data_size": 63488 00:20:23.808 } 00:20:23.808 ] 00:20:23.808 }' 00:20:23.808 15:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:23.808 15:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.067 15:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:24.067 15:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.326 15:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:24.326 15:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:24.584 [2024-07-23 15:14:19.971532] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.584 BaseBdev1 00:20:24.584 15:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:24.584 15:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:24.584 15:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:24.584 15:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:24.584 15:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:24.584 15:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:24.584 15:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:24.842 15:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:25.101 [ 00:20:25.101 { 00:20:25.101 "name": "BaseBdev1", 00:20:25.101 "aliases": [ 00:20:25.101 "82b5c69d-642a-4a7e-84a2-154e9534bb5b" 00:20:25.101 ], 00:20:25.101 "product_name": "Malloc disk", 00:20:25.101 "block_size": 512, 00:20:25.101 "num_blocks": 65536, 00:20:25.101 "uuid": "82b5c69d-642a-4a7e-84a2-154e9534bb5b", 00:20:25.101 "assigned_rate_limits": { 00:20:25.101 "rw_ios_per_sec": 0, 00:20:25.101 "rw_mbytes_per_sec": 0, 00:20:25.101 "r_mbytes_per_sec": 0, 00:20:25.101 "w_mbytes_per_sec": 0 00:20:25.101 }, 00:20:25.101 "claimed": true, 00:20:25.101 "claim_type": "exclusive_write", 00:20:25.101 "zoned": false, 00:20:25.101 "supported_io_types": { 00:20:25.101 "read": true, 00:20:25.101 "write": true, 00:20:25.101 "unmap": true, 00:20:25.101 "flush": true, 00:20:25.101 "reset": true, 00:20:25.101 "nvme_admin": false, 00:20:25.101 "nvme_io": false, 00:20:25.101 "nvme_io_md": false, 00:20:25.101 "write_zeroes": true, 00:20:25.101 "zcopy": true, 00:20:25.101 "get_zone_info": false, 00:20:25.101 "zone_management": false, 00:20:25.101 "zone_append": false, 00:20:25.101 "compare": false, 00:20:25.101 "compare_and_write": false, 00:20:25.101 "abort": true, 00:20:25.101 "seek_hole": false, 00:20:25.101 "seek_data": false, 00:20:25.101 "copy": true, 00:20:25.101 "nvme_iov_md": false 00:20:25.101 }, 00:20:25.101 "memory_domains": [ 00:20:25.101 { 00:20:25.101 "dma_device_id": "system", 00:20:25.101 "dma_device_type": 1 00:20:25.101 }, 00:20:25.101 { 00:20:25.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.101 "dma_device_type": 2 00:20:25.101 } 00:20:25.101 ], 00:20:25.101 "driver_specific": {} 00:20:25.101 } 00:20:25.101 ] 00:20:25.101 15:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:25.101 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:25.101 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:25.101 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:25.101 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:20:25.101 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:25.101 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:25.101 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:25.101 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.101 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:25.101 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.101 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.101 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.359 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:25.359 "name": "Existed_Raid", 00:20:25.359 "uuid": "048730eb-015c-4674-a116-b9340de24a7d", 00:20:25.359 "strip_size_kb": 64, 00:20:25.359 "state": "configuring", 00:20:25.359 "raid_level": "raid0", 00:20:25.359 "superblock": true, 00:20:25.359 "num_base_bdevs": 4, 00:20:25.359 "num_base_bdevs_discovered": 3, 00:20:25.359 "num_base_bdevs_operational": 4, 00:20:25.359 "base_bdevs_list": [ 00:20:25.359 { 00:20:25.359 "name": "BaseBdev1", 00:20:25.359 "uuid": "82b5c69d-642a-4a7e-84a2-154e9534bb5b", 00:20:25.359 "is_configured": true, 00:20:25.359 "data_offset": 2048, 00:20:25.359 "data_size": 63488 00:20:25.359 }, 00:20:25.359 { 00:20:25.359 "name": null, 00:20:25.359 "uuid": "557272bc-8482-4dfd-837e-e893f32fefa9", 00:20:25.359 "is_configured": false, 00:20:25.359 "data_offset": 2048, 00:20:25.359 "data_size": 63488 00:20:25.359 }, 00:20:25.359 { 00:20:25.359 "name": "BaseBdev3", 00:20:25.359 "uuid": "1baa2d4f-d548-4692-ae0b-69c1d92a0674", 00:20:25.359 "is_configured": true, 00:20:25.359 "data_offset": 2048, 00:20:25.359 "data_size": 63488 00:20:25.359 }, 00:20:25.359 { 00:20:25.359 "name": "BaseBdev4", 00:20:25.359 "uuid": "ea23a379-eef8-43d0-8c54-edb9fe6a7e38", 00:20:25.359 "is_configured": true, 00:20:25.359 "data_offset": 2048, 00:20:25.359 "data_size": 63488 00:20:25.359 } 00:20:25.359 ] 00:20:25.359 }' 00:20:25.359 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:25.359 15:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.618 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:25.618 15:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.618 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:25.618 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:25.876 [2024-07-23 15:14:21.175922] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:25.876 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:20:25.876 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:25.876 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:25.876 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:25.876 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:25.876 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:25.876 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:25.876 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.876 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:25.876 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.876 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.876 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.135 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.135 "name": "Existed_Raid", 00:20:26.135 "uuid": "048730eb-015c-4674-a116-b9340de24a7d", 00:20:26.135 "strip_size_kb": 64, 00:20:26.135 "state": "configuring", 00:20:26.135 "raid_level": "raid0", 00:20:26.135 "superblock": true, 00:20:26.135 "num_base_bdevs": 4, 00:20:26.135 "num_base_bdevs_discovered": 2, 00:20:26.135 "num_base_bdevs_operational": 4, 00:20:26.135 "base_bdevs_list": [ 00:20:26.135 { 00:20:26.135 "name": "BaseBdev1", 00:20:26.135 "uuid": "82b5c69d-642a-4a7e-84a2-154e9534bb5b", 00:20:26.135 "is_configured": true, 00:20:26.135 "data_offset": 2048, 00:20:26.135 "data_size": 63488 00:20:26.135 }, 00:20:26.135 { 00:20:26.135 "name": null, 00:20:26.135 "uuid": "557272bc-8482-4dfd-837e-e893f32fefa9", 00:20:26.135 "is_configured": false, 00:20:26.135 "data_offset": 2048, 00:20:26.135 "data_size": 63488 00:20:26.135 }, 00:20:26.135 { 00:20:26.135 "name": null, 00:20:26.135 "uuid": "1baa2d4f-d548-4692-ae0b-69c1d92a0674", 00:20:26.135 "is_configured": false, 00:20:26.135 "data_offset": 2048, 00:20:26.135 "data_size": 63488 00:20:26.135 }, 00:20:26.135 { 00:20:26.135 "name": "BaseBdev4", 00:20:26.135 "uuid": "ea23a379-eef8-43d0-8c54-edb9fe6a7e38", 00:20:26.135 "is_configured": true, 00:20:26.135 "data_offset": 2048, 00:20:26.135 "data_size": 63488 00:20:26.135 } 00:20:26.135 ] 00:20:26.135 }' 00:20:26.135 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.135 15:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.393 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.393 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:26.651 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:26.651 15:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:26.651 [2024-07-23 15:14:22.064187] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:26.651 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:26.651 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:26.651 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:26.651 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:26.910 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:26.910 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:26.910 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:26.910 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:26.910 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:26.910 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:26.910 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.910 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.910 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.910 "name": "Existed_Raid", 00:20:26.910 "uuid": "048730eb-015c-4674-a116-b9340de24a7d", 00:20:26.910 "strip_size_kb": 64, 00:20:26.910 "state": "configuring", 00:20:26.910 "raid_level": "raid0", 00:20:26.910 "superblock": true, 00:20:26.910 "num_base_bdevs": 4, 00:20:26.910 "num_base_bdevs_discovered": 3, 00:20:26.911 "num_base_bdevs_operational": 4, 00:20:26.911 "base_bdevs_list": [ 00:20:26.911 { 00:20:26.911 "name": "BaseBdev1", 00:20:26.911 "uuid": "82b5c69d-642a-4a7e-84a2-154e9534bb5b", 00:20:26.911 "is_configured": true, 00:20:26.911 "data_offset": 2048, 00:20:26.911 "data_size": 63488 00:20:26.911 }, 00:20:26.911 { 00:20:26.911 "name": null, 00:20:26.911 "uuid": "557272bc-8482-4dfd-837e-e893f32fefa9", 00:20:26.911 "is_configured": false, 00:20:26.911 "data_offset": 2048, 00:20:26.911 "data_size": 63488 00:20:26.911 }, 00:20:26.911 { 00:20:26.911 "name": "BaseBdev3", 00:20:26.911 "uuid": "1baa2d4f-d548-4692-ae0b-69c1d92a0674", 00:20:26.911 "is_configured": true, 00:20:26.911 "data_offset": 2048, 00:20:26.911 "data_size": 63488 00:20:26.911 }, 00:20:26.911 { 00:20:26.911 "name": "BaseBdev4", 00:20:26.911 "uuid": "ea23a379-eef8-43d0-8c54-edb9fe6a7e38", 00:20:26.911 "is_configured": true, 00:20:26.911 "data_offset": 2048, 00:20:26.911 "data_size": 63488 00:20:26.911 } 00:20:26.911 ] 00:20:26.911 }' 00:20:26.911 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.911 15:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.175 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.175 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:27.444 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:27.444 15:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:27.701 [2024-07-23 15:14:23.012449] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:27.701 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:27.701 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:27.701 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:27.701 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:27.701 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:27.701 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:27.701 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:27.701 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.701 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.701 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.701 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.701 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.959 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:27.959 "name": "Existed_Raid", 00:20:27.959 "uuid": "048730eb-015c-4674-a116-b9340de24a7d", 00:20:27.959 "strip_size_kb": 64, 00:20:27.959 "state": "configuring", 00:20:27.959 "raid_level": "raid0", 00:20:27.959 "superblock": true, 00:20:27.959 "num_base_bdevs": 4, 00:20:27.959 "num_base_bdevs_discovered": 2, 00:20:27.959 "num_base_bdevs_operational": 4, 00:20:27.959 "base_bdevs_list": [ 00:20:27.959 { 00:20:27.959 "name": null, 00:20:27.959 "uuid": "82b5c69d-642a-4a7e-84a2-154e9534bb5b", 00:20:27.959 "is_configured": false, 00:20:27.959 "data_offset": 2048, 00:20:27.959 "data_size": 63488 00:20:27.959 }, 00:20:27.959 { 00:20:27.959 "name": null, 00:20:27.959 "uuid": "557272bc-8482-4dfd-837e-e893f32fefa9", 00:20:27.959 "is_configured": false, 00:20:27.959 "data_offset": 2048, 00:20:27.959 "data_size": 63488 00:20:27.959 }, 00:20:27.959 { 00:20:27.959 "name": "BaseBdev3", 00:20:27.959 "uuid": "1baa2d4f-d548-4692-ae0b-69c1d92a0674", 00:20:27.959 "is_configured": true, 00:20:27.959 "data_offset": 2048, 00:20:27.959 "data_size": 63488 00:20:27.959 }, 00:20:27.959 { 00:20:27.959 "name": "BaseBdev4", 00:20:27.959 "uuid": "ea23a379-eef8-43d0-8c54-edb9fe6a7e38", 00:20:27.959 "is_configured": true, 00:20:27.959 "data_offset": 2048, 00:20:27.959 "data_size": 63488 00:20:27.959 } 00:20:27.959 ] 00:20:27.959 }' 00:20:27.959 
15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:27.959 15:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.217 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.217 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:28.475 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:28.475 15:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:28.734 [2024-07-23 15:14:24.053626] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:28.734 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:28.734 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:28.734 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:28.734 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:28.734 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:28.734 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:28.734 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:28.734 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:28.734 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:28.734 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:28.734 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.734 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.993 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:28.993 "name": "Existed_Raid", 00:20:28.993 "uuid": "048730eb-015c-4674-a116-b9340de24a7d", 00:20:28.993 "strip_size_kb": 64, 00:20:28.993 "state": "configuring", 00:20:28.993 "raid_level": "raid0", 00:20:28.993 "superblock": true, 00:20:28.993 "num_base_bdevs": 4, 00:20:28.993 "num_base_bdevs_discovered": 3, 00:20:28.993 "num_base_bdevs_operational": 4, 00:20:28.993 "base_bdevs_list": [ 00:20:28.993 { 00:20:28.993 "name": null, 00:20:28.993 "uuid": "82b5c69d-642a-4a7e-84a2-154e9534bb5b", 00:20:28.993 "is_configured": false, 00:20:28.993 "data_offset": 2048, 00:20:28.993 "data_size": 63488 00:20:28.993 }, 00:20:28.993 { 00:20:28.993 "name": "BaseBdev2", 00:20:28.993 "uuid": "557272bc-8482-4dfd-837e-e893f32fefa9", 00:20:28.993 "is_configured": true, 00:20:28.993 "data_offset": 2048, 00:20:28.993 "data_size": 63488 00:20:28.993 }, 00:20:28.993 { 00:20:28.993 "name": "BaseBdev3", 00:20:28.993 "uuid": "1baa2d4f-d548-4692-ae0b-69c1d92a0674", 00:20:28.993 
"is_configured": true, 00:20:28.993 "data_offset": 2048, 00:20:28.993 "data_size": 63488 00:20:28.993 }, 00:20:28.993 { 00:20:28.993 "name": "BaseBdev4", 00:20:28.993 "uuid": "ea23a379-eef8-43d0-8c54-edb9fe6a7e38", 00:20:28.993 "is_configured": true, 00:20:28.993 "data_offset": 2048, 00:20:28.993 "data_size": 63488 00:20:28.993 } 00:20:28.993 ] 00:20:28.993 }' 00:20:28.993 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:28.993 15:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.251 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.251 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:29.510 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:29.510 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.510 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:29.510 15:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 82b5c69d-642a-4a7e-84a2-154e9534bb5b 00:20:29.769 [2024-07-23 15:14:25.057074] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:29.769 [2024-07-23 15:14:25.057263] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:20:29.769 [2024-07-23 15:14:25.057283] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:29.769 [2024-07-23 15:14:25.057362] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002600 00:20:29.769 [2024-07-23 15:14:25.057653] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:20:29.769 [2024-07-23 15:14:25.057669] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008180 00:20:29.769 [2024-07-23 15:14:25.057761] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.769 NewBaseBdev 00:20:29.769 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:29.769 15:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:29.769 15:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:29.769 15:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:29.769 15:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:29.769 15:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:29.769 15:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:30.028 15:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 
-b NewBaseBdev -t 2000 00:20:30.028 [ 00:20:30.028 { 00:20:30.028 "name": "NewBaseBdev", 00:20:30.028 "aliases": [ 00:20:30.028 "82b5c69d-642a-4a7e-84a2-154e9534bb5b" 00:20:30.028 ], 00:20:30.028 "product_name": "Malloc disk", 00:20:30.028 "block_size": 512, 00:20:30.028 "num_blocks": 65536, 00:20:30.028 "uuid": "82b5c69d-642a-4a7e-84a2-154e9534bb5b", 00:20:30.028 "assigned_rate_limits": { 00:20:30.028 "rw_ios_per_sec": 0, 00:20:30.028 "rw_mbytes_per_sec": 0, 00:20:30.028 "r_mbytes_per_sec": 0, 00:20:30.028 "w_mbytes_per_sec": 0 00:20:30.028 }, 00:20:30.028 "claimed": true, 00:20:30.028 "claim_type": "exclusive_write", 00:20:30.028 "zoned": false, 00:20:30.028 "supported_io_types": { 00:20:30.028 "read": true, 00:20:30.028 "write": true, 00:20:30.028 "unmap": true, 00:20:30.028 "flush": true, 00:20:30.028 "reset": true, 00:20:30.028 "nvme_admin": false, 00:20:30.028 "nvme_io": false, 00:20:30.028 "nvme_io_md": false, 00:20:30.028 "write_zeroes": true, 00:20:30.028 "zcopy": true, 00:20:30.028 "get_zone_info": false, 00:20:30.028 "zone_management": false, 00:20:30.028 "zone_append": false, 00:20:30.028 "compare": false, 00:20:30.028 "compare_and_write": false, 00:20:30.028 "abort": true, 00:20:30.028 "seek_hole": false, 00:20:30.028 "seek_data": false, 00:20:30.028 "copy": true, 00:20:30.028 "nvme_iov_md": false 00:20:30.028 }, 00:20:30.028 "memory_domains": [ 00:20:30.028 { 00:20:30.028 "dma_device_id": "system", 00:20:30.028 "dma_device_type": 1 00:20:30.028 }, 00:20:30.028 { 00:20:30.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.028 "dma_device_type": 2 00:20:30.028 } 00:20:30.028 ], 00:20:30.028 "driver_specific": {} 00:20:30.028 } 00:20:30.028 ] 00:20:30.028 15:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:30.028 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:30.028 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:30.028 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:30.028 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:30.028 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:30.028 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:30.028 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.028 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.028 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.028 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.028 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.028 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.287 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.287 "name": "Existed_Raid", 00:20:30.287 "uuid": "048730eb-015c-4674-a116-b9340de24a7d", 00:20:30.287 "strip_size_kb": 64, 00:20:30.287 "state": 
"online", 00:20:30.287 "raid_level": "raid0", 00:20:30.287 "superblock": true, 00:20:30.287 "num_base_bdevs": 4, 00:20:30.287 "num_base_bdevs_discovered": 4, 00:20:30.287 "num_base_bdevs_operational": 4, 00:20:30.287 "base_bdevs_list": [ 00:20:30.287 { 00:20:30.287 "name": "NewBaseBdev", 00:20:30.287 "uuid": "82b5c69d-642a-4a7e-84a2-154e9534bb5b", 00:20:30.287 "is_configured": true, 00:20:30.287 "data_offset": 2048, 00:20:30.287 "data_size": 63488 00:20:30.287 }, 00:20:30.287 { 00:20:30.287 "name": "BaseBdev2", 00:20:30.287 "uuid": "557272bc-8482-4dfd-837e-e893f32fefa9", 00:20:30.287 "is_configured": true, 00:20:30.287 "data_offset": 2048, 00:20:30.287 "data_size": 63488 00:20:30.287 }, 00:20:30.287 { 00:20:30.287 "name": "BaseBdev3", 00:20:30.287 "uuid": "1baa2d4f-d548-4692-ae0b-69c1d92a0674", 00:20:30.287 "is_configured": true, 00:20:30.287 "data_offset": 2048, 00:20:30.287 "data_size": 63488 00:20:30.287 }, 00:20:30.287 { 00:20:30.287 "name": "BaseBdev4", 00:20:30.287 "uuid": "ea23a379-eef8-43d0-8c54-edb9fe6a7e38", 00:20:30.287 "is_configured": true, 00:20:30.287 "data_offset": 2048, 00:20:30.287 "data_size": 63488 00:20:30.287 } 00:20:30.287 ] 00:20:30.287 }' 00:20:30.287 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.287 15:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.546 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:30.546 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:30.546 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:30.546 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:30.546 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:30.546 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:30.546 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:30.546 15:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:30.804 [2024-07-23 15:14:26.117776] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:30.804 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:30.804 "name": "Existed_Raid", 00:20:30.804 "aliases": [ 00:20:30.804 "048730eb-015c-4674-a116-b9340de24a7d" 00:20:30.804 ], 00:20:30.804 "product_name": "Raid Volume", 00:20:30.804 "block_size": 512, 00:20:30.804 "num_blocks": 253952, 00:20:30.804 "uuid": "048730eb-015c-4674-a116-b9340de24a7d", 00:20:30.804 "assigned_rate_limits": { 00:20:30.804 "rw_ios_per_sec": 0, 00:20:30.804 "rw_mbytes_per_sec": 0, 00:20:30.804 "r_mbytes_per_sec": 0, 00:20:30.804 "w_mbytes_per_sec": 0 00:20:30.804 }, 00:20:30.804 "claimed": false, 00:20:30.804 "zoned": false, 00:20:30.804 "supported_io_types": { 00:20:30.804 "read": true, 00:20:30.804 "write": true, 00:20:30.804 "unmap": true, 00:20:30.804 "flush": true, 00:20:30.804 "reset": true, 00:20:30.804 "nvme_admin": false, 00:20:30.804 "nvme_io": false, 00:20:30.804 "nvme_io_md": false, 00:20:30.804 "write_zeroes": true, 00:20:30.804 "zcopy": false, 00:20:30.804 "get_zone_info": false, 00:20:30.804 
"zone_management": false, 00:20:30.804 "zone_append": false, 00:20:30.804 "compare": false, 00:20:30.804 "compare_and_write": false, 00:20:30.804 "abort": false, 00:20:30.804 "seek_hole": false, 00:20:30.804 "seek_data": false, 00:20:30.804 "copy": false, 00:20:30.804 "nvme_iov_md": false 00:20:30.804 }, 00:20:30.804 "memory_domains": [ 00:20:30.804 { 00:20:30.804 "dma_device_id": "system", 00:20:30.804 "dma_device_type": 1 00:20:30.804 }, 00:20:30.804 { 00:20:30.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.804 "dma_device_type": 2 00:20:30.804 }, 00:20:30.804 { 00:20:30.804 "dma_device_id": "system", 00:20:30.804 "dma_device_type": 1 00:20:30.804 }, 00:20:30.804 { 00:20:30.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.804 "dma_device_type": 2 00:20:30.804 }, 00:20:30.804 { 00:20:30.804 "dma_device_id": "system", 00:20:30.804 "dma_device_type": 1 00:20:30.804 }, 00:20:30.804 { 00:20:30.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.804 "dma_device_type": 2 00:20:30.804 }, 00:20:30.804 { 00:20:30.804 "dma_device_id": "system", 00:20:30.804 "dma_device_type": 1 00:20:30.804 }, 00:20:30.804 { 00:20:30.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.804 "dma_device_type": 2 00:20:30.804 } 00:20:30.804 ], 00:20:30.804 "driver_specific": { 00:20:30.804 "raid": { 00:20:30.804 "uuid": "048730eb-015c-4674-a116-b9340de24a7d", 00:20:30.804 "strip_size_kb": 64, 00:20:30.804 "state": "online", 00:20:30.804 "raid_level": "raid0", 00:20:30.804 "superblock": true, 00:20:30.804 "num_base_bdevs": 4, 00:20:30.804 "num_base_bdevs_discovered": 4, 00:20:30.804 "num_base_bdevs_operational": 4, 00:20:30.804 "base_bdevs_list": [ 00:20:30.804 { 00:20:30.804 "name": "NewBaseBdev", 00:20:30.804 "uuid": "82b5c69d-642a-4a7e-84a2-154e9534bb5b", 00:20:30.804 "is_configured": true, 00:20:30.804 "data_offset": 2048, 00:20:30.804 "data_size": 63488 00:20:30.804 }, 00:20:30.804 { 00:20:30.804 "name": "BaseBdev2", 00:20:30.804 "uuid": "557272bc-8482-4dfd-837e-e893f32fefa9", 00:20:30.804 "is_configured": true, 00:20:30.804 "data_offset": 2048, 00:20:30.804 "data_size": 63488 00:20:30.804 }, 00:20:30.804 { 00:20:30.804 "name": "BaseBdev3", 00:20:30.804 "uuid": "1baa2d4f-d548-4692-ae0b-69c1d92a0674", 00:20:30.804 "is_configured": true, 00:20:30.804 "data_offset": 2048, 00:20:30.804 "data_size": 63488 00:20:30.804 }, 00:20:30.804 { 00:20:30.804 "name": "BaseBdev4", 00:20:30.804 "uuid": "ea23a379-eef8-43d0-8c54-edb9fe6a7e38", 00:20:30.804 "is_configured": true, 00:20:30.804 "data_offset": 2048, 00:20:30.804 "data_size": 63488 00:20:30.804 } 00:20:30.804 ] 00:20:30.804 } 00:20:30.804 } 00:20:30.804 }' 00:20:30.804 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:30.804 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:30.804 BaseBdev2 00:20:30.804 BaseBdev3 00:20:30.804 BaseBdev4' 00:20:30.804 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:30.804 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:30.804 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:31.063 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:31.063 "name": 
"NewBaseBdev", 00:20:31.063 "aliases": [ 00:20:31.063 "82b5c69d-642a-4a7e-84a2-154e9534bb5b" 00:20:31.063 ], 00:20:31.063 "product_name": "Malloc disk", 00:20:31.063 "block_size": 512, 00:20:31.063 "num_blocks": 65536, 00:20:31.063 "uuid": "82b5c69d-642a-4a7e-84a2-154e9534bb5b", 00:20:31.063 "assigned_rate_limits": { 00:20:31.063 "rw_ios_per_sec": 0, 00:20:31.063 "rw_mbytes_per_sec": 0, 00:20:31.063 "r_mbytes_per_sec": 0, 00:20:31.063 "w_mbytes_per_sec": 0 00:20:31.063 }, 00:20:31.063 "claimed": true, 00:20:31.063 "claim_type": "exclusive_write", 00:20:31.063 "zoned": false, 00:20:31.063 "supported_io_types": { 00:20:31.063 "read": true, 00:20:31.063 "write": true, 00:20:31.063 "unmap": true, 00:20:31.063 "flush": true, 00:20:31.063 "reset": true, 00:20:31.063 "nvme_admin": false, 00:20:31.063 "nvme_io": false, 00:20:31.063 "nvme_io_md": false, 00:20:31.063 "write_zeroes": true, 00:20:31.063 "zcopy": true, 00:20:31.063 "get_zone_info": false, 00:20:31.063 "zone_management": false, 00:20:31.063 "zone_append": false, 00:20:31.063 "compare": false, 00:20:31.063 "compare_and_write": false, 00:20:31.063 "abort": true, 00:20:31.063 "seek_hole": false, 00:20:31.063 "seek_data": false, 00:20:31.063 "copy": true, 00:20:31.063 "nvme_iov_md": false 00:20:31.063 }, 00:20:31.063 "memory_domains": [ 00:20:31.063 { 00:20:31.063 "dma_device_id": "system", 00:20:31.063 "dma_device_type": 1 00:20:31.063 }, 00:20:31.063 { 00:20:31.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.063 "dma_device_type": 2 00:20:31.063 } 00:20:31.063 ], 00:20:31.063 "driver_specific": {} 00:20:31.063 }' 00:20:31.063 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.063 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.063 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:31.063 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.063 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.063 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:31.063 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.063 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.063 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:31.063 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.321 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.321 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:31.321 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:31.321 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:31.321 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:31.579 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:31.579 "name": "BaseBdev2", 00:20:31.579 "aliases": [ 00:20:31.579 "557272bc-8482-4dfd-837e-e893f32fefa9" 00:20:31.579 ], 00:20:31.579 "product_name": "Malloc disk", 
00:20:31.579 "block_size": 512, 00:20:31.579 "num_blocks": 65536, 00:20:31.579 "uuid": "557272bc-8482-4dfd-837e-e893f32fefa9", 00:20:31.579 "assigned_rate_limits": { 00:20:31.579 "rw_ios_per_sec": 0, 00:20:31.579 "rw_mbytes_per_sec": 0, 00:20:31.579 "r_mbytes_per_sec": 0, 00:20:31.579 "w_mbytes_per_sec": 0 00:20:31.579 }, 00:20:31.579 "claimed": true, 00:20:31.579 "claim_type": "exclusive_write", 00:20:31.579 "zoned": false, 00:20:31.579 "supported_io_types": { 00:20:31.579 "read": true, 00:20:31.579 "write": true, 00:20:31.579 "unmap": true, 00:20:31.579 "flush": true, 00:20:31.579 "reset": true, 00:20:31.579 "nvme_admin": false, 00:20:31.579 "nvme_io": false, 00:20:31.579 "nvme_io_md": false, 00:20:31.579 "write_zeroes": true, 00:20:31.579 "zcopy": true, 00:20:31.579 "get_zone_info": false, 00:20:31.579 "zone_management": false, 00:20:31.579 "zone_append": false, 00:20:31.579 "compare": false, 00:20:31.579 "compare_and_write": false, 00:20:31.579 "abort": true, 00:20:31.579 "seek_hole": false, 00:20:31.579 "seek_data": false, 00:20:31.579 "copy": true, 00:20:31.579 "nvme_iov_md": false 00:20:31.579 }, 00:20:31.579 "memory_domains": [ 00:20:31.579 { 00:20:31.579 "dma_device_id": "system", 00:20:31.579 "dma_device_type": 1 00:20:31.579 }, 00:20:31.579 { 00:20:31.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.579 "dma_device_type": 2 00:20:31.579 } 00:20:31.579 ], 00:20:31.579 "driver_specific": {} 00:20:31.579 }' 00:20:31.579 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.579 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.579 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:31.579 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.579 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.579 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:31.579 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.579 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.580 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:31.580 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.580 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.580 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:31.580 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:31.580 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:31.580 15:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:31.837 "name": "BaseBdev3", 00:20:31.837 "aliases": [ 00:20:31.837 "1baa2d4f-d548-4692-ae0b-69c1d92a0674" 00:20:31.837 ], 00:20:31.837 "product_name": "Malloc disk", 00:20:31.837 "block_size": 512, 00:20:31.837 "num_blocks": 65536, 00:20:31.837 "uuid": "1baa2d4f-d548-4692-ae0b-69c1d92a0674", 00:20:31.837 
"assigned_rate_limits": { 00:20:31.837 "rw_ios_per_sec": 0, 00:20:31.837 "rw_mbytes_per_sec": 0, 00:20:31.837 "r_mbytes_per_sec": 0, 00:20:31.837 "w_mbytes_per_sec": 0 00:20:31.837 }, 00:20:31.837 "claimed": true, 00:20:31.837 "claim_type": "exclusive_write", 00:20:31.837 "zoned": false, 00:20:31.837 "supported_io_types": { 00:20:31.837 "read": true, 00:20:31.837 "write": true, 00:20:31.837 "unmap": true, 00:20:31.837 "flush": true, 00:20:31.837 "reset": true, 00:20:31.837 "nvme_admin": false, 00:20:31.837 "nvme_io": false, 00:20:31.837 "nvme_io_md": false, 00:20:31.837 "write_zeroes": true, 00:20:31.837 "zcopy": true, 00:20:31.837 "get_zone_info": false, 00:20:31.837 "zone_management": false, 00:20:31.837 "zone_append": false, 00:20:31.837 "compare": false, 00:20:31.837 "compare_and_write": false, 00:20:31.837 "abort": true, 00:20:31.837 "seek_hole": false, 00:20:31.837 "seek_data": false, 00:20:31.837 "copy": true, 00:20:31.837 "nvme_iov_md": false 00:20:31.837 }, 00:20:31.837 "memory_domains": [ 00:20:31.837 { 00:20:31.837 "dma_device_id": "system", 00:20:31.837 "dma_device_type": 1 00:20:31.837 }, 00:20:31.837 { 00:20:31.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.837 "dma_device_type": 2 00:20:31.837 } 00:20:31.837 ], 00:20:31.837 "driver_specific": {} 00:20:31.837 }' 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:31.837 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:32.096 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:32.096 "name": "BaseBdev4", 00:20:32.096 "aliases": [ 00:20:32.096 "ea23a379-eef8-43d0-8c54-edb9fe6a7e38" 00:20:32.096 ], 00:20:32.096 "product_name": "Malloc disk", 00:20:32.096 "block_size": 512, 00:20:32.096 "num_blocks": 65536, 00:20:32.096 "uuid": "ea23a379-eef8-43d0-8c54-edb9fe6a7e38", 00:20:32.096 "assigned_rate_limits": { 00:20:32.096 "rw_ios_per_sec": 0, 00:20:32.096 "rw_mbytes_per_sec": 0, 00:20:32.096 "r_mbytes_per_sec": 0, 00:20:32.096 
"w_mbytes_per_sec": 0 00:20:32.096 }, 00:20:32.096 "claimed": true, 00:20:32.096 "claim_type": "exclusive_write", 00:20:32.096 "zoned": false, 00:20:32.096 "supported_io_types": { 00:20:32.096 "read": true, 00:20:32.096 "write": true, 00:20:32.096 "unmap": true, 00:20:32.096 "flush": true, 00:20:32.096 "reset": true, 00:20:32.096 "nvme_admin": false, 00:20:32.096 "nvme_io": false, 00:20:32.096 "nvme_io_md": false, 00:20:32.096 "write_zeroes": true, 00:20:32.096 "zcopy": true, 00:20:32.096 "get_zone_info": false, 00:20:32.096 "zone_management": false, 00:20:32.096 "zone_append": false, 00:20:32.096 "compare": false, 00:20:32.096 "compare_and_write": false, 00:20:32.096 "abort": true, 00:20:32.096 "seek_hole": false, 00:20:32.096 "seek_data": false, 00:20:32.096 "copy": true, 00:20:32.096 "nvme_iov_md": false 00:20:32.096 }, 00:20:32.096 "memory_domains": [ 00:20:32.096 { 00:20:32.096 "dma_device_id": "system", 00:20:32.096 "dma_device_type": 1 00:20:32.096 }, 00:20:32.096 { 00:20:32.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.096 "dma_device_type": 2 00:20:32.096 } 00:20:32.096 ], 00:20:32.096 "driver_specific": {} 00:20:32.096 }' 00:20:32.096 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.354 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.354 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:32.354 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.354 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.354 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:32.354 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.354 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.354 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:32.354 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.354 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.354 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:32.354 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:32.613 [2024-07-23 15:14:27.857772] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:32.613 [2024-07-23 15:14:27.857829] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:32.613 [2024-07-23 15:14:27.857918] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.613 [2024-07-23 15:14:27.857986] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:32.613 [2024-07-23 15:14:27.858006] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name Existed_Raid, state offline 00:20:32.613 15:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 99820 00:20:32.613 15:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 99820 ']' 00:20:32.613 15:14:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 99820 00:20:32.613 15:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:20:32.613 15:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.613 15:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99820 00:20:32.613 15:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:32.613 15:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:32.613 killing process with pid 99820 00:20:32.613 15:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99820' 00:20:32.613 15:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 99820 00:20:32.613 [2024-07-23 15:14:27.921869] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:32.613 15:14:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 99820 00:20:32.613 [2024-07-23 15:14:27.968290] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:32.872 ************************************ 00:20:32.872 END TEST raid_state_function_test_sb 00:20:32.872 ************************************ 00:20:32.872 15:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:32.872 00:20:32.872 real 0m23.703s 00:20:32.872 user 0m41.338s 00:20:32.872 sys 0m5.141s 00:20:32.872 15:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:32.872 15:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.872 15:14:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:32.872 15:14:28 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:20:32.872 15:14:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:32.872 15:14:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:32.872 15:14:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.872 ************************************ 00:20:32.872 START TEST raid_superblock_test 00:20:32.872 ************************************ 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:20:32.872 15:14:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=100780 00:20:32.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 100780 /var/tmp/spdk-raid.sock 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 100780 ']' 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:32.872 15:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.130 [2024-07-23 15:14:28.369572] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
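Condensed, the RPC sequence raid_superblock_test drives against this freshly started bdev_svc is sketched below. Every command is taken from the trace that follows (socket path, malloc sizing, passthru UUIDs, raid parameters); only the loop is shorthand, so treat it as an outline rather than the literal test script:

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # four 32 MiB malloc bdevs with 512-byte blocks (65536 blocks each),
    # each wrapped in a passthru bdev created with a fixed UUID
    for i in 1 2 3 4; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

    # assemble them into raid0 with a 64 KiB strip and an on-disk superblock (-s)
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

    # the test then checks that the array came up online with all four base bdevs
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'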
00:20:33.131 [2024-07-23 15:14:28.369892] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100780 ] 00:20:33.131 [2024-07-23 15:14:28.530338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.390 [2024-07-23 15:14:28.578002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.390 [2024-07-23 15:14:28.622403] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:33.390 15:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.390 15:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:20:33.390 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:20:33.390 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:33.390 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:20:33.390 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:20:33.390 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:33.390 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:33.390 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:33.390 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:33.390 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:33.648 malloc1 00:20:33.648 15:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:33.908 [2024-07-23 15:14:29.181124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:33.908 [2024-07-23 15:14:29.181418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.908 [2024-07-23 15:14:29.181456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:20:33.908 [2024-07-23 15:14:29.181482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.908 [2024-07-23 15:14:29.183995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.908 [2024-07-23 15:14:29.184047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:33.908 pt1 00:20:33.908 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:33.908 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:33.908 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:20:33.908 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:20:33.908 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:33.908 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:20:33.908 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:33.908 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:33.908 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:34.197 malloc2 00:20:34.197 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:34.197 [2024-07-23 15:14:29.607076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:34.197 [2024-07-23 15:14:29.607369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.197 [2024-07-23 15:14:29.607428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:20:34.197 [2024-07-23 15:14:29.607589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.197 [2024-07-23 15:14:29.610106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.197 [2024-07-23 15:14:29.610261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:34.197 pt2 00:20:34.197 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:34.197 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:34.197 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:20:34.197 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:20:34.197 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:34.197 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:34.197 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:34.197 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:34.455 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:34.455 malloc3 00:20:34.455 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:34.714 [2024-07-23 15:14:29.978412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:34.714 [2024-07-23 15:14:29.978496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.714 [2024-07-23 15:14:29.978521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:20:34.714 [2024-07-23 15:14:29.978536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.714 [2024-07-23 15:14:29.980985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.714 [2024-07-23 15:14:29.981031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:34.714 pt3 00:20:34.714 
15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:34.714 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:34.714 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:20:34.714 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:20:34.714 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:34.714 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:34.714 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:34.714 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:34.714 15:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:34.972 malloc4 00:20:34.972 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:34.972 [2024-07-23 15:14:30.355763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:34.972 [2024-07-23 15:14:30.356012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.972 [2024-07-23 15:14:30.356072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:20:34.972 [2024-07-23 15:14:30.356150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.972 [2024-07-23 15:14:30.358622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.972 [2024-07-23 15:14:30.358768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:34.972 pt4 00:20:34.973 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:34.973 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:34.973 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:35.231 [2024-07-23 15:14:30.535877] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:35.231 [2024-07-23 15:14:30.538135] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:35.231 [2024-07-23 15:14:30.538197] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:35.231 [2024-07-23 15:14:30.538250] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:35.231 [2024-07-23 15:14:30.538431] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008480 00:20:35.231 [2024-07-23 15:14:30.538449] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:35.231 [2024-07-23 15:14:30.538571] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:20:35.231 [2024-07-23 15:14:30.538935] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008480 00:20:35.231 [2024-07-23 15:14:30.538953] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008480 00:20:35.231 [2024-07-23 15:14:30.539083] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.231 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:35.231 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:35.231 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:35.231 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:35.231 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:35.231 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:35.231 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:35.231 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:35.231 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:35.231 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:35.231 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.231 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.489 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:35.489 "name": "raid_bdev1", 00:20:35.489 "uuid": "4c35c99f-225f-427e-91ef-7124c8db54cc", 00:20:35.489 "strip_size_kb": 64, 00:20:35.489 "state": "online", 00:20:35.489 "raid_level": "raid0", 00:20:35.489 "superblock": true, 00:20:35.489 "num_base_bdevs": 4, 00:20:35.489 "num_base_bdevs_discovered": 4, 00:20:35.489 "num_base_bdevs_operational": 4, 00:20:35.489 "base_bdevs_list": [ 00:20:35.489 { 00:20:35.489 "name": "pt1", 00:20:35.489 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:35.489 "is_configured": true, 00:20:35.489 "data_offset": 2048, 00:20:35.489 "data_size": 63488 00:20:35.489 }, 00:20:35.489 { 00:20:35.489 "name": "pt2", 00:20:35.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:35.489 "is_configured": true, 00:20:35.489 "data_offset": 2048, 00:20:35.489 "data_size": 63488 00:20:35.489 }, 00:20:35.489 { 00:20:35.489 "name": "pt3", 00:20:35.489 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:35.489 "is_configured": true, 00:20:35.489 "data_offset": 2048, 00:20:35.489 "data_size": 63488 00:20:35.489 }, 00:20:35.489 { 00:20:35.489 "name": "pt4", 00:20:35.489 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:35.489 "is_configured": true, 00:20:35.489 "data_offset": 2048, 00:20:35.489 "data_size": 63488 00:20:35.489 } 00:20:35.489 ] 00:20:35.489 }' 00:20:35.489 15:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:35.489 15:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.747 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:20:35.747 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:35.747 15:14:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:35.747 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:35.747 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:35.747 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:35.747 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:35.747 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:36.005 [2024-07-23 15:14:31.256224] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:36.005 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:36.005 "name": "raid_bdev1", 00:20:36.005 "aliases": [ 00:20:36.005 "4c35c99f-225f-427e-91ef-7124c8db54cc" 00:20:36.005 ], 00:20:36.005 "product_name": "Raid Volume", 00:20:36.005 "block_size": 512, 00:20:36.005 "num_blocks": 253952, 00:20:36.005 "uuid": "4c35c99f-225f-427e-91ef-7124c8db54cc", 00:20:36.005 "assigned_rate_limits": { 00:20:36.005 "rw_ios_per_sec": 0, 00:20:36.005 "rw_mbytes_per_sec": 0, 00:20:36.005 "r_mbytes_per_sec": 0, 00:20:36.005 "w_mbytes_per_sec": 0 00:20:36.005 }, 00:20:36.005 "claimed": false, 00:20:36.005 "zoned": false, 00:20:36.005 "supported_io_types": { 00:20:36.005 "read": true, 00:20:36.005 "write": true, 00:20:36.005 "unmap": true, 00:20:36.005 "flush": true, 00:20:36.005 "reset": true, 00:20:36.005 "nvme_admin": false, 00:20:36.005 "nvme_io": false, 00:20:36.005 "nvme_io_md": false, 00:20:36.005 "write_zeroes": true, 00:20:36.005 "zcopy": false, 00:20:36.005 "get_zone_info": false, 00:20:36.005 "zone_management": false, 00:20:36.005 "zone_append": false, 00:20:36.005 "compare": false, 00:20:36.005 "compare_and_write": false, 00:20:36.005 "abort": false, 00:20:36.005 "seek_hole": false, 00:20:36.005 "seek_data": false, 00:20:36.005 "copy": false, 00:20:36.005 "nvme_iov_md": false 00:20:36.005 }, 00:20:36.005 "memory_domains": [ 00:20:36.005 { 00:20:36.005 "dma_device_id": "system", 00:20:36.005 "dma_device_type": 1 00:20:36.005 }, 00:20:36.005 { 00:20:36.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.005 "dma_device_type": 2 00:20:36.005 }, 00:20:36.005 { 00:20:36.006 "dma_device_id": "system", 00:20:36.006 "dma_device_type": 1 00:20:36.006 }, 00:20:36.006 { 00:20:36.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.006 "dma_device_type": 2 00:20:36.006 }, 00:20:36.006 { 00:20:36.006 "dma_device_id": "system", 00:20:36.006 "dma_device_type": 1 00:20:36.006 }, 00:20:36.006 { 00:20:36.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.006 "dma_device_type": 2 00:20:36.006 }, 00:20:36.006 { 00:20:36.006 "dma_device_id": "system", 00:20:36.006 "dma_device_type": 1 00:20:36.006 }, 00:20:36.006 { 00:20:36.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.006 "dma_device_type": 2 00:20:36.006 } 00:20:36.006 ], 00:20:36.006 "driver_specific": { 00:20:36.006 "raid": { 00:20:36.006 "uuid": "4c35c99f-225f-427e-91ef-7124c8db54cc", 00:20:36.006 "strip_size_kb": 64, 00:20:36.006 "state": "online", 00:20:36.006 "raid_level": "raid0", 00:20:36.006 "superblock": true, 00:20:36.006 "num_base_bdevs": 4, 00:20:36.006 "num_base_bdevs_discovered": 4, 00:20:36.006 "num_base_bdevs_operational": 4, 00:20:36.006 "base_bdevs_list": [ 00:20:36.006 { 00:20:36.006 "name": "pt1", 00:20:36.006 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:20:36.006 "is_configured": true, 00:20:36.006 "data_offset": 2048, 00:20:36.006 "data_size": 63488 00:20:36.006 }, 00:20:36.006 { 00:20:36.006 "name": "pt2", 00:20:36.006 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.006 "is_configured": true, 00:20:36.006 "data_offset": 2048, 00:20:36.006 "data_size": 63488 00:20:36.006 }, 00:20:36.006 { 00:20:36.006 "name": "pt3", 00:20:36.006 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:36.006 "is_configured": true, 00:20:36.006 "data_offset": 2048, 00:20:36.006 "data_size": 63488 00:20:36.006 }, 00:20:36.006 { 00:20:36.006 "name": "pt4", 00:20:36.006 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:36.006 "is_configured": true, 00:20:36.006 "data_offset": 2048, 00:20:36.006 "data_size": 63488 00:20:36.006 } 00:20:36.006 ] 00:20:36.006 } 00:20:36.006 } 00:20:36.006 }' 00:20:36.006 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:36.006 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:36.006 pt2 00:20:36.006 pt3 00:20:36.006 pt4' 00:20:36.006 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:36.006 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:36.006 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:36.265 "name": "pt1", 00:20:36.265 "aliases": [ 00:20:36.265 "00000000-0000-0000-0000-000000000001" 00:20:36.265 ], 00:20:36.265 "product_name": "passthru", 00:20:36.265 "block_size": 512, 00:20:36.265 "num_blocks": 65536, 00:20:36.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:36.265 "assigned_rate_limits": { 00:20:36.265 "rw_ios_per_sec": 0, 00:20:36.265 "rw_mbytes_per_sec": 0, 00:20:36.265 "r_mbytes_per_sec": 0, 00:20:36.265 "w_mbytes_per_sec": 0 00:20:36.265 }, 00:20:36.265 "claimed": true, 00:20:36.265 "claim_type": "exclusive_write", 00:20:36.265 "zoned": false, 00:20:36.265 "supported_io_types": { 00:20:36.265 "read": true, 00:20:36.265 "write": true, 00:20:36.265 "unmap": true, 00:20:36.265 "flush": true, 00:20:36.265 "reset": true, 00:20:36.265 "nvme_admin": false, 00:20:36.265 "nvme_io": false, 00:20:36.265 "nvme_io_md": false, 00:20:36.265 "write_zeroes": true, 00:20:36.265 "zcopy": true, 00:20:36.265 "get_zone_info": false, 00:20:36.265 "zone_management": false, 00:20:36.265 "zone_append": false, 00:20:36.265 "compare": false, 00:20:36.265 "compare_and_write": false, 00:20:36.265 "abort": true, 00:20:36.265 "seek_hole": false, 00:20:36.265 "seek_data": false, 00:20:36.265 "copy": true, 00:20:36.265 "nvme_iov_md": false 00:20:36.265 }, 00:20:36.265 "memory_domains": [ 00:20:36.265 { 00:20:36.265 "dma_device_id": "system", 00:20:36.265 "dma_device_type": 1 00:20:36.265 }, 00:20:36.265 { 00:20:36.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.265 "dma_device_type": 2 00:20:36.265 } 00:20:36.265 ], 00:20:36.265 "driver_specific": { 00:20:36.265 "passthru": { 00:20:36.265 "name": "pt1", 00:20:36.265 "base_bdev_name": "malloc1" 00:20:36.265 } 00:20:36.265 } 00:20:36.265 }' 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.265 15:14:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:36.265 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:36.523 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:36.523 "name": "pt2", 00:20:36.523 "aliases": [ 00:20:36.523 "00000000-0000-0000-0000-000000000002" 00:20:36.523 ], 00:20:36.523 "product_name": "passthru", 00:20:36.523 "block_size": 512, 00:20:36.523 "num_blocks": 65536, 00:20:36.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.523 "assigned_rate_limits": { 00:20:36.523 "rw_ios_per_sec": 0, 00:20:36.523 "rw_mbytes_per_sec": 0, 00:20:36.523 "r_mbytes_per_sec": 0, 00:20:36.523 "w_mbytes_per_sec": 0 00:20:36.523 }, 00:20:36.523 "claimed": true, 00:20:36.523 "claim_type": "exclusive_write", 00:20:36.523 "zoned": false, 00:20:36.523 "supported_io_types": { 00:20:36.523 "read": true, 00:20:36.523 "write": true, 00:20:36.523 "unmap": true, 00:20:36.523 "flush": true, 00:20:36.523 "reset": true, 00:20:36.524 "nvme_admin": false, 00:20:36.524 "nvme_io": false, 00:20:36.524 "nvme_io_md": false, 00:20:36.524 "write_zeroes": true, 00:20:36.524 "zcopy": true, 00:20:36.524 "get_zone_info": false, 00:20:36.524 "zone_management": false, 00:20:36.524 "zone_append": false, 00:20:36.524 "compare": false, 00:20:36.524 "compare_and_write": false, 00:20:36.524 "abort": true, 00:20:36.524 "seek_hole": false, 00:20:36.524 "seek_data": false, 00:20:36.524 "copy": true, 00:20:36.524 "nvme_iov_md": false 00:20:36.524 }, 00:20:36.524 "memory_domains": [ 00:20:36.524 { 00:20:36.524 "dma_device_id": "system", 00:20:36.524 "dma_device_type": 1 00:20:36.524 }, 00:20:36.524 { 00:20:36.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.524 "dma_device_type": 2 00:20:36.524 } 00:20:36.524 ], 00:20:36.524 "driver_specific": { 00:20:36.524 "passthru": { 00:20:36.524 "name": "pt2", 00:20:36.524 "base_bdev_name": "malloc2" 00:20:36.524 } 00:20:36.524 } 00:20:36.524 }' 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:36.524 15:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:36.782 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:36.782 "name": "pt3", 00:20:36.782 "aliases": [ 00:20:36.782 "00000000-0000-0000-0000-000000000003" 00:20:36.782 ], 00:20:36.782 "product_name": "passthru", 00:20:36.782 "block_size": 512, 00:20:36.782 "num_blocks": 65536, 00:20:36.782 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:36.782 "assigned_rate_limits": { 00:20:36.782 "rw_ios_per_sec": 0, 00:20:36.782 "rw_mbytes_per_sec": 0, 00:20:36.782 "r_mbytes_per_sec": 0, 00:20:36.782 "w_mbytes_per_sec": 0 00:20:36.782 }, 00:20:36.782 "claimed": true, 00:20:36.782 "claim_type": "exclusive_write", 00:20:36.782 "zoned": false, 00:20:36.782 "supported_io_types": { 00:20:36.782 "read": true, 00:20:36.782 "write": true, 00:20:36.782 "unmap": true, 00:20:36.782 "flush": true, 00:20:36.782 "reset": true, 00:20:36.782 "nvme_admin": false, 00:20:36.782 "nvme_io": false, 00:20:36.782 "nvme_io_md": false, 00:20:36.782 "write_zeroes": true, 00:20:36.782 "zcopy": true, 00:20:36.782 "get_zone_info": false, 00:20:36.782 "zone_management": false, 00:20:36.782 "zone_append": false, 00:20:36.782 "compare": false, 00:20:36.782 "compare_and_write": false, 00:20:36.782 "abort": true, 00:20:36.782 "seek_hole": false, 00:20:36.782 "seek_data": false, 00:20:36.782 "copy": true, 00:20:36.782 "nvme_iov_md": false 00:20:36.782 }, 00:20:36.782 "memory_domains": [ 00:20:36.782 { 00:20:36.782 "dma_device_id": "system", 00:20:36.782 "dma_device_type": 1 00:20:36.782 }, 00:20:36.782 { 00:20:36.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.782 "dma_device_type": 2 00:20:36.782 } 00:20:36.782 ], 00:20:36.782 "driver_specific": { 00:20:36.782 "passthru": { 00:20:36.782 "name": "pt3", 00:20:36.782 "base_bdev_name": "malloc3" 00:20:36.782 } 00:20:36.782 } 00:20:36.782 }' 00:20:36.783 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.783 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.783 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:36.783 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:37.041 15:14:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:37.041 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:37.041 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:37.041 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:37.041 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:37.041 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:37.041 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:37.041 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:37.041 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:37.041 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:37.041 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:37.300 "name": "pt4", 00:20:37.300 "aliases": [ 00:20:37.300 "00000000-0000-0000-0000-000000000004" 00:20:37.300 ], 00:20:37.300 "product_name": "passthru", 00:20:37.300 "block_size": 512, 00:20:37.300 "num_blocks": 65536, 00:20:37.300 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:37.300 "assigned_rate_limits": { 00:20:37.300 "rw_ios_per_sec": 0, 00:20:37.300 "rw_mbytes_per_sec": 0, 00:20:37.300 "r_mbytes_per_sec": 0, 00:20:37.300 "w_mbytes_per_sec": 0 00:20:37.300 }, 00:20:37.300 "claimed": true, 00:20:37.300 "claim_type": "exclusive_write", 00:20:37.300 "zoned": false, 00:20:37.300 "supported_io_types": { 00:20:37.300 "read": true, 00:20:37.300 "write": true, 00:20:37.300 "unmap": true, 00:20:37.300 "flush": true, 00:20:37.300 "reset": true, 00:20:37.300 "nvme_admin": false, 00:20:37.300 "nvme_io": false, 00:20:37.300 "nvme_io_md": false, 00:20:37.300 "write_zeroes": true, 00:20:37.300 "zcopy": true, 00:20:37.300 "get_zone_info": false, 00:20:37.300 "zone_management": false, 00:20:37.300 "zone_append": false, 00:20:37.300 "compare": false, 00:20:37.300 "compare_and_write": false, 00:20:37.300 "abort": true, 00:20:37.300 "seek_hole": false, 00:20:37.300 "seek_data": false, 00:20:37.300 "copy": true, 00:20:37.300 "nvme_iov_md": false 00:20:37.300 }, 00:20:37.300 "memory_domains": [ 00:20:37.300 { 00:20:37.300 "dma_device_id": "system", 00:20:37.300 "dma_device_type": 1 00:20:37.300 }, 00:20:37.300 { 00:20:37.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.300 "dma_device_type": 2 00:20:37.300 } 00:20:37.300 ], 00:20:37.300 "driver_specific": { 00:20:37.300 "passthru": { 00:20:37.300 "name": "pt4", 00:20:37.300 "base_bdev_name": "malloc4" 00:20:37.300 } 00:20:37.300 } 00:20:37.300 }' 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:37.300 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:20:37.559 [2024-07-23 15:14:32.896598] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:37.559 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4c35c99f-225f-427e-91ef-7124c8db54cc 00:20:37.559 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 4c35c99f-225f-427e-91ef-7124c8db54cc ']' 00:20:37.559 15:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:37.817 [2024-07-23 15:14:33.092583] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:37.817 [2024-07-23 15:14:33.092726] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.817 [2024-07-23 15:14:33.093485] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.817 [2024-07-23 15:14:33.093752] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.817 [2024-07-23 15:14:33.093829] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state offline 00:20:37.817 15:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:20:37.817 15:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.075 15:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:20:38.075 15:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:20:38.075 15:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:38.075 15:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:38.332 15:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:38.332 15:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:38.332 15:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:38.332 15:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:38.591 15:14:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:38.591 15:14:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:38.850 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:38.850 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:39.108 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:20:39.108 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:39.109 [2024-07-23 15:14:34.496729] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:39.109 [2024-07-23 15:14:34.499560] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:39.109 [2024-07-23 15:14:34.499808] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:39.109 [2024-07-23 15:14:34.499967] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:39.109 [2024-07-23 15:14:34.500079] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:39.109 [2024-07-23 15:14:34.500369] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:39.109 [2024-07-23 15:14:34.500536] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found 
on bdev malloc3 00:20:39.109 request: 00:20:39.109 { 00:20:39.109 "name": "raid_bdev1", 00:20:39.109 "raid_level": "raid0", 00:20:39.109 "base_bdevs": [ 00:20:39.109 "malloc1", 00:20:39.109 "malloc2", 00:20:39.109 "malloc3", 00:20:39.109 "malloc4" 00:20:39.109 ], 00:20:39.109 "strip_size_kb": 64, 00:20:39.109 "superblock": false, 00:20:39.109 "method": "bdev_raid_create", 00:20:39.109 "req_id": 1 00:20:39.109 } 00:20:39.109 Got JSON-RPC error response 00:20:39.109 response: 00:20:39.109 { 00:20:39.109 "code": -17, 00:20:39.109 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:39.109 } 00:20:39.109 [2024-07-23 15:14:34.500712] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:39.109 [2024-07-23 15:14:34.500741] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:39.109 [2024-07-23 15:14:34.500755] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state configuring 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:20:39.109 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.368 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:20:39.368 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:20:39.368 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:39.627 [2024-07-23 15:14:34.869202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:39.627 [2024-07-23 15:14:34.869319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.627 [2024-07-23 15:14:34.869366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:20:39.627 [2024-07-23 15:14:34.869382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.627 [2024-07-23 15:14:34.872326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.627 pt1 00:20:39.627 [2024-07-23 15:14:34.872559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:39.627 [2024-07-23 15:14:34.872689] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:39.627 [2024-07-23 15:14:34.872755] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:39.627 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:20:39.627 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:39.627 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:39.627 15:14:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:39.627 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:39.627 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:39.627 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.627 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.627 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.627 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.627 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.627 15:14:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.886 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:39.886 "name": "raid_bdev1", 00:20:39.886 "uuid": "4c35c99f-225f-427e-91ef-7124c8db54cc", 00:20:39.886 "strip_size_kb": 64, 00:20:39.886 "state": "configuring", 00:20:39.886 "raid_level": "raid0", 00:20:39.886 "superblock": true, 00:20:39.886 "num_base_bdevs": 4, 00:20:39.886 "num_base_bdevs_discovered": 1, 00:20:39.886 "num_base_bdevs_operational": 4, 00:20:39.886 "base_bdevs_list": [ 00:20:39.886 { 00:20:39.886 "name": "pt1", 00:20:39.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:39.886 "is_configured": true, 00:20:39.886 "data_offset": 2048, 00:20:39.886 "data_size": 63488 00:20:39.886 }, 00:20:39.886 { 00:20:39.886 "name": null, 00:20:39.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:39.886 "is_configured": false, 00:20:39.886 "data_offset": 2048, 00:20:39.886 "data_size": 63488 00:20:39.886 }, 00:20:39.886 { 00:20:39.886 "name": null, 00:20:39.886 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:39.886 "is_configured": false, 00:20:39.886 "data_offset": 2048, 00:20:39.886 "data_size": 63488 00:20:39.886 }, 00:20:39.886 { 00:20:39.886 "name": null, 00:20:39.886 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:39.886 "is_configured": false, 00:20:39.886 "data_offset": 2048, 00:20:39.886 "data_size": 63488 00:20:39.886 } 00:20:39.886 ] 00:20:39.886 }' 00:20:39.886 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:39.886 15:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.146 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:20:40.146 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:40.146 [2024-07-23 15:14:35.485360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:40.146 [2024-07-23 15:14:35.485707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.146 [2024-07-23 15:14:35.485806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:20:40.146 [2024-07-23 15:14:35.485937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.146 [2024-07-23 15:14:35.486564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:20:40.146 [2024-07-23 15:14:35.486738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:40.146 [2024-07-23 15:14:35.486987] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:40.146 [2024-07-23 15:14:35.487175] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:40.146 pt2 00:20:40.146 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:40.405 [2024-07-23 15:14:35.665448] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:40.405 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:20:40.405 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:40.405 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:40.405 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:40.405 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:40.405 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:40.405 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.405 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.405 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.405 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.405 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.405 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.698 15:14:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.698 "name": "raid_bdev1", 00:20:40.698 "uuid": "4c35c99f-225f-427e-91ef-7124c8db54cc", 00:20:40.698 "strip_size_kb": 64, 00:20:40.698 "state": "configuring", 00:20:40.698 "raid_level": "raid0", 00:20:40.698 "superblock": true, 00:20:40.698 "num_base_bdevs": 4, 00:20:40.698 "num_base_bdevs_discovered": 1, 00:20:40.698 "num_base_bdevs_operational": 4, 00:20:40.698 "base_bdevs_list": [ 00:20:40.698 { 00:20:40.698 "name": "pt1", 00:20:40.698 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:40.698 "is_configured": true, 00:20:40.698 "data_offset": 2048, 00:20:40.698 "data_size": 63488 00:20:40.698 }, 00:20:40.698 { 00:20:40.698 "name": null, 00:20:40.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:40.698 "is_configured": false, 00:20:40.698 "data_offset": 2048, 00:20:40.698 "data_size": 63488 00:20:40.698 }, 00:20:40.698 { 00:20:40.698 "name": null, 00:20:40.698 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:40.698 "is_configured": false, 00:20:40.698 "data_offset": 2048, 00:20:40.698 "data_size": 63488 00:20:40.698 }, 00:20:40.698 { 00:20:40.698 "name": null, 00:20:40.698 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:40.698 "is_configured": false, 00:20:40.698 "data_offset": 2048, 00:20:40.698 "data_size": 63488 00:20:40.698 } 00:20:40.698 ] 00:20:40.698 }' 00:20:40.698 15:14:35 
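The verify_raid_bdev_state call traced just above boils down to fetching raid_bdev1's JSON over the same RPC socket and comparing a handful of fields against the expected values passed in (configuring, raid0, strip size 64, 4 operational base bdevs). A hand-run equivalent using only the RPC and jq filter visible in the trace might look like this; the helper's exact comparison logic is not reproduced here:

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    # after bdev_passthru_delete pt2 above, the array should still report
    # "configuring", with pt2 no longer counted among the discovered base bdevs
    jq -r .state                      <<< "$info"   # expect: configuring
    jq -r .raid_level                 <<< "$info"   # expect: raid0
    jq -r .strip_size_kb              <<< "$info"   # expect: 64
    jq -r .num_base_bdevs_operational <<< "$info"   # expect: 4
    jq -r .num_base_bdevs_discovered  <<< "$info"   # drops once pt2 is gone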
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.698 15:14:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.958 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:40.958 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:40.958 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:40.958 [2024-07-23 15:14:36.345531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:40.958 [2024-07-23 15:14:36.345652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.958 [2024-07-23 15:14:36.345685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:20:40.958 [2024-07-23 15:14:36.345713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.958 [2024-07-23 15:14:36.346608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.958 [2024-07-23 15:14:36.346785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:40.958 [2024-07-23 15:14:36.346926] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:40.958 [2024-07-23 15:14:36.346966] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:40.958 pt2 00:20:40.958 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:40.958 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:40.958 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:41.216 [2024-07-23 15:14:36.617611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:41.216 [2024-07-23 15:14:36.617742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.216 [2024-07-23 15:14:36.617777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:20:41.216 [2024-07-23 15:14:36.617819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.216 [2024-07-23 15:14:36.618416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.216 [2024-07-23 15:14:36.618469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:41.217 [2024-07-23 15:14:36.618573] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:41.217 [2024-07-23 15:14:36.618611] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:41.217 pt3 00:20:41.217 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:41.217 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:41.217 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:41.476 [2024-07-23 15:14:36.797637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:20:41.476 [2024-07-23 15:14:36.797942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.476 [2024-07-23 15:14:36.797995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:20:41.476 [2024-07-23 15:14:36.798027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.476 [2024-07-23 15:14:36.798630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.476 [2024-07-23 15:14:36.798671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:41.476 [2024-07-23 15:14:36.798770] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:41.476 [2024-07-23 15:14:36.798834] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:41.476 [2024-07-23 15:14:36.799011] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:20:41.476 [2024-07-23 15:14:36.799033] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:41.476 [2024-07-23 15:14:36.799125] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:20:41.476 [2024-07-23 15:14:36.799571] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:20:41.476 [2024-07-23 15:14:36.799598] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:20:41.476 [2024-07-23 15:14:36.799742] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.476 pt4 00:20:41.476 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:41.476 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:41.476 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:41.476 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:41.476 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:41.476 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:41.476 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:41.476 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:41.476 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:41.476 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:41.476 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:41.476 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:41.476 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.476 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.734 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.734 "name": "raid_bdev1", 00:20:41.734 "uuid": "4c35c99f-225f-427e-91ef-7124c8db54cc", 00:20:41.734 "strip_size_kb": 64, 00:20:41.734 "state": "online", 00:20:41.734 
"raid_level": "raid0", 00:20:41.734 "superblock": true, 00:20:41.734 "num_base_bdevs": 4, 00:20:41.734 "num_base_bdevs_discovered": 4, 00:20:41.734 "num_base_bdevs_operational": 4, 00:20:41.734 "base_bdevs_list": [ 00:20:41.734 { 00:20:41.734 "name": "pt1", 00:20:41.734 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:41.734 "is_configured": true, 00:20:41.734 "data_offset": 2048, 00:20:41.734 "data_size": 63488 00:20:41.734 }, 00:20:41.734 { 00:20:41.734 "name": "pt2", 00:20:41.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:41.734 "is_configured": true, 00:20:41.735 "data_offset": 2048, 00:20:41.735 "data_size": 63488 00:20:41.735 }, 00:20:41.735 { 00:20:41.735 "name": "pt3", 00:20:41.735 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:41.735 "is_configured": true, 00:20:41.735 "data_offset": 2048, 00:20:41.735 "data_size": 63488 00:20:41.735 }, 00:20:41.735 { 00:20:41.735 "name": "pt4", 00:20:41.735 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:41.735 "is_configured": true, 00:20:41.735 "data_offset": 2048, 00:20:41.735 "data_size": 63488 00:20:41.735 } 00:20:41.735 ] 00:20:41.735 }' 00:20:41.735 15:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.735 15:14:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.993 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:41.993 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:41.993 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:41.993 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:41.993 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:41.993 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:41.993 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:41.993 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:42.252 [2024-07-23 15:14:37.514079] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:42.252 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:42.252 "name": "raid_bdev1", 00:20:42.252 "aliases": [ 00:20:42.252 "4c35c99f-225f-427e-91ef-7124c8db54cc" 00:20:42.252 ], 00:20:42.252 "product_name": "Raid Volume", 00:20:42.252 "block_size": 512, 00:20:42.252 "num_blocks": 253952, 00:20:42.252 "uuid": "4c35c99f-225f-427e-91ef-7124c8db54cc", 00:20:42.252 "assigned_rate_limits": { 00:20:42.252 "rw_ios_per_sec": 0, 00:20:42.252 "rw_mbytes_per_sec": 0, 00:20:42.252 "r_mbytes_per_sec": 0, 00:20:42.252 "w_mbytes_per_sec": 0 00:20:42.252 }, 00:20:42.252 "claimed": false, 00:20:42.252 "zoned": false, 00:20:42.252 "supported_io_types": { 00:20:42.252 "read": true, 00:20:42.252 "write": true, 00:20:42.252 "unmap": true, 00:20:42.252 "flush": true, 00:20:42.252 "reset": true, 00:20:42.252 "nvme_admin": false, 00:20:42.252 "nvme_io": false, 00:20:42.252 "nvme_io_md": false, 00:20:42.252 "write_zeroes": true, 00:20:42.252 "zcopy": false, 00:20:42.252 "get_zone_info": false, 00:20:42.252 "zone_management": false, 00:20:42.252 "zone_append": false, 00:20:42.252 "compare": false, 00:20:42.252 "compare_and_write": false, 
00:20:42.252 "abort": false, 00:20:42.252 "seek_hole": false, 00:20:42.252 "seek_data": false, 00:20:42.252 "copy": false, 00:20:42.252 "nvme_iov_md": false 00:20:42.252 }, 00:20:42.252 "memory_domains": [ 00:20:42.252 { 00:20:42.252 "dma_device_id": "system", 00:20:42.252 "dma_device_type": 1 00:20:42.252 }, 00:20:42.252 { 00:20:42.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.252 "dma_device_type": 2 00:20:42.252 }, 00:20:42.252 { 00:20:42.252 "dma_device_id": "system", 00:20:42.252 "dma_device_type": 1 00:20:42.252 }, 00:20:42.252 { 00:20:42.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.252 "dma_device_type": 2 00:20:42.252 }, 00:20:42.252 { 00:20:42.252 "dma_device_id": "system", 00:20:42.252 "dma_device_type": 1 00:20:42.252 }, 00:20:42.252 { 00:20:42.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.252 "dma_device_type": 2 00:20:42.252 }, 00:20:42.252 { 00:20:42.252 "dma_device_id": "system", 00:20:42.252 "dma_device_type": 1 00:20:42.252 }, 00:20:42.252 { 00:20:42.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.252 "dma_device_type": 2 00:20:42.252 } 00:20:42.252 ], 00:20:42.252 "driver_specific": { 00:20:42.252 "raid": { 00:20:42.252 "uuid": "4c35c99f-225f-427e-91ef-7124c8db54cc", 00:20:42.252 "strip_size_kb": 64, 00:20:42.252 "state": "online", 00:20:42.252 "raid_level": "raid0", 00:20:42.252 "superblock": true, 00:20:42.252 "num_base_bdevs": 4, 00:20:42.252 "num_base_bdevs_discovered": 4, 00:20:42.252 "num_base_bdevs_operational": 4, 00:20:42.252 "base_bdevs_list": [ 00:20:42.252 { 00:20:42.252 "name": "pt1", 00:20:42.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:42.252 "is_configured": true, 00:20:42.252 "data_offset": 2048, 00:20:42.252 "data_size": 63488 00:20:42.252 }, 00:20:42.252 { 00:20:42.252 "name": "pt2", 00:20:42.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:42.252 "is_configured": true, 00:20:42.252 "data_offset": 2048, 00:20:42.252 "data_size": 63488 00:20:42.252 }, 00:20:42.252 { 00:20:42.252 "name": "pt3", 00:20:42.252 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:42.252 "is_configured": true, 00:20:42.252 "data_offset": 2048, 00:20:42.252 "data_size": 63488 00:20:42.252 }, 00:20:42.252 { 00:20:42.252 "name": "pt4", 00:20:42.252 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:42.252 "is_configured": true, 00:20:42.252 "data_offset": 2048, 00:20:42.252 "data_size": 63488 00:20:42.252 } 00:20:42.252 ] 00:20:42.252 } 00:20:42.252 } 00:20:42.252 }' 00:20:42.252 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:42.252 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:42.252 pt2 00:20:42.252 pt3 00:20:42.252 pt4' 00:20:42.252 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:42.252 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:42.252 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:42.511 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:42.511 "name": "pt1", 00:20:42.511 "aliases": [ 00:20:42.511 "00000000-0000-0000-0000-000000000001" 00:20:42.511 ], 00:20:42.511 "product_name": "passthru", 00:20:42.511 "block_size": 512, 00:20:42.511 "num_blocks": 65536, 00:20:42.511 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:20:42.511 "assigned_rate_limits": { 00:20:42.511 "rw_ios_per_sec": 0, 00:20:42.511 "rw_mbytes_per_sec": 0, 00:20:42.511 "r_mbytes_per_sec": 0, 00:20:42.511 "w_mbytes_per_sec": 0 00:20:42.511 }, 00:20:42.511 "claimed": true, 00:20:42.511 "claim_type": "exclusive_write", 00:20:42.511 "zoned": false, 00:20:42.511 "supported_io_types": { 00:20:42.511 "read": true, 00:20:42.511 "write": true, 00:20:42.511 "unmap": true, 00:20:42.511 "flush": true, 00:20:42.511 "reset": true, 00:20:42.511 "nvme_admin": false, 00:20:42.511 "nvme_io": false, 00:20:42.511 "nvme_io_md": false, 00:20:42.511 "write_zeroes": true, 00:20:42.511 "zcopy": true, 00:20:42.511 "get_zone_info": false, 00:20:42.511 "zone_management": false, 00:20:42.511 "zone_append": false, 00:20:42.511 "compare": false, 00:20:42.511 "compare_and_write": false, 00:20:42.511 "abort": true, 00:20:42.511 "seek_hole": false, 00:20:42.511 "seek_data": false, 00:20:42.511 "copy": true, 00:20:42.511 "nvme_iov_md": false 00:20:42.511 }, 00:20:42.511 "memory_domains": [ 00:20:42.511 { 00:20:42.511 "dma_device_id": "system", 00:20:42.511 "dma_device_type": 1 00:20:42.511 }, 00:20:42.511 { 00:20:42.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.511 "dma_device_type": 2 00:20:42.511 } 00:20:42.511 ], 00:20:42.511 "driver_specific": { 00:20:42.511 "passthru": { 00:20:42.511 "name": "pt1", 00:20:42.511 "base_bdev_name": "malloc1" 00:20:42.511 } 00:20:42.511 } 00:20:42.511 }' 00:20:42.511 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:42.511 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:42.512 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:42.512 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:42.512 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:42.512 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:42.512 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:42.512 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:42.512 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:42.512 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:42.512 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:42.512 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:42.512 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:42.512 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:42.512 15:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:42.771 "name": "pt2", 00:20:42.771 "aliases": [ 00:20:42.771 "00000000-0000-0000-0000-000000000002" 00:20:42.771 ], 00:20:42.771 "product_name": "passthru", 00:20:42.771 "block_size": 512, 00:20:42.771 "num_blocks": 65536, 00:20:42.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:42.771 "assigned_rate_limits": { 00:20:42.771 "rw_ios_per_sec": 0, 00:20:42.771 "rw_mbytes_per_sec": 0, 
00:20:42.771 "r_mbytes_per_sec": 0, 00:20:42.771 "w_mbytes_per_sec": 0 00:20:42.771 }, 00:20:42.771 "claimed": true, 00:20:42.771 "claim_type": "exclusive_write", 00:20:42.771 "zoned": false, 00:20:42.771 "supported_io_types": { 00:20:42.771 "read": true, 00:20:42.771 "write": true, 00:20:42.771 "unmap": true, 00:20:42.771 "flush": true, 00:20:42.771 "reset": true, 00:20:42.771 "nvme_admin": false, 00:20:42.771 "nvme_io": false, 00:20:42.771 "nvme_io_md": false, 00:20:42.771 "write_zeroes": true, 00:20:42.771 "zcopy": true, 00:20:42.771 "get_zone_info": false, 00:20:42.771 "zone_management": false, 00:20:42.771 "zone_append": false, 00:20:42.771 "compare": false, 00:20:42.771 "compare_and_write": false, 00:20:42.771 "abort": true, 00:20:42.771 "seek_hole": false, 00:20:42.771 "seek_data": false, 00:20:42.771 "copy": true, 00:20:42.771 "nvme_iov_md": false 00:20:42.771 }, 00:20:42.771 "memory_domains": [ 00:20:42.771 { 00:20:42.771 "dma_device_id": "system", 00:20:42.771 "dma_device_type": 1 00:20:42.771 }, 00:20:42.771 { 00:20:42.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.771 "dma_device_type": 2 00:20:42.771 } 00:20:42.771 ], 00:20:42.771 "driver_specific": { 00:20:42.771 "passthru": { 00:20:42.771 "name": "pt2", 00:20:42.771 "base_bdev_name": "malloc2" 00:20:42.771 } 00:20:42.771 } 00:20:42.771 }' 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:42.771 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:43.030 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:43.030 "name": "pt3", 00:20:43.030 "aliases": [ 00:20:43.030 "00000000-0000-0000-0000-000000000003" 00:20:43.030 ], 00:20:43.030 "product_name": "passthru", 00:20:43.030 "block_size": 512, 00:20:43.030 "num_blocks": 65536, 00:20:43.030 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:43.030 "assigned_rate_limits": { 00:20:43.030 "rw_ios_per_sec": 0, 00:20:43.030 "rw_mbytes_per_sec": 0, 00:20:43.030 "r_mbytes_per_sec": 0, 00:20:43.030 "w_mbytes_per_sec": 0 00:20:43.030 }, 00:20:43.030 "claimed": true, 00:20:43.030 "claim_type": 
"exclusive_write", 00:20:43.030 "zoned": false, 00:20:43.030 "supported_io_types": { 00:20:43.030 "read": true, 00:20:43.030 "write": true, 00:20:43.030 "unmap": true, 00:20:43.030 "flush": true, 00:20:43.030 "reset": true, 00:20:43.030 "nvme_admin": false, 00:20:43.030 "nvme_io": false, 00:20:43.030 "nvme_io_md": false, 00:20:43.030 "write_zeroes": true, 00:20:43.030 "zcopy": true, 00:20:43.030 "get_zone_info": false, 00:20:43.030 "zone_management": false, 00:20:43.030 "zone_append": false, 00:20:43.030 "compare": false, 00:20:43.030 "compare_and_write": false, 00:20:43.030 "abort": true, 00:20:43.030 "seek_hole": false, 00:20:43.030 "seek_data": false, 00:20:43.030 "copy": true, 00:20:43.030 "nvme_iov_md": false 00:20:43.030 }, 00:20:43.030 "memory_domains": [ 00:20:43.030 { 00:20:43.030 "dma_device_id": "system", 00:20:43.030 "dma_device_type": 1 00:20:43.030 }, 00:20:43.030 { 00:20:43.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.030 "dma_device_type": 2 00:20:43.030 } 00:20:43.030 ], 00:20:43.030 "driver_specific": { 00:20:43.030 "passthru": { 00:20:43.030 "name": "pt3", 00:20:43.030 "base_bdev_name": "malloc3" 00:20:43.030 } 00:20:43.030 } 00:20:43.030 }' 00:20:43.030 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:43.030 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:43.030 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:43.030 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:43.030 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:43.030 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:43.030 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:43.030 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:43.030 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:43.030 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:43.030 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:43.030 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:43.031 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:43.031 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:43.031 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:43.290 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:43.290 "name": "pt4", 00:20:43.290 "aliases": [ 00:20:43.290 "00000000-0000-0000-0000-000000000004" 00:20:43.290 ], 00:20:43.290 "product_name": "passthru", 00:20:43.290 "block_size": 512, 00:20:43.290 "num_blocks": 65536, 00:20:43.290 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:43.290 "assigned_rate_limits": { 00:20:43.290 "rw_ios_per_sec": 0, 00:20:43.290 "rw_mbytes_per_sec": 0, 00:20:43.290 "r_mbytes_per_sec": 0, 00:20:43.290 "w_mbytes_per_sec": 0 00:20:43.290 }, 00:20:43.290 "claimed": true, 00:20:43.290 "claim_type": "exclusive_write", 00:20:43.290 "zoned": false, 00:20:43.290 "supported_io_types": { 00:20:43.290 "read": true, 00:20:43.290 "write": true, 00:20:43.290 
"unmap": true, 00:20:43.290 "flush": true, 00:20:43.290 "reset": true, 00:20:43.290 "nvme_admin": false, 00:20:43.290 "nvme_io": false, 00:20:43.290 "nvme_io_md": false, 00:20:43.290 "write_zeroes": true, 00:20:43.290 "zcopy": true, 00:20:43.290 "get_zone_info": false, 00:20:43.290 "zone_management": false, 00:20:43.290 "zone_append": false, 00:20:43.290 "compare": false, 00:20:43.290 "compare_and_write": false, 00:20:43.290 "abort": true, 00:20:43.290 "seek_hole": false, 00:20:43.290 "seek_data": false, 00:20:43.290 "copy": true, 00:20:43.290 "nvme_iov_md": false 00:20:43.290 }, 00:20:43.290 "memory_domains": [ 00:20:43.290 { 00:20:43.290 "dma_device_id": "system", 00:20:43.290 "dma_device_type": 1 00:20:43.290 }, 00:20:43.290 { 00:20:43.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.290 "dma_device_type": 2 00:20:43.290 } 00:20:43.290 ], 00:20:43.290 "driver_specific": { 00:20:43.290 "passthru": { 00:20:43.290 "name": "pt4", 00:20:43.290 "base_bdev_name": "malloc4" 00:20:43.290 } 00:20:43.290 } 00:20:43.290 }' 00:20:43.290 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:43.290 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:43.290 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:43.290 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:43.290 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:43.290 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:43.290 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:43.290 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:43.290 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:43.290 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:43.290 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:43.290 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:20:43.549 [2024-07-23 15:14:38.883011] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 4c35c99f-225f-427e-91ef-7124c8db54cc '!=' 4c35c99f-225f-427e-91ef-7124c8db54cc ']' 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 100780 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 100780 ']' 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 100780 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:20:43.549 15:14:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100780 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:43.549 killing process with pid 100780 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100780' 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 100780 00:20:43.549 15:14:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 100780 00:20:43.549 [2024-07-23 15:14:38.940529] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:43.549 [2024-07-23 15:14:38.940654] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.549 [2024-07-23 15:14:38.940745] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.549 [2024-07-23 15:14:38.940768] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:20:43.807 [2024-07-23 15:14:38.987284] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:43.807 15:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:20:43.807 00:20:43.807 real 0m10.954s 00:20:43.807 user 0m18.840s 00:20:43.807 sys 0m2.582s 00:20:43.807 15:14:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:43.807 ************************************ 00:20:43.807 END TEST raid_superblock_test 00:20:43.807 ************************************ 00:20:43.807 15:14:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.066 15:14:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:44.066 15:14:39 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:20:44.066 15:14:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:44.066 15:14:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:44.066 15:14:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:44.066 ************************************ 00:20:44.066 START TEST raid_read_error_test 00:20:44.066 ************************************ 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:44.066 
15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.de3cDQm2Ex 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=101238 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 101238 /var/tmp/spdk-raid.sock 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 101238 ']' 00:20:44.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
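The verify_raid_bdev_state and verify_raid_bdev_properties checks traced above reduce to two RPC dumps filtered with jq: one over the raid bdev list and one over the individual base bdevs. A minimal sketch of that pattern, assembled from the calls visible in the trace rather than the literal helper functions (the socket path, bdev names, and jq filters are taken from the log), is:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # verify_raid_bdev_state: pull the raid volume's entry and check state,
    # raid_level, strip_size_kb and the base-bdev counts
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

    # verify_raid_bdev_properties: dump each configured base bdev and check
    # block_size / md_size / md_interleave / dif_type (512 / null / null / null here)
    for name in pt1 pt2 pt3 pt4; do
        $RPC bdev_get_bdevs -b "$name" | jq '.[] | .block_size, .md_size, .md_interleave, .dif_type'
    done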
00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.066 15:14:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.066 [2024-07-23 15:14:39.373612] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:20:44.066 [2024-07-23 15:14:39.374749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101238 ] 00:20:44.325 [2024-07-23 15:14:39.528496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.325 [2024-07-23 15:14:39.585117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.325 [2024-07-23 15:14:39.638627] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:44.891 15:14:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.891 15:14:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:44.891 15:14:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:44.891 15:14:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:45.149 BaseBdev1_malloc 00:20:45.149 15:14:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:45.407 true 00:20:45.666 15:14:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:45.666 [2024-07-23 15:14:40.996912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:45.666 [2024-07-23 15:14:40.997017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.666 [2024-07-23 15:14:40.997058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:20:45.666 [2024-07-23 15:14:40.997080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.666 [2024-07-23 15:14:40.999726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.666 [2024-07-23 15:14:40.999771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:45.666 BaseBdev1 00:20:45.666 15:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:45.666 15:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:45.924 BaseBdev2_malloc 00:20:45.925 15:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:46.182 true 00:20:46.182 15:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:46.440 [2024-07-23 15:14:41.666380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:20:46.440 [2024-07-23 15:14:41.666619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.440 [2024-07-23 15:14:41.666688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:20:46.440 [2024-07-23 15:14:41.666769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.440 [2024-07-23 15:14:41.669391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.440 [2024-07-23 15:14:41.669543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:46.440 BaseBdev2 00:20:46.440 15:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:46.440 15:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:46.440 BaseBdev3_malloc 00:20:46.698 15:14:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:46.698 true 00:20:46.698 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:46.956 [2024-07-23 15:14:42.217972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:46.956 [2024-07-23 15:14:42.218048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.956 [2024-07-23 15:14:42.218078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:20:46.956 [2024-07-23 15:14:42.218091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.956 [2024-07-23 15:14:42.220751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.956 BaseBdev3 00:20:46.956 [2024-07-23 15:14:42.220918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:46.956 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:46.956 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:47.215 BaseBdev4_malloc 00:20:47.215 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:47.215 true 00:20:47.215 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:47.474 [2024-07-23 15:14:42.799464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:47.474 [2024-07-23 15:14:42.799677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.474 [2024-07-23 15:14:42.799745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:20:47.474 [2024-07-23 15:14:42.799865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.474 [2024-07-23 15:14:42.802459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:20:47.474 [2024-07-23 15:14:42.802595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:47.474 BaseBdev4 00:20:47.474 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:20:47.732 [2024-07-23 15:14:42.967586] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:47.732 [2024-07-23 15:14:42.969991] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:47.732 [2024-07-23 15:14:42.970206] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:47.732 [2024-07-23 15:14:42.970364] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:47.732 [2024-07-23 15:14:42.970635] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:20:47.732 [2024-07-23 15:14:42.970687] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:47.732 [2024-07-23 15:14:42.970891] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:20:47.732 [2024-07-23 15:14:42.971311] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:20:47.732 [2024-07-23 15:14:42.971422] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:20:47.732 [2024-07-23 15:14:42.971693] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.732 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:47.732 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:47.732 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:47.732 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:47.732 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:47.732 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:47.732 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:47.732 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:47.732 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:47.732 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:47.732 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.732 15:14:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.991 15:14:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:47.991 "name": "raid_bdev1", 00:20:47.991 "uuid": "e15ce868-b938-4e42-9821-3779d23df615", 00:20:47.991 "strip_size_kb": 64, 00:20:47.991 "state": "online", 00:20:47.991 "raid_level": "raid0", 00:20:47.991 "superblock": true, 00:20:47.991 "num_base_bdevs": 4, 00:20:47.991 "num_base_bdevs_discovered": 4, 00:20:47.991 
"num_base_bdevs_operational": 4, 00:20:47.991 "base_bdevs_list": [ 00:20:47.991 { 00:20:47.991 "name": "BaseBdev1", 00:20:47.991 "uuid": "7805b306-c26c-5634-b8a6-ef52c8d8cf01", 00:20:47.991 "is_configured": true, 00:20:47.991 "data_offset": 2048, 00:20:47.991 "data_size": 63488 00:20:47.991 }, 00:20:47.991 { 00:20:47.991 "name": "BaseBdev2", 00:20:47.991 "uuid": "4bb7b96f-3b3f-510c-ae47-0159015dc675", 00:20:47.991 "is_configured": true, 00:20:47.991 "data_offset": 2048, 00:20:47.991 "data_size": 63488 00:20:47.991 }, 00:20:47.991 { 00:20:47.991 "name": "BaseBdev3", 00:20:47.991 "uuid": "dab3a3cc-3440-5dd5-8bdc-a1cf31d07002", 00:20:47.991 "is_configured": true, 00:20:47.991 "data_offset": 2048, 00:20:47.991 "data_size": 63488 00:20:47.991 }, 00:20:47.991 { 00:20:47.991 "name": "BaseBdev4", 00:20:47.991 "uuid": "ace52f21-13aa-5d4a-a186-7f66e13c5e37", 00:20:47.991 "is_configured": true, 00:20:47.991 "data_offset": 2048, 00:20:47.991 "data_size": 63488 00:20:47.991 } 00:20:47.991 ] 00:20:47.991 }' 00:20:47.991 15:14:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:47.991 15:14:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.248 15:14:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:48.248 15:14:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:48.248 [2024-07-23 15:14:43.568269] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:20:49.261 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:49.519 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:49.519 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:49.519 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:20:49.519 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:49.519 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:49.519 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:49.519 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:49.519 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:49.520 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:49.520 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.520 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.520 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.520 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.520 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.520 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:20:49.520 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.520 "name": "raid_bdev1", 00:20:49.520 "uuid": "e15ce868-b938-4e42-9821-3779d23df615", 00:20:49.520 "strip_size_kb": 64, 00:20:49.520 "state": "online", 00:20:49.520 "raid_level": "raid0", 00:20:49.520 "superblock": true, 00:20:49.520 "num_base_bdevs": 4, 00:20:49.520 "num_base_bdevs_discovered": 4, 00:20:49.520 "num_base_bdevs_operational": 4, 00:20:49.520 "base_bdevs_list": [ 00:20:49.520 { 00:20:49.520 "name": "BaseBdev1", 00:20:49.520 "uuid": "7805b306-c26c-5634-b8a6-ef52c8d8cf01", 00:20:49.520 "is_configured": true, 00:20:49.520 "data_offset": 2048, 00:20:49.520 "data_size": 63488 00:20:49.520 }, 00:20:49.520 { 00:20:49.520 "name": "BaseBdev2", 00:20:49.520 "uuid": "4bb7b96f-3b3f-510c-ae47-0159015dc675", 00:20:49.520 "is_configured": true, 00:20:49.520 "data_offset": 2048, 00:20:49.520 "data_size": 63488 00:20:49.520 }, 00:20:49.520 { 00:20:49.520 "name": "BaseBdev3", 00:20:49.520 "uuid": "dab3a3cc-3440-5dd5-8bdc-a1cf31d07002", 00:20:49.520 "is_configured": true, 00:20:49.520 "data_offset": 2048, 00:20:49.520 "data_size": 63488 00:20:49.520 }, 00:20:49.520 { 00:20:49.520 "name": "BaseBdev4", 00:20:49.520 "uuid": "ace52f21-13aa-5d4a-a186-7f66e13c5e37", 00:20:49.520 "is_configured": true, 00:20:49.520 "data_offset": 2048, 00:20:49.520 "data_size": 63488 00:20:49.520 } 00:20:49.520 ] 00:20:49.520 }' 00:20:49.520 15:14:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.520 15:14:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.778 15:14:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:50.037 [2024-07-23 15:14:45.366132] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:50.037 [2024-07-23 15:14:45.366188] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:50.037 [2024-07-23 15:14:45.368611] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:50.037 [2024-07-23 15:14:45.368672] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.037 [2024-07-23 15:14:45.368720] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:50.037 [2024-07-23 15:14:45.368739] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:20:50.037 0 00:20:50.037 15:14:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 101238 00:20:50.037 15:14:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 101238 ']' 00:20:50.037 15:14:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 101238 00:20:50.037 15:14:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:20:50.037 15:14:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.037 15:14:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101238 00:20:50.037 killing process with pid 101238 00:20:50.037 15:14:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:50.037 15:14:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:50.037 
15:14:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101238' 00:20:50.037 15:14:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 101238 00:20:50.037 [2024-07-23 15:14:45.421197] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:50.037 15:14:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 101238 00:20:50.037 [2024-07-23 15:14:45.456204] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:50.296 15:14:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.de3cDQm2Ex 00:20:50.296 15:14:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:50.296 15:14:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:50.296 15:14:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.56 00:20:50.296 15:14:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:20:50.296 15:14:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:50.296 15:14:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:50.296 15:14:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.56 != \0\.\0\0 ]] 00:20:50.296 00:20:50.296 real 0m6.409s 00:20:50.296 user 0m9.824s 00:20:50.296 sys 0m1.100s 00:20:50.296 15:14:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:50.296 15:14:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.296 ************************************ 00:20:50.296 END TEST raid_read_error_test 00:20:50.296 ************************************ 00:20:50.554 15:14:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:50.554 15:14:45 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:20:50.554 15:14:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:50.554 15:14:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.554 15:14:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:50.554 ************************************ 00:20:50.554 START TEST raid_write_error_test 00:20:50.554 ************************************ 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:50.554 
15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.owGtb9YoMl 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=101416 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 101416 /var/tmp/spdk-raid.sock 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 101416 ']' 00:20:50.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.554 15:14:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.554 [2024-07-23 15:14:45.879354] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
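Both error tests build the same stack under bdevperf: each base bdev is a malloc bdev wrapped first in an error-injection bdev and then in a passthru bdev, and the four passthru bdevs are assembled into the raid0 volume. A condensed sketch of that setup, put together from the RPC calls visible in this trace (the write-failure injection itself is not reached in this excerpt and is assumed to mirror the read-failure call used by raid_read_error_test), is:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"     # 32 MB backing store, 512 B blocks
        $RPC bdev_error_create "BaseBdev${i}_malloc"                # exposes EE_BaseBdev${i}_malloc
        $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done

    # raid0 volume with a 64 KiB strip size and an on-disk superblock (-s)
    $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s

    # fail I/O on the first base bdev: the read test injects "read failure";
    # the write test presumably injects "write failure" the same way
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure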
00:20:50.554 [2024-07-23 15:14:45.879925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101416 ] 00:20:50.812 [2024-07-23 15:14:46.046154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.812 [2024-07-23 15:14:46.094145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.812 [2024-07-23 15:14:46.139099] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:51.746 15:14:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.746 15:14:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:51.746 15:14:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:51.746 15:14:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:51.746 BaseBdev1_malloc 00:20:51.746 15:14:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:51.746 true 00:20:52.004 15:14:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:52.004 [2024-07-23 15:14:47.342619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:52.004 [2024-07-23 15:14:47.342905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.004 [2024-07-23 15:14:47.342956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:20:52.004 [2024-07-23 15:14:47.342970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.004 [2024-07-23 15:14:47.345737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.004 [2024-07-23 15:14:47.345782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:52.004 BaseBdev1 00:20:52.004 15:14:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:52.004 15:14:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:52.262 BaseBdev2_malloc 00:20:52.263 15:14:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:52.521 true 00:20:52.521 15:14:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:52.521 [2024-07-23 15:14:47.924062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:52.521 [2024-07-23 15:14:47.924146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.521 [2024-07-23 15:14:47.924179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:20:52.521 [2024-07-23 
15:14:47.924191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.521 [2024-07-23 15:14:47.926903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.521 [2024-07-23 15:14:47.927059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:52.521 BaseBdev2 00:20:52.521 15:14:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:52.521 15:14:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:52.779 BaseBdev3_malloc 00:20:52.779 15:14:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:53.037 true 00:20:53.037 15:14:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:53.296 [2024-07-23 15:14:48.478295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:53.296 [2024-07-23 15:14:48.478375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.296 [2024-07-23 15:14:48.478407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:20:53.296 [2024-07-23 15:14:48.478419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.296 [2024-07-23 15:14:48.480989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.296 [2024-07-23 15:14:48.481030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:53.296 BaseBdev3 00:20:53.296 15:14:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:53.296 15:14:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:53.296 BaseBdev4_malloc 00:20:53.296 15:14:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:53.554 true 00:20:53.554 15:14:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:53.811 [2024-07-23 15:14:49.143766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:53.811 [2024-07-23 15:14:49.144059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.811 [2024-07-23 15:14:49.144129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:20:53.811 [2024-07-23 15:14:49.144229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.811 [2024-07-23 15:14:49.146710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.811 [2024-07-23 15:14:49.146878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:53.811 BaseBdev4 00:20:53.811 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:20:54.068 [2024-07-23 15:14:49.327908] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.068 [2024-07-23 15:14:49.330380] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:54.068 [2024-07-23 15:14:49.330605] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:54.068 [2024-07-23 15:14:49.330700] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:54.068 [2024-07-23 15:14:49.331054] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:20:54.068 [2024-07-23 15:14:49.331174] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:54.068 [2024-07-23 15:14:49.331341] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:20:54.068 [2024-07-23 15:14:49.331701] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:20:54.068 [2024-07-23 15:14:49.331748] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:20:54.068 [2024-07-23 15:14:49.332057] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.068 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:54.068 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:54.068 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:54.068 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:54.068 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:54.068 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:54.068 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.068 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.068 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.068 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.068 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.068 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.327 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.327 "name": "raid_bdev1", 00:20:54.327 "uuid": "657a2705-8e4a-44c3-9f28-0a5d7c08ad2d", 00:20:54.327 "strip_size_kb": 64, 00:20:54.327 "state": "online", 00:20:54.327 "raid_level": "raid0", 00:20:54.327 "superblock": true, 00:20:54.327 "num_base_bdevs": 4, 00:20:54.327 "num_base_bdevs_discovered": 4, 00:20:54.327 "num_base_bdevs_operational": 4, 00:20:54.327 "base_bdevs_list": [ 00:20:54.327 { 00:20:54.327 "name": "BaseBdev1", 00:20:54.327 "uuid": "ea7b84a2-d512-5f6c-a8d8-07b4ebdd3a00", 00:20:54.327 "is_configured": true, 00:20:54.327 "data_offset": 2048, 00:20:54.327 "data_size": 63488 00:20:54.327 }, 00:20:54.327 { 
00:20:54.327 "name": "BaseBdev2", 00:20:54.327 "uuid": "a0e7d39f-e948-50ba-8ac7-8e10126d9c31", 00:20:54.327 "is_configured": true, 00:20:54.327 "data_offset": 2048, 00:20:54.327 "data_size": 63488 00:20:54.327 }, 00:20:54.327 { 00:20:54.327 "name": "BaseBdev3", 00:20:54.327 "uuid": "20407ab7-4769-5325-8154-d1353e8756cf", 00:20:54.327 "is_configured": true, 00:20:54.327 "data_offset": 2048, 00:20:54.327 "data_size": 63488 00:20:54.327 }, 00:20:54.327 { 00:20:54.327 "name": "BaseBdev4", 00:20:54.327 "uuid": "e02befbc-7767-5cdf-828d-771f09a874fc", 00:20:54.327 "is_configured": true, 00:20:54.327 "data_offset": 2048, 00:20:54.327 "data_size": 63488 00:20:54.327 } 00:20:54.327 ] 00:20:54.327 }' 00:20:54.327 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.327 15:14:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.585 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:54.585 15:14:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:54.843 [2024-07-23 15:14:50.032686] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:20:55.778 15:14:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.778 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.037 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:56.037 "name": "raid_bdev1", 00:20:56.037 "uuid": "657a2705-8e4a-44c3-9f28-0a5d7c08ad2d", 00:20:56.037 "strip_size_kb": 64, 00:20:56.037 "state": "online", 00:20:56.037 
"raid_level": "raid0", 00:20:56.037 "superblock": true, 00:20:56.037 "num_base_bdevs": 4, 00:20:56.037 "num_base_bdevs_discovered": 4, 00:20:56.037 "num_base_bdevs_operational": 4, 00:20:56.037 "base_bdevs_list": [ 00:20:56.037 { 00:20:56.037 "name": "BaseBdev1", 00:20:56.037 "uuid": "ea7b84a2-d512-5f6c-a8d8-07b4ebdd3a00", 00:20:56.037 "is_configured": true, 00:20:56.037 "data_offset": 2048, 00:20:56.037 "data_size": 63488 00:20:56.037 }, 00:20:56.037 { 00:20:56.037 "name": "BaseBdev2", 00:20:56.037 "uuid": "a0e7d39f-e948-50ba-8ac7-8e10126d9c31", 00:20:56.037 "is_configured": true, 00:20:56.037 "data_offset": 2048, 00:20:56.037 "data_size": 63488 00:20:56.037 }, 00:20:56.037 { 00:20:56.037 "name": "BaseBdev3", 00:20:56.037 "uuid": "20407ab7-4769-5325-8154-d1353e8756cf", 00:20:56.037 "is_configured": true, 00:20:56.037 "data_offset": 2048, 00:20:56.037 "data_size": 63488 00:20:56.037 }, 00:20:56.037 { 00:20:56.037 "name": "BaseBdev4", 00:20:56.037 "uuid": "e02befbc-7767-5cdf-828d-771f09a874fc", 00:20:56.037 "is_configured": true, 00:20:56.037 "data_offset": 2048, 00:20:56.037 "data_size": 63488 00:20:56.037 } 00:20:56.037 ] 00:20:56.037 }' 00:20:56.037 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:56.037 15:14:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.609 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:56.609 [2024-07-23 15:14:51.904227] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:56.609 [2024-07-23 15:14:51.904496] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:56.609 [2024-07-23 15:14:51.907126] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.609 [2024-07-23 15:14:51.907293] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.609 [2024-07-23 15:14:51.907426] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.609 [2024-07-23 15:14:51.907534] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:20:56.609 0 00:20:56.609 15:14:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 101416 00:20:56.609 15:14:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 101416 ']' 00:20:56.609 15:14:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 101416 00:20:56.609 15:14:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:20:56.609 15:14:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.609 15:14:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101416 00:20:56.609 killing process with pid 101416 00:20:56.609 15:14:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:56.609 15:14:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:56.609 15:14:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101416' 00:20:56.609 15:14:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 101416 00:20:56.609 [2024-07-23 15:14:51.957593] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:56.610 15:14:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 101416 00:20:56.610 [2024-07-23 15:14:51.992662] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:56.868 15:14:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:56.868 15:14:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.owGtb9YoMl 00:20:56.868 15:14:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:56.868 ************************************ 00:20:56.868 END TEST raid_write_error_test 00:20:56.868 ************************************ 00:20:56.868 15:14:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:20:56.868 15:14:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:20:56.868 15:14:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:56.868 15:14:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:56.869 15:14:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:20:56.869 00:20:56.869 real 0m6.477s 00:20:56.869 user 0m9.938s 00:20:56.869 sys 0m1.155s 00:20:56.869 15:14:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:56.869 15:14:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.869 15:14:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:56.869 15:14:52 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:20:56.869 15:14:52 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:20:56.869 15:14:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:56.869 15:14:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:56.869 15:14:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:57.127 ************************************ 00:20:57.127 START TEST raid_state_function_test 00:20:57.127 ************************************ 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:57.127 
15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:57.127 Process raid pid: 101591 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=101591 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 101591' 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 101591 /var/tmp/spdk-raid.sock 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 101591 ']' 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.127 15:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.127 [2024-07-23 15:14:52.377924] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:20:57.128 [2024-07-23 15:14:52.379571] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.128 [2024-07-23 15:14:52.531897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.386 [2024-07-23 15:14:52.579842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.386 [2024-07-23 15:14:52.624520] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:57.955 15:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.955 15:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:20:57.955 15:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:58.213 [2024-07-23 15:14:53.486111] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:58.213 [2024-07-23 15:14:53.486188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:58.213 [2024-07-23 15:14:53.486203] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:58.213 [2024-07-23 15:14:53.486218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:58.213 [2024-07-23 15:14:53.486229] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:58.213 [2024-07-23 15:14:53.486244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:58.213 [2024-07-23 15:14:53.486252] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:58.213 [2024-07-23 15:14:53.486267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:58.213 15:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:58.213 15:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:58.213 15:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:58.213 15:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:58.213 15:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:58.213 15:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:58.213 15:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:58.213 15:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:58.213 15:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:58.213 15:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:58.213 15:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.213 15:14:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.471 15:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:58.471 "name": "Existed_Raid", 00:20:58.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.471 "strip_size_kb": 64, 00:20:58.471 "state": "configuring", 00:20:58.471 "raid_level": "concat", 00:20:58.471 "superblock": false, 00:20:58.471 "num_base_bdevs": 4, 00:20:58.471 "num_base_bdevs_discovered": 0, 00:20:58.471 "num_base_bdevs_operational": 4, 00:20:58.471 "base_bdevs_list": [ 00:20:58.471 { 00:20:58.471 "name": "BaseBdev1", 00:20:58.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.471 "is_configured": false, 00:20:58.471 "data_offset": 0, 00:20:58.471 "data_size": 0 00:20:58.471 }, 00:20:58.471 { 00:20:58.471 "name": "BaseBdev2", 00:20:58.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.471 "is_configured": false, 00:20:58.471 "data_offset": 0, 00:20:58.471 "data_size": 0 00:20:58.471 }, 00:20:58.471 { 00:20:58.471 "name": "BaseBdev3", 00:20:58.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.471 "is_configured": false, 00:20:58.471 "data_offset": 0, 00:20:58.471 "data_size": 0 00:20:58.471 }, 00:20:58.471 { 00:20:58.471 "name": "BaseBdev4", 00:20:58.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.471 "is_configured": false, 00:20:58.471 "data_offset": 0, 00:20:58.471 "data_size": 0 00:20:58.471 } 00:20:58.471 ] 00:20:58.471 }' 00:20:58.471 15:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:58.471 15:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.729 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:58.988 [2024-07-23 15:14:54.166130] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:58.988 [2024-07-23 15:14:54.166196] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:20:58.988 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:58.988 [2024-07-23 15:14:54.346206] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:58.988 [2024-07-23 15:14:54.346276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:58.988 [2024-07-23 15:14:54.346287] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:58.988 [2024-07-23 15:14:54.346300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:58.988 [2024-07-23 15:14:54.346308] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:58.988 [2024-07-23 15:14:54.346320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:58.988 [2024-07-23 15:14:54.346328] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:58.988 [2024-07-23 15:14:54.346340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:58.988 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:59.246 [2024-07-23 15:14:54.603921] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.246 BaseBdev1 00:20:59.246 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:59.246 15:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:59.246 15:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:59.246 15:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:59.246 15:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:59.246 15:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:59.246 15:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:59.504 15:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:59.763 [ 00:20:59.763 { 00:20:59.763 "name": "BaseBdev1", 00:20:59.763 "aliases": [ 00:20:59.763 "3806521e-4253-479d-b527-a759c83bc342" 00:20:59.763 ], 00:20:59.763 "product_name": "Malloc disk", 00:20:59.763 "block_size": 512, 00:20:59.763 "num_blocks": 65536, 00:20:59.763 "uuid": "3806521e-4253-479d-b527-a759c83bc342", 00:20:59.763 "assigned_rate_limits": { 00:20:59.763 "rw_ios_per_sec": 0, 00:20:59.763 "rw_mbytes_per_sec": 0, 00:20:59.763 "r_mbytes_per_sec": 0, 00:20:59.763 "w_mbytes_per_sec": 0 00:20:59.763 }, 00:20:59.763 "claimed": true, 00:20:59.763 "claim_type": "exclusive_write", 00:20:59.763 "zoned": false, 00:20:59.763 "supported_io_types": { 00:20:59.763 "read": true, 00:20:59.763 "write": true, 00:20:59.763 "unmap": true, 00:20:59.763 "flush": true, 00:20:59.763 "reset": true, 00:20:59.763 "nvme_admin": false, 00:20:59.763 "nvme_io": false, 00:20:59.763 "nvme_io_md": false, 00:20:59.763 "write_zeroes": true, 00:20:59.763 "zcopy": true, 00:20:59.763 "get_zone_info": false, 00:20:59.763 "zone_management": false, 00:20:59.763 "zone_append": false, 00:20:59.763 "compare": false, 00:20:59.763 "compare_and_write": false, 00:20:59.763 "abort": true, 00:20:59.763 "seek_hole": false, 00:20:59.763 "seek_data": false, 00:20:59.763 "copy": true, 00:20:59.763 "nvme_iov_md": false 00:20:59.763 }, 00:20:59.763 "memory_domains": [ 00:20:59.763 { 00:20:59.763 "dma_device_id": "system", 00:20:59.763 "dma_device_type": 1 00:20:59.763 }, 00:20:59.763 { 00:20:59.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.763 "dma_device_type": 2 00:20:59.763 } 00:20:59.763 ], 00:20:59.763 "driver_specific": {} 00:20:59.763 } 00:20:59.763 ] 00:20:59.763 15:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:59.763 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:59.763 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:59.763 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:59.763 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=concat 00:20:59.763 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:59.763 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:59.763 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:59.763 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:59.763 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:59.763 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:59.763 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.763 15:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.021 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:00.021 "name": "Existed_Raid", 00:21:00.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.021 "strip_size_kb": 64, 00:21:00.021 "state": "configuring", 00:21:00.021 "raid_level": "concat", 00:21:00.021 "superblock": false, 00:21:00.021 "num_base_bdevs": 4, 00:21:00.021 "num_base_bdevs_discovered": 1, 00:21:00.021 "num_base_bdevs_operational": 4, 00:21:00.021 "base_bdevs_list": [ 00:21:00.021 { 00:21:00.021 "name": "BaseBdev1", 00:21:00.021 "uuid": "3806521e-4253-479d-b527-a759c83bc342", 00:21:00.021 "is_configured": true, 00:21:00.021 "data_offset": 0, 00:21:00.021 "data_size": 65536 00:21:00.021 }, 00:21:00.021 { 00:21:00.021 "name": "BaseBdev2", 00:21:00.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.021 "is_configured": false, 00:21:00.021 "data_offset": 0, 00:21:00.021 "data_size": 0 00:21:00.021 }, 00:21:00.021 { 00:21:00.021 "name": "BaseBdev3", 00:21:00.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.021 "is_configured": false, 00:21:00.021 "data_offset": 0, 00:21:00.021 "data_size": 0 00:21:00.021 }, 00:21:00.021 { 00:21:00.021 "name": "BaseBdev4", 00:21:00.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.021 "is_configured": false, 00:21:00.021 "data_offset": 0, 00:21:00.021 "data_size": 0 00:21:00.021 } 00:21:00.021 ] 00:21:00.021 }' 00:21:00.021 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:00.021 15:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.280 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:00.280 [2024-07-23 15:14:55.652197] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:00.280 [2024-07-23 15:14:55.652270] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:21:00.280 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:00.538 [2024-07-23 15:14:55.832338] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:00.538 [2024-07-23 15:14:55.834717] bdev.c:8190:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:21:00.538 [2024-07-23 15:14:55.834907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:00.538 [2024-07-23 15:14:55.834929] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:00.538 [2024-07-23 15:14:55.834944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:00.538 [2024-07-23 15:14:55.834952] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:00.538 [2024-07-23 15:14:55.834964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:00.538 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:00.538 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:00.538 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:00.538 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:00.538 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:00.538 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:00.538 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:00.538 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:00.538 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:00.538 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:00.538 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:00.538 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:00.538 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.538 15:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.797 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:00.797 "name": "Existed_Raid", 00:21:00.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.797 "strip_size_kb": 64, 00:21:00.797 "state": "configuring", 00:21:00.797 "raid_level": "concat", 00:21:00.797 "superblock": false, 00:21:00.797 "num_base_bdevs": 4, 00:21:00.797 "num_base_bdevs_discovered": 1, 00:21:00.797 "num_base_bdevs_operational": 4, 00:21:00.797 "base_bdevs_list": [ 00:21:00.797 { 00:21:00.797 "name": "BaseBdev1", 00:21:00.797 "uuid": "3806521e-4253-479d-b527-a759c83bc342", 00:21:00.797 "is_configured": true, 00:21:00.797 "data_offset": 0, 00:21:00.797 "data_size": 65536 00:21:00.797 }, 00:21:00.797 { 00:21:00.797 "name": "BaseBdev2", 00:21:00.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.797 "is_configured": false, 00:21:00.797 "data_offset": 0, 00:21:00.797 "data_size": 0 00:21:00.797 }, 00:21:00.797 { 00:21:00.797 "name": "BaseBdev3", 00:21:00.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.797 "is_configured": false, 00:21:00.797 "data_offset": 0, 00:21:00.797 "data_size": 0 
00:21:00.797 }, 00:21:00.797 { 00:21:00.797 "name": "BaseBdev4", 00:21:00.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.797 "is_configured": false, 00:21:00.797 "data_offset": 0, 00:21:00.797 "data_size": 0 00:21:00.797 } 00:21:00.797 ] 00:21:00.797 }' 00:21:00.797 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:00.797 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.055 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:01.313 [2024-07-23 15:14:56.595605] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:01.313 BaseBdev2 00:21:01.313 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:01.313 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:01.313 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:01.313 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:01.313 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:01.313 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:01.313 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:01.571 [ 00:21:01.571 { 00:21:01.571 "name": "BaseBdev2", 00:21:01.571 "aliases": [ 00:21:01.571 "9c850779-2e6b-4c3d-8418-518696bafaeb" 00:21:01.571 ], 00:21:01.571 "product_name": "Malloc disk", 00:21:01.571 "block_size": 512, 00:21:01.571 "num_blocks": 65536, 00:21:01.571 "uuid": "9c850779-2e6b-4c3d-8418-518696bafaeb", 00:21:01.571 "assigned_rate_limits": { 00:21:01.571 "rw_ios_per_sec": 0, 00:21:01.571 "rw_mbytes_per_sec": 0, 00:21:01.571 "r_mbytes_per_sec": 0, 00:21:01.571 "w_mbytes_per_sec": 0 00:21:01.571 }, 00:21:01.571 "claimed": true, 00:21:01.571 "claim_type": "exclusive_write", 00:21:01.571 "zoned": false, 00:21:01.571 "supported_io_types": { 00:21:01.571 "read": true, 00:21:01.571 "write": true, 00:21:01.571 "unmap": true, 00:21:01.571 "flush": true, 00:21:01.571 "reset": true, 00:21:01.571 "nvme_admin": false, 00:21:01.571 "nvme_io": false, 00:21:01.571 "nvme_io_md": false, 00:21:01.571 "write_zeroes": true, 00:21:01.571 "zcopy": true, 00:21:01.571 "get_zone_info": false, 00:21:01.571 "zone_management": false, 00:21:01.571 "zone_append": false, 00:21:01.571 "compare": false, 00:21:01.571 "compare_and_write": false, 00:21:01.571 "abort": true, 00:21:01.571 "seek_hole": false, 00:21:01.571 "seek_data": false, 00:21:01.571 "copy": true, 00:21:01.571 "nvme_iov_md": false 00:21:01.571 }, 00:21:01.571 "memory_domains": [ 00:21:01.571 { 00:21:01.571 "dma_device_id": "system", 00:21:01.571 "dma_device_type": 1 00:21:01.571 }, 00:21:01.571 { 00:21:01.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.571 "dma_device_type": 2 00:21:01.571 } 00:21:01.571 ], 00:21:01.571 "driver_specific": {} 00:21:01.571 } 00:21:01.571 ] 00:21:01.571 
15:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.571 15:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.830 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:01.830 "name": "Existed_Raid", 00:21:01.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.830 "strip_size_kb": 64, 00:21:01.830 "state": "configuring", 00:21:01.830 "raid_level": "concat", 00:21:01.830 "superblock": false, 00:21:01.830 "num_base_bdevs": 4, 00:21:01.830 "num_base_bdevs_discovered": 2, 00:21:01.830 "num_base_bdevs_operational": 4, 00:21:01.830 "base_bdevs_list": [ 00:21:01.830 { 00:21:01.830 "name": "BaseBdev1", 00:21:01.830 "uuid": "3806521e-4253-479d-b527-a759c83bc342", 00:21:01.830 "is_configured": true, 00:21:01.830 "data_offset": 0, 00:21:01.830 "data_size": 65536 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "name": "BaseBdev2", 00:21:01.830 "uuid": "9c850779-2e6b-4c3d-8418-518696bafaeb", 00:21:01.830 "is_configured": true, 00:21:01.830 "data_offset": 0, 00:21:01.830 "data_size": 65536 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "name": "BaseBdev3", 00:21:01.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.830 "is_configured": false, 00:21:01.830 "data_offset": 0, 00:21:01.830 "data_size": 0 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "name": "BaseBdev4", 00:21:01.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.830 "is_configured": false, 00:21:01.830 "data_offset": 0, 00:21:01.830 "data_size": 0 00:21:01.830 } 00:21:01.830 ] 00:21:01.830 }' 00:21:01.830 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:01.830 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.397 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:02.397 [2024-07-23 15:14:57.763157] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:02.397 BaseBdev3 00:21:02.397 15:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:02.397 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:02.397 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:02.397 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:02.397 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:02.397 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:02.397 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:02.655 15:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:02.914 [ 00:21:02.914 { 00:21:02.914 "name": "BaseBdev3", 00:21:02.914 "aliases": [ 00:21:02.914 "9abe6cf7-2142-4d02-b0aa-b78ce9a9e57d" 00:21:02.914 ], 00:21:02.914 "product_name": "Malloc disk", 00:21:02.914 "block_size": 512, 00:21:02.914 "num_blocks": 65536, 00:21:02.914 "uuid": "9abe6cf7-2142-4d02-b0aa-b78ce9a9e57d", 00:21:02.914 "assigned_rate_limits": { 00:21:02.914 "rw_ios_per_sec": 0, 00:21:02.914 "rw_mbytes_per_sec": 0, 00:21:02.914 "r_mbytes_per_sec": 0, 00:21:02.914 "w_mbytes_per_sec": 0 00:21:02.914 }, 00:21:02.914 "claimed": true, 00:21:02.914 "claim_type": "exclusive_write", 00:21:02.914 "zoned": false, 00:21:02.914 "supported_io_types": { 00:21:02.914 "read": true, 00:21:02.914 "write": true, 00:21:02.914 "unmap": true, 00:21:02.914 "flush": true, 00:21:02.914 "reset": true, 00:21:02.914 "nvme_admin": false, 00:21:02.914 "nvme_io": false, 00:21:02.914 "nvme_io_md": false, 00:21:02.914 "write_zeroes": true, 00:21:02.914 "zcopy": true, 00:21:02.914 "get_zone_info": false, 00:21:02.914 "zone_management": false, 00:21:02.914 "zone_append": false, 00:21:02.914 "compare": false, 00:21:02.914 "compare_and_write": false, 00:21:02.914 "abort": true, 00:21:02.914 "seek_hole": false, 00:21:02.914 "seek_data": false, 00:21:02.914 "copy": true, 00:21:02.914 "nvme_iov_md": false 00:21:02.914 }, 00:21:02.914 "memory_domains": [ 00:21:02.914 { 00:21:02.914 "dma_device_id": "system", 00:21:02.914 "dma_device_type": 1 00:21:02.914 }, 00:21:02.914 { 00:21:02.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.914 "dma_device_type": 2 00:21:02.914 } 00:21:02.914 ], 00:21:02.914 "driver_specific": {} 00:21:02.914 } 00:21:02.914 ] 00:21:02.914 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:02.914 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:02.914 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:02.914 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:02.914 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:02.914 15:14:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:02.914 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:02.914 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:02.914 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:02.914 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.914 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.914 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.914 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.914 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.914 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:03.172 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:03.172 "name": "Existed_Raid", 00:21:03.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.172 "strip_size_kb": 64, 00:21:03.172 "state": "configuring", 00:21:03.172 "raid_level": "concat", 00:21:03.172 "superblock": false, 00:21:03.172 "num_base_bdevs": 4, 00:21:03.172 "num_base_bdevs_discovered": 3, 00:21:03.172 "num_base_bdevs_operational": 4, 00:21:03.172 "base_bdevs_list": [ 00:21:03.172 { 00:21:03.172 "name": "BaseBdev1", 00:21:03.172 "uuid": "3806521e-4253-479d-b527-a759c83bc342", 00:21:03.172 "is_configured": true, 00:21:03.172 "data_offset": 0, 00:21:03.172 "data_size": 65536 00:21:03.172 }, 00:21:03.172 { 00:21:03.172 "name": "BaseBdev2", 00:21:03.172 "uuid": "9c850779-2e6b-4c3d-8418-518696bafaeb", 00:21:03.172 "is_configured": true, 00:21:03.173 "data_offset": 0, 00:21:03.173 "data_size": 65536 00:21:03.173 }, 00:21:03.173 { 00:21:03.173 "name": "BaseBdev3", 00:21:03.173 "uuid": "9abe6cf7-2142-4d02-b0aa-b78ce9a9e57d", 00:21:03.173 "is_configured": true, 00:21:03.173 "data_offset": 0, 00:21:03.173 "data_size": 65536 00:21:03.173 }, 00:21:03.173 { 00:21:03.173 "name": "BaseBdev4", 00:21:03.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.173 "is_configured": false, 00:21:03.173 "data_offset": 0, 00:21:03.173 "data_size": 0 00:21:03.173 } 00:21:03.173 ] 00:21:03.173 }' 00:21:03.173 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:03.173 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.431 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:03.689 [2024-07-23 15:14:58.878779] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:03.689 [2024-07-23 15:14:58.879064] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:21:03.689 [2024-07-23 15:14:58.879110] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:03.689 [2024-07-23 15:14:58.879228] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:21:03.689 [2024-07-23 
15:14:58.879591] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:21:03.689 [2024-07-23 15:14:58.879608] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:21:03.689 [2024-07-23 15:14:58.879821] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.689 BaseBdev4 00:21:03.689 15:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:03.689 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:03.689 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:03.689 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:03.689 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:03.689 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:03.689 15:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:03.948 [ 00:21:03.948 { 00:21:03.948 "name": "BaseBdev4", 00:21:03.948 "aliases": [ 00:21:03.948 "b50b6402-3b29-4651-b514-52730f27e3fd" 00:21:03.948 ], 00:21:03.948 "product_name": "Malloc disk", 00:21:03.948 "block_size": 512, 00:21:03.948 "num_blocks": 65536, 00:21:03.948 "uuid": "b50b6402-3b29-4651-b514-52730f27e3fd", 00:21:03.948 "assigned_rate_limits": { 00:21:03.948 "rw_ios_per_sec": 0, 00:21:03.948 "rw_mbytes_per_sec": 0, 00:21:03.948 "r_mbytes_per_sec": 0, 00:21:03.948 "w_mbytes_per_sec": 0 00:21:03.948 }, 00:21:03.948 "claimed": true, 00:21:03.948 "claim_type": "exclusive_write", 00:21:03.948 "zoned": false, 00:21:03.948 "supported_io_types": { 00:21:03.948 "read": true, 00:21:03.948 "write": true, 00:21:03.948 "unmap": true, 00:21:03.948 "flush": true, 00:21:03.948 "reset": true, 00:21:03.948 "nvme_admin": false, 00:21:03.948 "nvme_io": false, 00:21:03.948 "nvme_io_md": false, 00:21:03.948 "write_zeroes": true, 00:21:03.948 "zcopy": true, 00:21:03.948 "get_zone_info": false, 00:21:03.948 "zone_management": false, 00:21:03.948 "zone_append": false, 00:21:03.948 "compare": false, 00:21:03.948 "compare_and_write": false, 00:21:03.948 "abort": true, 00:21:03.948 "seek_hole": false, 00:21:03.948 "seek_data": false, 00:21:03.948 "copy": true, 00:21:03.948 "nvme_iov_md": false 00:21:03.948 }, 00:21:03.948 "memory_domains": [ 00:21:03.948 { 00:21:03.948 "dma_device_id": "system", 00:21:03.948 "dma_device_type": 1 00:21:03.948 }, 00:21:03.948 { 00:21:03.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.948 "dma_device_type": 2 00:21:03.948 } 00:21:03.948 ], 00:21:03.948 "driver_specific": {} 00:21:03.948 } 00:21:03.948 ] 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid 
online concat 64 4 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.948 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:04.207 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:04.207 "name": "Existed_Raid", 00:21:04.207 "uuid": "24048f05-95c0-47b4-a92e-87b29426c30f", 00:21:04.207 "strip_size_kb": 64, 00:21:04.207 "state": "online", 00:21:04.207 "raid_level": "concat", 00:21:04.207 "superblock": false, 00:21:04.207 "num_base_bdevs": 4, 00:21:04.207 "num_base_bdevs_discovered": 4, 00:21:04.207 "num_base_bdevs_operational": 4, 00:21:04.207 "base_bdevs_list": [ 00:21:04.207 { 00:21:04.207 "name": "BaseBdev1", 00:21:04.207 "uuid": "3806521e-4253-479d-b527-a759c83bc342", 00:21:04.207 "is_configured": true, 00:21:04.207 "data_offset": 0, 00:21:04.207 "data_size": 65536 00:21:04.207 }, 00:21:04.207 { 00:21:04.207 "name": "BaseBdev2", 00:21:04.207 "uuid": "9c850779-2e6b-4c3d-8418-518696bafaeb", 00:21:04.207 "is_configured": true, 00:21:04.207 "data_offset": 0, 00:21:04.207 "data_size": 65536 00:21:04.207 }, 00:21:04.207 { 00:21:04.207 "name": "BaseBdev3", 00:21:04.207 "uuid": "9abe6cf7-2142-4d02-b0aa-b78ce9a9e57d", 00:21:04.207 "is_configured": true, 00:21:04.207 "data_offset": 0, 00:21:04.207 "data_size": 65536 00:21:04.207 }, 00:21:04.207 { 00:21:04.207 "name": "BaseBdev4", 00:21:04.207 "uuid": "b50b6402-3b29-4651-b514-52730f27e3fd", 00:21:04.207 "is_configured": true, 00:21:04.207 "data_offset": 0, 00:21:04.207 "data_size": 65536 00:21:04.207 } 00:21:04.207 ] 00:21:04.207 }' 00:21:04.207 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:04.207 15:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.797 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:04.797 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:04.797 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:04.797 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:04.797 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:21:04.797 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:04.797 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:04.797 15:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:04.797 [2024-07-23 15:15:00.167497] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:04.797 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:04.797 "name": "Existed_Raid", 00:21:04.797 "aliases": [ 00:21:04.797 "24048f05-95c0-47b4-a92e-87b29426c30f" 00:21:04.797 ], 00:21:04.797 "product_name": "Raid Volume", 00:21:04.797 "block_size": 512, 00:21:04.797 "num_blocks": 262144, 00:21:04.797 "uuid": "24048f05-95c0-47b4-a92e-87b29426c30f", 00:21:04.797 "assigned_rate_limits": { 00:21:04.797 "rw_ios_per_sec": 0, 00:21:04.797 "rw_mbytes_per_sec": 0, 00:21:04.797 "r_mbytes_per_sec": 0, 00:21:04.797 "w_mbytes_per_sec": 0 00:21:04.797 }, 00:21:04.797 "claimed": false, 00:21:04.797 "zoned": false, 00:21:04.797 "supported_io_types": { 00:21:04.797 "read": true, 00:21:04.797 "write": true, 00:21:04.797 "unmap": true, 00:21:04.797 "flush": true, 00:21:04.797 "reset": true, 00:21:04.797 "nvme_admin": false, 00:21:04.797 "nvme_io": false, 00:21:04.797 "nvme_io_md": false, 00:21:04.797 "write_zeroes": true, 00:21:04.797 "zcopy": false, 00:21:04.797 "get_zone_info": false, 00:21:04.797 "zone_management": false, 00:21:04.797 "zone_append": false, 00:21:04.797 "compare": false, 00:21:04.797 "compare_and_write": false, 00:21:04.797 "abort": false, 00:21:04.797 "seek_hole": false, 00:21:04.797 "seek_data": false, 00:21:04.797 "copy": false, 00:21:04.797 "nvme_iov_md": false 00:21:04.797 }, 00:21:04.797 "memory_domains": [ 00:21:04.797 { 00:21:04.797 "dma_device_id": "system", 00:21:04.797 "dma_device_type": 1 00:21:04.797 }, 00:21:04.797 { 00:21:04.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.797 "dma_device_type": 2 00:21:04.797 }, 00:21:04.797 { 00:21:04.797 "dma_device_id": "system", 00:21:04.797 "dma_device_type": 1 00:21:04.797 }, 00:21:04.797 { 00:21:04.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.797 "dma_device_type": 2 00:21:04.797 }, 00:21:04.797 { 00:21:04.797 "dma_device_id": "system", 00:21:04.797 "dma_device_type": 1 00:21:04.797 }, 00:21:04.797 { 00:21:04.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.797 "dma_device_type": 2 00:21:04.797 }, 00:21:04.797 { 00:21:04.797 "dma_device_id": "system", 00:21:04.797 "dma_device_type": 1 00:21:04.797 }, 00:21:04.797 { 00:21:04.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.797 "dma_device_type": 2 00:21:04.797 } 00:21:04.797 ], 00:21:04.797 "driver_specific": { 00:21:04.797 "raid": { 00:21:04.797 "uuid": "24048f05-95c0-47b4-a92e-87b29426c30f", 00:21:04.797 "strip_size_kb": 64, 00:21:04.797 "state": "online", 00:21:04.797 "raid_level": "concat", 00:21:04.797 "superblock": false, 00:21:04.797 "num_base_bdevs": 4, 00:21:04.797 "num_base_bdevs_discovered": 4, 00:21:04.798 "num_base_bdevs_operational": 4, 00:21:04.798 "base_bdevs_list": [ 00:21:04.798 { 00:21:04.798 "name": "BaseBdev1", 00:21:04.798 "uuid": "3806521e-4253-479d-b527-a759c83bc342", 00:21:04.798 "is_configured": true, 00:21:04.798 "data_offset": 0, 00:21:04.798 "data_size": 65536 00:21:04.798 }, 00:21:04.798 { 00:21:04.798 "name": "BaseBdev2", 00:21:04.798 "uuid": 
"9c850779-2e6b-4c3d-8418-518696bafaeb", 00:21:04.798 "is_configured": true, 00:21:04.798 "data_offset": 0, 00:21:04.798 "data_size": 65536 00:21:04.798 }, 00:21:04.798 { 00:21:04.798 "name": "BaseBdev3", 00:21:04.798 "uuid": "9abe6cf7-2142-4d02-b0aa-b78ce9a9e57d", 00:21:04.798 "is_configured": true, 00:21:04.798 "data_offset": 0, 00:21:04.798 "data_size": 65536 00:21:04.798 }, 00:21:04.798 { 00:21:04.798 "name": "BaseBdev4", 00:21:04.798 "uuid": "b50b6402-3b29-4651-b514-52730f27e3fd", 00:21:04.798 "is_configured": true, 00:21:04.798 "data_offset": 0, 00:21:04.798 "data_size": 65536 00:21:04.798 } 00:21:04.798 ] 00:21:04.798 } 00:21:04.798 } 00:21:04.798 }' 00:21:04.798 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:04.798 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:04.798 BaseBdev2 00:21:04.798 BaseBdev3 00:21:04.798 BaseBdev4' 00:21:04.798 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:04.798 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:04.798 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:05.057 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:05.057 "name": "BaseBdev1", 00:21:05.057 "aliases": [ 00:21:05.057 "3806521e-4253-479d-b527-a759c83bc342" 00:21:05.057 ], 00:21:05.057 "product_name": "Malloc disk", 00:21:05.057 "block_size": 512, 00:21:05.057 "num_blocks": 65536, 00:21:05.057 "uuid": "3806521e-4253-479d-b527-a759c83bc342", 00:21:05.057 "assigned_rate_limits": { 00:21:05.057 "rw_ios_per_sec": 0, 00:21:05.057 "rw_mbytes_per_sec": 0, 00:21:05.057 "r_mbytes_per_sec": 0, 00:21:05.057 "w_mbytes_per_sec": 0 00:21:05.057 }, 00:21:05.057 "claimed": true, 00:21:05.057 "claim_type": "exclusive_write", 00:21:05.057 "zoned": false, 00:21:05.057 "supported_io_types": { 00:21:05.057 "read": true, 00:21:05.057 "write": true, 00:21:05.057 "unmap": true, 00:21:05.057 "flush": true, 00:21:05.057 "reset": true, 00:21:05.057 "nvme_admin": false, 00:21:05.057 "nvme_io": false, 00:21:05.057 "nvme_io_md": false, 00:21:05.057 "write_zeroes": true, 00:21:05.057 "zcopy": true, 00:21:05.057 "get_zone_info": false, 00:21:05.057 "zone_management": false, 00:21:05.057 "zone_append": false, 00:21:05.057 "compare": false, 00:21:05.057 "compare_and_write": false, 00:21:05.057 "abort": true, 00:21:05.057 "seek_hole": false, 00:21:05.057 "seek_data": false, 00:21:05.057 "copy": true, 00:21:05.057 "nvme_iov_md": false 00:21:05.057 }, 00:21:05.057 "memory_domains": [ 00:21:05.057 { 00:21:05.057 "dma_device_id": "system", 00:21:05.057 "dma_device_type": 1 00:21:05.057 }, 00:21:05.057 { 00:21:05.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.057 "dma_device_type": 2 00:21:05.057 } 00:21:05.057 ], 00:21:05.057 "driver_specific": {} 00:21:05.057 }' 00:21:05.057 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.317 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.317 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:05.317 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:21:05.317 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.317 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:05.317 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.317 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.317 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:05.317 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.317 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.317 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:05.317 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:05.317 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:05.317 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:05.576 "name": "BaseBdev2", 00:21:05.576 "aliases": [ 00:21:05.576 "9c850779-2e6b-4c3d-8418-518696bafaeb" 00:21:05.576 ], 00:21:05.576 "product_name": "Malloc disk", 00:21:05.576 "block_size": 512, 00:21:05.576 "num_blocks": 65536, 00:21:05.576 "uuid": "9c850779-2e6b-4c3d-8418-518696bafaeb", 00:21:05.576 "assigned_rate_limits": { 00:21:05.576 "rw_ios_per_sec": 0, 00:21:05.576 "rw_mbytes_per_sec": 0, 00:21:05.576 "r_mbytes_per_sec": 0, 00:21:05.576 "w_mbytes_per_sec": 0 00:21:05.576 }, 00:21:05.576 "claimed": true, 00:21:05.576 "claim_type": "exclusive_write", 00:21:05.576 "zoned": false, 00:21:05.576 "supported_io_types": { 00:21:05.576 "read": true, 00:21:05.576 "write": true, 00:21:05.576 "unmap": true, 00:21:05.576 "flush": true, 00:21:05.576 "reset": true, 00:21:05.576 "nvme_admin": false, 00:21:05.576 "nvme_io": false, 00:21:05.576 "nvme_io_md": false, 00:21:05.576 "write_zeroes": true, 00:21:05.576 "zcopy": true, 00:21:05.576 "get_zone_info": false, 00:21:05.576 "zone_management": false, 00:21:05.576 "zone_append": false, 00:21:05.576 "compare": false, 00:21:05.576 "compare_and_write": false, 00:21:05.576 "abort": true, 00:21:05.576 "seek_hole": false, 00:21:05.576 "seek_data": false, 00:21:05.576 "copy": true, 00:21:05.576 "nvme_iov_md": false 00:21:05.576 }, 00:21:05.576 "memory_domains": [ 00:21:05.576 { 00:21:05.576 "dma_device_id": "system", 00:21:05.576 "dma_device_type": 1 00:21:05.576 }, 00:21:05.576 { 00:21:05.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.576 "dma_device_type": 2 00:21:05.576 } 00:21:05.576 ], 00:21:05.576 "driver_specific": {} 00:21:05.576 }' 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == 
null ]] 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:05.576 15:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:05.835 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:05.835 "name": "BaseBdev3", 00:21:05.835 "aliases": [ 00:21:05.835 "9abe6cf7-2142-4d02-b0aa-b78ce9a9e57d" 00:21:05.835 ], 00:21:05.835 "product_name": "Malloc disk", 00:21:05.835 "block_size": 512, 00:21:05.835 "num_blocks": 65536, 00:21:05.835 "uuid": "9abe6cf7-2142-4d02-b0aa-b78ce9a9e57d", 00:21:05.835 "assigned_rate_limits": { 00:21:05.835 "rw_ios_per_sec": 0, 00:21:05.835 "rw_mbytes_per_sec": 0, 00:21:05.835 "r_mbytes_per_sec": 0, 00:21:05.835 "w_mbytes_per_sec": 0 00:21:05.835 }, 00:21:05.835 "claimed": true, 00:21:05.835 "claim_type": "exclusive_write", 00:21:05.835 "zoned": false, 00:21:05.835 "supported_io_types": { 00:21:05.835 "read": true, 00:21:05.835 "write": true, 00:21:05.835 "unmap": true, 00:21:05.835 "flush": true, 00:21:05.835 "reset": true, 00:21:05.835 "nvme_admin": false, 00:21:05.835 "nvme_io": false, 00:21:05.835 "nvme_io_md": false, 00:21:05.835 "write_zeroes": true, 00:21:05.835 "zcopy": true, 00:21:05.835 "get_zone_info": false, 00:21:05.835 "zone_management": false, 00:21:05.835 "zone_append": false, 00:21:05.835 "compare": false, 00:21:05.835 "compare_and_write": false, 00:21:05.835 "abort": true, 00:21:05.835 "seek_hole": false, 00:21:05.835 "seek_data": false, 00:21:05.835 "copy": true, 00:21:05.835 "nvme_iov_md": false 00:21:05.835 }, 00:21:05.835 "memory_domains": [ 00:21:05.835 { 00:21:05.835 "dma_device_id": "system", 00:21:05.835 "dma_device_type": 1 00:21:05.835 }, 00:21:05.835 { 00:21:05.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.835 "dma_device_type": 2 00:21:05.835 } 00:21:05.835 ], 00:21:05.835 "driver_specific": {} 00:21:05.835 }' 00:21:05.835 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.835 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.835 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:05.835 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.835 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.835 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:05.835 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.835 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:21:05.835 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:06.094 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.095 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.095 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:06.095 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:06.095 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:06.095 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:06.354 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:06.354 "name": "BaseBdev4", 00:21:06.354 "aliases": [ 00:21:06.354 "b50b6402-3b29-4651-b514-52730f27e3fd" 00:21:06.354 ], 00:21:06.354 "product_name": "Malloc disk", 00:21:06.354 "block_size": 512, 00:21:06.354 "num_blocks": 65536, 00:21:06.354 "uuid": "b50b6402-3b29-4651-b514-52730f27e3fd", 00:21:06.354 "assigned_rate_limits": { 00:21:06.354 "rw_ios_per_sec": 0, 00:21:06.354 "rw_mbytes_per_sec": 0, 00:21:06.354 "r_mbytes_per_sec": 0, 00:21:06.354 "w_mbytes_per_sec": 0 00:21:06.354 }, 00:21:06.354 "claimed": true, 00:21:06.354 "claim_type": "exclusive_write", 00:21:06.354 "zoned": false, 00:21:06.354 "supported_io_types": { 00:21:06.354 "read": true, 00:21:06.354 "write": true, 00:21:06.354 "unmap": true, 00:21:06.354 "flush": true, 00:21:06.354 "reset": true, 00:21:06.354 "nvme_admin": false, 00:21:06.354 "nvme_io": false, 00:21:06.354 "nvme_io_md": false, 00:21:06.354 "write_zeroes": true, 00:21:06.354 "zcopy": true, 00:21:06.354 "get_zone_info": false, 00:21:06.354 "zone_management": false, 00:21:06.354 "zone_append": false, 00:21:06.354 "compare": false, 00:21:06.354 "compare_and_write": false, 00:21:06.354 "abort": true, 00:21:06.354 "seek_hole": false, 00:21:06.354 "seek_data": false, 00:21:06.354 "copy": true, 00:21:06.354 "nvme_iov_md": false 00:21:06.354 }, 00:21:06.354 "memory_domains": [ 00:21:06.354 { 00:21:06.354 "dma_device_id": "system", 00:21:06.354 "dma_device_type": 1 00:21:06.354 }, 00:21:06.354 { 00:21:06.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.354 "dma_device_type": 2 00:21:06.354 } 00:21:06.354 ], 00:21:06.354 "driver_specific": {} 00:21:06.354 }' 00:21:06.354 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.354 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.354 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:06.354 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.354 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.354 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:06.354 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.354 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.354 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:06.354 15:15:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.354 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.354 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:06.354 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:06.613 [2024-07-23 15:15:01.831638] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:06.613 [2024-07-23 15:15:01.831691] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:06.613 [2024-07-23 15:15:01.831760] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.613 15:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.872 15:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:06.872 "name": "Existed_Raid", 00:21:06.872 "uuid": "24048f05-95c0-47b4-a92e-87b29426c30f", 00:21:06.872 "strip_size_kb": 64, 00:21:06.872 "state": "offline", 00:21:06.872 "raid_level": "concat", 00:21:06.872 "superblock": false, 00:21:06.872 "num_base_bdevs": 4, 00:21:06.872 "num_base_bdevs_discovered": 3, 00:21:06.872 "num_base_bdevs_operational": 3, 00:21:06.872 "base_bdevs_list": [ 00:21:06.872 { 00:21:06.872 "name": null, 00:21:06.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.872 "is_configured": false, 00:21:06.872 "data_offset": 0, 00:21:06.872 "data_size": 65536 00:21:06.872 }, 00:21:06.872 { 00:21:06.872 "name": "BaseBdev2", 
00:21:06.872 "uuid": "9c850779-2e6b-4c3d-8418-518696bafaeb", 00:21:06.872 "is_configured": true, 00:21:06.872 "data_offset": 0, 00:21:06.872 "data_size": 65536 00:21:06.872 }, 00:21:06.872 { 00:21:06.872 "name": "BaseBdev3", 00:21:06.872 "uuid": "9abe6cf7-2142-4d02-b0aa-b78ce9a9e57d", 00:21:06.872 "is_configured": true, 00:21:06.872 "data_offset": 0, 00:21:06.872 "data_size": 65536 00:21:06.872 }, 00:21:06.872 { 00:21:06.872 "name": "BaseBdev4", 00:21:06.872 "uuid": "b50b6402-3b29-4651-b514-52730f27e3fd", 00:21:06.872 "is_configured": true, 00:21:06.872 "data_offset": 0, 00:21:06.872 "data_size": 65536 00:21:06.872 } 00:21:06.872 ] 00:21:06.872 }' 00:21:06.872 15:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:06.872 15:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.131 15:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:07.131 15:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:07.131 15:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.131 15:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:07.391 15:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:07.391 15:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:07.391 15:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:07.649 [2024-07-23 15:15:02.956694] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:07.649 15:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:07.649 15:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:07.649 15:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.649 15:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:07.907 15:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:07.907 15:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:07.907 15:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:08.166 [2024-07-23 15:15:03.393320] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:08.166 15:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:08.166 15:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:08.166 15:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.166 15:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:08.425 15:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:08.425 15:15:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:08.425 15:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:08.425 [2024-07-23 15:15:03.765894] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:08.425 [2024-07-23 15:15:03.765964] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:21:08.425 15:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:08.425 15:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:08.425 15:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:08.425 15:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.683 15:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:08.683 15:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:08.683 15:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:08.683 15:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:08.683 15:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:08.683 15:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:08.942 BaseBdev2 00:21:08.942 15:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:08.942 15:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:08.942 15:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:08.942 15:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:08.942 15:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:08.942 15:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:08.942 15:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:09.199 15:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:09.457 [ 00:21:09.457 { 00:21:09.457 "name": "BaseBdev2", 00:21:09.457 "aliases": [ 00:21:09.457 "00d67b44-e8ba-46ca-9396-eb8aa57c9177" 00:21:09.457 ], 00:21:09.457 "product_name": "Malloc disk", 00:21:09.457 "block_size": 512, 00:21:09.457 "num_blocks": 65536, 00:21:09.457 "uuid": "00d67b44-e8ba-46ca-9396-eb8aa57c9177", 00:21:09.457 "assigned_rate_limits": { 00:21:09.457 "rw_ios_per_sec": 0, 00:21:09.457 "rw_mbytes_per_sec": 0, 00:21:09.457 "r_mbytes_per_sec": 0, 00:21:09.457 "w_mbytes_per_sec": 0 00:21:09.457 }, 00:21:09.457 "claimed": false, 00:21:09.457 "zoned": false, 00:21:09.457 "supported_io_types": { 00:21:09.457 "read": true, 00:21:09.457 "write": true, 00:21:09.457 "unmap": 
true, 00:21:09.457 "flush": true, 00:21:09.457 "reset": true, 00:21:09.457 "nvme_admin": false, 00:21:09.457 "nvme_io": false, 00:21:09.458 "nvme_io_md": false, 00:21:09.458 "write_zeroes": true, 00:21:09.458 "zcopy": true, 00:21:09.458 "get_zone_info": false, 00:21:09.458 "zone_management": false, 00:21:09.458 "zone_append": false, 00:21:09.458 "compare": false, 00:21:09.458 "compare_and_write": false, 00:21:09.458 "abort": true, 00:21:09.458 "seek_hole": false, 00:21:09.458 "seek_data": false, 00:21:09.458 "copy": true, 00:21:09.458 "nvme_iov_md": false 00:21:09.458 }, 00:21:09.458 "memory_domains": [ 00:21:09.458 { 00:21:09.458 "dma_device_id": "system", 00:21:09.458 "dma_device_type": 1 00:21:09.458 }, 00:21:09.458 { 00:21:09.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.458 "dma_device_type": 2 00:21:09.458 } 00:21:09.458 ], 00:21:09.458 "driver_specific": {} 00:21:09.458 } 00:21:09.458 ] 00:21:09.458 15:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:09.458 15:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:09.458 15:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:09.458 15:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:09.458 BaseBdev3 00:21:09.458 15:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:09.458 15:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:09.458 15:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:09.458 15:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:09.458 15:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:09.458 15:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:09.458 15:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:09.716 15:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:09.974 [ 00:21:09.974 { 00:21:09.974 "name": "BaseBdev3", 00:21:09.974 "aliases": [ 00:21:09.974 "cec48545-b0ad-4817-bc5a-5b0d14afcd72" 00:21:09.974 ], 00:21:09.974 "product_name": "Malloc disk", 00:21:09.974 "block_size": 512, 00:21:09.974 "num_blocks": 65536, 00:21:09.974 "uuid": "cec48545-b0ad-4817-bc5a-5b0d14afcd72", 00:21:09.974 "assigned_rate_limits": { 00:21:09.974 "rw_ios_per_sec": 0, 00:21:09.974 "rw_mbytes_per_sec": 0, 00:21:09.974 "r_mbytes_per_sec": 0, 00:21:09.974 "w_mbytes_per_sec": 0 00:21:09.974 }, 00:21:09.974 "claimed": false, 00:21:09.974 "zoned": false, 00:21:09.974 "supported_io_types": { 00:21:09.974 "read": true, 00:21:09.974 "write": true, 00:21:09.974 "unmap": true, 00:21:09.974 "flush": true, 00:21:09.974 "reset": true, 00:21:09.974 "nvme_admin": false, 00:21:09.974 "nvme_io": false, 00:21:09.974 "nvme_io_md": false, 00:21:09.974 "write_zeroes": true, 00:21:09.974 "zcopy": true, 00:21:09.974 "get_zone_info": false, 00:21:09.974 "zone_management": false, 00:21:09.974 "zone_append": false, 00:21:09.974 
"compare": false, 00:21:09.974 "compare_and_write": false, 00:21:09.974 "abort": true, 00:21:09.974 "seek_hole": false, 00:21:09.974 "seek_data": false, 00:21:09.974 "copy": true, 00:21:09.974 "nvme_iov_md": false 00:21:09.974 }, 00:21:09.974 "memory_domains": [ 00:21:09.974 { 00:21:09.974 "dma_device_id": "system", 00:21:09.974 "dma_device_type": 1 00:21:09.974 }, 00:21:09.974 { 00:21:09.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.974 "dma_device_type": 2 00:21:09.974 } 00:21:09.974 ], 00:21:09.974 "driver_specific": {} 00:21:09.974 } 00:21:09.974 ] 00:21:09.974 15:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:09.974 15:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:09.974 15:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:09.974 15:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:09.974 BaseBdev4 00:21:10.232 15:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:10.232 15:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:10.232 15:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:10.232 15:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:10.232 15:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:10.232 15:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:10.232 15:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:10.232 15:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:10.490 [ 00:21:10.490 { 00:21:10.490 "name": "BaseBdev4", 00:21:10.490 "aliases": [ 00:21:10.490 "6af4513b-fbbf-4e8d-8d85-69ab8b302763" 00:21:10.490 ], 00:21:10.490 "product_name": "Malloc disk", 00:21:10.490 "block_size": 512, 00:21:10.490 "num_blocks": 65536, 00:21:10.490 "uuid": "6af4513b-fbbf-4e8d-8d85-69ab8b302763", 00:21:10.490 "assigned_rate_limits": { 00:21:10.490 "rw_ios_per_sec": 0, 00:21:10.490 "rw_mbytes_per_sec": 0, 00:21:10.490 "r_mbytes_per_sec": 0, 00:21:10.490 "w_mbytes_per_sec": 0 00:21:10.490 }, 00:21:10.490 "claimed": false, 00:21:10.490 "zoned": false, 00:21:10.490 "supported_io_types": { 00:21:10.490 "read": true, 00:21:10.490 "write": true, 00:21:10.490 "unmap": true, 00:21:10.490 "flush": true, 00:21:10.490 "reset": true, 00:21:10.490 "nvme_admin": false, 00:21:10.490 "nvme_io": false, 00:21:10.490 "nvme_io_md": false, 00:21:10.490 "write_zeroes": true, 00:21:10.490 "zcopy": true, 00:21:10.490 "get_zone_info": false, 00:21:10.490 "zone_management": false, 00:21:10.490 "zone_append": false, 00:21:10.490 "compare": false, 00:21:10.490 "compare_and_write": false, 00:21:10.490 "abort": true, 00:21:10.490 "seek_hole": false, 00:21:10.490 "seek_data": false, 00:21:10.490 "copy": true, 00:21:10.490 "nvme_iov_md": false 00:21:10.490 }, 00:21:10.490 "memory_domains": [ 00:21:10.490 { 00:21:10.490 "dma_device_id": "system", 00:21:10.490 
"dma_device_type": 1 00:21:10.490 }, 00:21:10.490 { 00:21:10.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.491 "dma_device_type": 2 00:21:10.491 } 00:21:10.491 ], 00:21:10.491 "driver_specific": {} 00:21:10.491 } 00:21:10.491 ] 00:21:10.491 15:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:10.491 15:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:10.491 15:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:10.491 15:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:10.749 [2024-07-23 15:15:05.995308] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:10.749 [2024-07-23 15:15:05.995554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:10.749 [2024-07-23 15:15:05.995605] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:10.749 [2024-07-23 15:15:05.997764] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:10.749 [2024-07-23 15:15:05.997834] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:10.749 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:10.749 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:10.749 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:10.749 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:10.749 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:10.749 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:10.749 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:10.749 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:10.749 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:10.749 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:10.749 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.749 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.007 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:11.007 "name": "Existed_Raid", 00:21:11.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.007 "strip_size_kb": 64, 00:21:11.007 "state": "configuring", 00:21:11.007 "raid_level": "concat", 00:21:11.007 "superblock": false, 00:21:11.007 "num_base_bdevs": 4, 00:21:11.007 "num_base_bdevs_discovered": 3, 00:21:11.007 "num_base_bdevs_operational": 4, 00:21:11.007 "base_bdevs_list": [ 00:21:11.007 { 00:21:11.007 "name": "BaseBdev1", 00:21:11.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.007 
"is_configured": false, 00:21:11.007 "data_offset": 0, 00:21:11.007 "data_size": 0 00:21:11.007 }, 00:21:11.007 { 00:21:11.007 "name": "BaseBdev2", 00:21:11.007 "uuid": "00d67b44-e8ba-46ca-9396-eb8aa57c9177", 00:21:11.007 "is_configured": true, 00:21:11.007 "data_offset": 0, 00:21:11.007 "data_size": 65536 00:21:11.007 }, 00:21:11.007 { 00:21:11.007 "name": "BaseBdev3", 00:21:11.007 "uuid": "cec48545-b0ad-4817-bc5a-5b0d14afcd72", 00:21:11.007 "is_configured": true, 00:21:11.007 "data_offset": 0, 00:21:11.007 "data_size": 65536 00:21:11.007 }, 00:21:11.007 { 00:21:11.007 "name": "BaseBdev4", 00:21:11.007 "uuid": "6af4513b-fbbf-4e8d-8d85-69ab8b302763", 00:21:11.007 "is_configured": true, 00:21:11.007 "data_offset": 0, 00:21:11.007 "data_size": 65536 00:21:11.007 } 00:21:11.007 ] 00:21:11.007 }' 00:21:11.007 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:11.007 15:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.264 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:11.522 [2024-07-23 15:15:06.787437] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:11.522 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:11.522 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:11.522 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:11.522 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:11.522 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:11.522 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:11.522 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:11.522 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:11.522 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:11.522 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:11.522 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.522 15:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.781 15:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:11.781 "name": "Existed_Raid", 00:21:11.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.781 "strip_size_kb": 64, 00:21:11.781 "state": "configuring", 00:21:11.781 "raid_level": "concat", 00:21:11.781 "superblock": false, 00:21:11.781 "num_base_bdevs": 4, 00:21:11.781 "num_base_bdevs_discovered": 2, 00:21:11.781 "num_base_bdevs_operational": 4, 00:21:11.781 "base_bdevs_list": [ 00:21:11.781 { 00:21:11.781 "name": "BaseBdev1", 00:21:11.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.781 "is_configured": false, 00:21:11.781 "data_offset": 0, 00:21:11.781 "data_size": 0 00:21:11.781 }, 00:21:11.781 { 00:21:11.781 "name": null, 
00:21:11.781 "uuid": "00d67b44-e8ba-46ca-9396-eb8aa57c9177", 00:21:11.781 "is_configured": false, 00:21:11.781 "data_offset": 0, 00:21:11.781 "data_size": 65536 00:21:11.781 }, 00:21:11.781 { 00:21:11.781 "name": "BaseBdev3", 00:21:11.781 "uuid": "cec48545-b0ad-4817-bc5a-5b0d14afcd72", 00:21:11.781 "is_configured": true, 00:21:11.781 "data_offset": 0, 00:21:11.781 "data_size": 65536 00:21:11.781 }, 00:21:11.781 { 00:21:11.781 "name": "BaseBdev4", 00:21:11.781 "uuid": "6af4513b-fbbf-4e8d-8d85-69ab8b302763", 00:21:11.781 "is_configured": true, 00:21:11.781 "data_offset": 0, 00:21:11.781 "data_size": 65536 00:21:11.781 } 00:21:11.781 ] 00:21:11.781 }' 00:21:11.781 15:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:11.781 15:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.060 15:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:12.060 15:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.332 15:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:12.332 15:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:12.588 BaseBdev1 00:21:12.588 [2024-07-23 15:15:07.826963] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:12.588 15:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:12.588 15:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:12.588 15:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:12.588 15:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:12.588 15:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:12.588 15:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:12.588 15:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:12.588 15:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:12.845 [ 00:21:12.845 { 00:21:12.845 "name": "BaseBdev1", 00:21:12.845 "aliases": [ 00:21:12.845 "ed623265-9d36-47f7-a59a-ad103eda93c7" 00:21:12.845 ], 00:21:12.845 "product_name": "Malloc disk", 00:21:12.845 "block_size": 512, 00:21:12.845 "num_blocks": 65536, 00:21:12.845 "uuid": "ed623265-9d36-47f7-a59a-ad103eda93c7", 00:21:12.845 "assigned_rate_limits": { 00:21:12.845 "rw_ios_per_sec": 0, 00:21:12.845 "rw_mbytes_per_sec": 0, 00:21:12.845 "r_mbytes_per_sec": 0, 00:21:12.845 "w_mbytes_per_sec": 0 00:21:12.845 }, 00:21:12.845 "claimed": true, 00:21:12.845 "claim_type": "exclusive_write", 00:21:12.845 "zoned": false, 00:21:12.845 "supported_io_types": { 00:21:12.845 "read": true, 00:21:12.845 "write": true, 00:21:12.845 "unmap": true, 00:21:12.845 "flush": true, 00:21:12.845 "reset": true, 00:21:12.845 "nvme_admin": false, 00:21:12.845 "nvme_io": 
false, 00:21:12.845 "nvme_io_md": false, 00:21:12.845 "write_zeroes": true, 00:21:12.845 "zcopy": true, 00:21:12.845 "get_zone_info": false, 00:21:12.845 "zone_management": false, 00:21:12.845 "zone_append": false, 00:21:12.845 "compare": false, 00:21:12.845 "compare_and_write": false, 00:21:12.845 "abort": true, 00:21:12.845 "seek_hole": false, 00:21:12.845 "seek_data": false, 00:21:12.845 "copy": true, 00:21:12.845 "nvme_iov_md": false 00:21:12.845 }, 00:21:12.845 "memory_domains": [ 00:21:12.845 { 00:21:12.845 "dma_device_id": "system", 00:21:12.845 "dma_device_type": 1 00:21:12.845 }, 00:21:12.845 { 00:21:12.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.845 "dma_device_type": 2 00:21:12.845 } 00:21:12.845 ], 00:21:12.845 "driver_specific": {} 00:21:12.845 } 00:21:12.845 ] 00:21:12.845 15:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:12.845 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:12.845 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:12.845 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:12.845 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:12.845 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:12.845 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:12.845 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:12.845 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:12.845 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:12.845 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:12.845 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.845 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.102 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:13.102 "name": "Existed_Raid", 00:21:13.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.102 "strip_size_kb": 64, 00:21:13.102 "state": "configuring", 00:21:13.102 "raid_level": "concat", 00:21:13.102 "superblock": false, 00:21:13.102 "num_base_bdevs": 4, 00:21:13.102 "num_base_bdevs_discovered": 3, 00:21:13.102 "num_base_bdevs_operational": 4, 00:21:13.102 "base_bdevs_list": [ 00:21:13.102 { 00:21:13.102 "name": "BaseBdev1", 00:21:13.103 "uuid": "ed623265-9d36-47f7-a59a-ad103eda93c7", 00:21:13.103 "is_configured": true, 00:21:13.103 "data_offset": 0, 00:21:13.103 "data_size": 65536 00:21:13.103 }, 00:21:13.103 { 00:21:13.103 "name": null, 00:21:13.103 "uuid": "00d67b44-e8ba-46ca-9396-eb8aa57c9177", 00:21:13.103 "is_configured": false, 00:21:13.103 "data_offset": 0, 00:21:13.103 "data_size": 65536 00:21:13.103 }, 00:21:13.103 { 00:21:13.103 "name": "BaseBdev3", 00:21:13.103 "uuid": "cec48545-b0ad-4817-bc5a-5b0d14afcd72", 00:21:13.103 "is_configured": true, 00:21:13.103 "data_offset": 0, 00:21:13.103 "data_size": 65536 00:21:13.103 }, 
00:21:13.103 { 00:21:13.103 "name": "BaseBdev4", 00:21:13.103 "uuid": "6af4513b-fbbf-4e8d-8d85-69ab8b302763", 00:21:13.103 "is_configured": true, 00:21:13.103 "data_offset": 0, 00:21:13.103 "data_size": 65536 00:21:13.103 } 00:21:13.103 ] 00:21:13.103 }' 00:21:13.103 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:13.103 15:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.361 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.361 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:13.618 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:13.618 15:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:13.877 [2024-07-23 15:15:09.179341] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:13.877 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:13.877 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:13.877 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:13.877 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:13.877 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:13.877 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:13.877 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:13.877 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:13.877 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:13.877 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:13.877 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.877 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.135 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:14.135 "name": "Existed_Raid", 00:21:14.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.135 "strip_size_kb": 64, 00:21:14.135 "state": "configuring", 00:21:14.135 "raid_level": "concat", 00:21:14.135 "superblock": false, 00:21:14.135 "num_base_bdevs": 4, 00:21:14.135 "num_base_bdevs_discovered": 2, 00:21:14.135 "num_base_bdevs_operational": 4, 00:21:14.135 "base_bdevs_list": [ 00:21:14.135 { 00:21:14.135 "name": "BaseBdev1", 00:21:14.135 "uuid": "ed623265-9d36-47f7-a59a-ad103eda93c7", 00:21:14.135 "is_configured": true, 00:21:14.135 "data_offset": 0, 00:21:14.135 "data_size": 65536 00:21:14.135 }, 00:21:14.135 { 00:21:14.135 "name": null, 00:21:14.135 "uuid": "00d67b44-e8ba-46ca-9396-eb8aa57c9177", 00:21:14.135 "is_configured": false, 00:21:14.135 "data_offset": 
0, 00:21:14.135 "data_size": 65536 00:21:14.135 }, 00:21:14.135 { 00:21:14.135 "name": null, 00:21:14.135 "uuid": "cec48545-b0ad-4817-bc5a-5b0d14afcd72", 00:21:14.135 "is_configured": false, 00:21:14.135 "data_offset": 0, 00:21:14.135 "data_size": 65536 00:21:14.135 }, 00:21:14.135 { 00:21:14.135 "name": "BaseBdev4", 00:21:14.135 "uuid": "6af4513b-fbbf-4e8d-8d85-69ab8b302763", 00:21:14.135 "is_configured": true, 00:21:14.135 "data_offset": 0, 00:21:14.135 "data_size": 65536 00:21:14.135 } 00:21:14.135 ] 00:21:14.135 }' 00:21:14.135 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:14.135 15:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.393 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.393 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:14.651 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:14.651 15:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:14.909 [2024-07-23 15:15:10.147580] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:14.909 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:14.909 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:14.909 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:14.909 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:14.909 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:14.909 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:14.909 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:14.909 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:14.909 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:14.909 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:14.909 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.909 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.167 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:15.167 "name": "Existed_Raid", 00:21:15.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.167 "strip_size_kb": 64, 00:21:15.167 "state": "configuring", 00:21:15.167 "raid_level": "concat", 00:21:15.167 "superblock": false, 00:21:15.167 "num_base_bdevs": 4, 00:21:15.167 "num_base_bdevs_discovered": 3, 00:21:15.167 "num_base_bdevs_operational": 4, 00:21:15.167 "base_bdevs_list": [ 00:21:15.167 { 00:21:15.167 "name": "BaseBdev1", 00:21:15.167 "uuid": 
"ed623265-9d36-47f7-a59a-ad103eda93c7", 00:21:15.167 "is_configured": true, 00:21:15.167 "data_offset": 0, 00:21:15.167 "data_size": 65536 00:21:15.167 }, 00:21:15.167 { 00:21:15.167 "name": null, 00:21:15.167 "uuid": "00d67b44-e8ba-46ca-9396-eb8aa57c9177", 00:21:15.167 "is_configured": false, 00:21:15.167 "data_offset": 0, 00:21:15.167 "data_size": 65536 00:21:15.167 }, 00:21:15.167 { 00:21:15.167 "name": "BaseBdev3", 00:21:15.167 "uuid": "cec48545-b0ad-4817-bc5a-5b0d14afcd72", 00:21:15.167 "is_configured": true, 00:21:15.167 "data_offset": 0, 00:21:15.167 "data_size": 65536 00:21:15.167 }, 00:21:15.167 { 00:21:15.167 "name": "BaseBdev4", 00:21:15.167 "uuid": "6af4513b-fbbf-4e8d-8d85-69ab8b302763", 00:21:15.167 "is_configured": true, 00:21:15.167 "data_offset": 0, 00:21:15.167 "data_size": 65536 00:21:15.167 } 00:21:15.167 ] 00:21:15.167 }' 00:21:15.167 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:15.167 15:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.425 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:15.425 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.683 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:15.683 15:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:15.941 [2024-07-23 15:15:11.179868] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:15.941 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:15.941 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:15.941 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:15.941 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:15.941 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:15.941 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:15.941 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:15.941 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:15.941 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:15.941 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:15.941 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.941 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.198 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:16.198 "name": "Existed_Raid", 00:21:16.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.198 "strip_size_kb": 64, 00:21:16.198 "state": "configuring", 00:21:16.198 "raid_level": 
"concat", 00:21:16.198 "superblock": false, 00:21:16.198 "num_base_bdevs": 4, 00:21:16.198 "num_base_bdevs_discovered": 2, 00:21:16.198 "num_base_bdevs_operational": 4, 00:21:16.198 "base_bdevs_list": [ 00:21:16.198 { 00:21:16.198 "name": null, 00:21:16.198 "uuid": "ed623265-9d36-47f7-a59a-ad103eda93c7", 00:21:16.199 "is_configured": false, 00:21:16.199 "data_offset": 0, 00:21:16.199 "data_size": 65536 00:21:16.199 }, 00:21:16.199 { 00:21:16.199 "name": null, 00:21:16.199 "uuid": "00d67b44-e8ba-46ca-9396-eb8aa57c9177", 00:21:16.199 "is_configured": false, 00:21:16.199 "data_offset": 0, 00:21:16.199 "data_size": 65536 00:21:16.199 }, 00:21:16.199 { 00:21:16.199 "name": "BaseBdev3", 00:21:16.199 "uuid": "cec48545-b0ad-4817-bc5a-5b0d14afcd72", 00:21:16.199 "is_configured": true, 00:21:16.199 "data_offset": 0, 00:21:16.199 "data_size": 65536 00:21:16.199 }, 00:21:16.199 { 00:21:16.199 "name": "BaseBdev4", 00:21:16.199 "uuid": "6af4513b-fbbf-4e8d-8d85-69ab8b302763", 00:21:16.199 "is_configured": true, 00:21:16.199 "data_offset": 0, 00:21:16.199 "data_size": 65536 00:21:16.199 } 00:21:16.199 ] 00:21:16.199 }' 00:21:16.199 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:16.199 15:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.457 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.457 15:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:16.714 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:16.714 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:16.972 [2024-07-23 15:15:12.264438] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:16.972 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:16.972 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:16.972 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:16.972 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:16.972 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:16.972 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:16.972 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:16.972 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:16.972 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:16.972 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:16.972 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.972 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:21:17.230 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:17.230 "name": "Existed_Raid", 00:21:17.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.230 "strip_size_kb": 64, 00:21:17.230 "state": "configuring", 00:21:17.230 "raid_level": "concat", 00:21:17.230 "superblock": false, 00:21:17.230 "num_base_bdevs": 4, 00:21:17.230 "num_base_bdevs_discovered": 3, 00:21:17.230 "num_base_bdevs_operational": 4, 00:21:17.230 "base_bdevs_list": [ 00:21:17.230 { 00:21:17.230 "name": null, 00:21:17.230 "uuid": "ed623265-9d36-47f7-a59a-ad103eda93c7", 00:21:17.230 "is_configured": false, 00:21:17.230 "data_offset": 0, 00:21:17.230 "data_size": 65536 00:21:17.230 }, 00:21:17.230 { 00:21:17.230 "name": "BaseBdev2", 00:21:17.230 "uuid": "00d67b44-e8ba-46ca-9396-eb8aa57c9177", 00:21:17.230 "is_configured": true, 00:21:17.230 "data_offset": 0, 00:21:17.230 "data_size": 65536 00:21:17.230 }, 00:21:17.230 { 00:21:17.230 "name": "BaseBdev3", 00:21:17.230 "uuid": "cec48545-b0ad-4817-bc5a-5b0d14afcd72", 00:21:17.230 "is_configured": true, 00:21:17.230 "data_offset": 0, 00:21:17.230 "data_size": 65536 00:21:17.230 }, 00:21:17.230 { 00:21:17.230 "name": "BaseBdev4", 00:21:17.230 "uuid": "6af4513b-fbbf-4e8d-8d85-69ab8b302763", 00:21:17.230 "is_configured": true, 00:21:17.230 "data_offset": 0, 00:21:17.230 "data_size": 65536 00:21:17.230 } 00:21:17.230 ] 00:21:17.230 }' 00:21:17.230 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:17.230 15:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.499 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.499 15:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:17.774 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:17.774 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:17.774 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.032 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ed623265-9d36-47f7-a59a-ad103eda93c7 00:21:18.290 [2024-07-23 15:15:13.511578] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:18.290 [2024-07-23 15:15:13.511631] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:21:18.290 [2024-07-23 15:15:13.511641] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:18.290 [2024-07-23 15:15:13.511732] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002600 00:21:18.290 [2024-07-23 15:15:13.512020] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:21:18.290 [2024-07-23 15:15:13.512036] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008180 00:21:18.290 [2024-07-23 15:15:13.512215] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.290 NewBaseBdev 00:21:18.290 15:15:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:18.290 15:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:21:18.290 15:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:18.290 15:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:18.290 15:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:18.290 15:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:18.290 15:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:18.290 15:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:18.549 [ 00:21:18.549 { 00:21:18.549 "name": "NewBaseBdev", 00:21:18.549 "aliases": [ 00:21:18.549 "ed623265-9d36-47f7-a59a-ad103eda93c7" 00:21:18.549 ], 00:21:18.549 "product_name": "Malloc disk", 00:21:18.549 "block_size": 512, 00:21:18.549 "num_blocks": 65536, 00:21:18.549 "uuid": "ed623265-9d36-47f7-a59a-ad103eda93c7", 00:21:18.549 "assigned_rate_limits": { 00:21:18.549 "rw_ios_per_sec": 0, 00:21:18.549 "rw_mbytes_per_sec": 0, 00:21:18.549 "r_mbytes_per_sec": 0, 00:21:18.549 "w_mbytes_per_sec": 0 00:21:18.549 }, 00:21:18.549 "claimed": true, 00:21:18.549 "claim_type": "exclusive_write", 00:21:18.549 "zoned": false, 00:21:18.549 "supported_io_types": { 00:21:18.549 "read": true, 00:21:18.549 "write": true, 00:21:18.549 "unmap": true, 00:21:18.549 "flush": true, 00:21:18.549 "reset": true, 00:21:18.549 "nvme_admin": false, 00:21:18.549 "nvme_io": false, 00:21:18.549 "nvme_io_md": false, 00:21:18.549 "write_zeroes": true, 00:21:18.549 "zcopy": true, 00:21:18.549 "get_zone_info": false, 00:21:18.549 "zone_management": false, 00:21:18.549 "zone_append": false, 00:21:18.549 "compare": false, 00:21:18.549 "compare_and_write": false, 00:21:18.549 "abort": true, 00:21:18.549 "seek_hole": false, 00:21:18.549 "seek_data": false, 00:21:18.549 "copy": true, 00:21:18.549 "nvme_iov_md": false 00:21:18.549 }, 00:21:18.549 "memory_domains": [ 00:21:18.549 { 00:21:18.549 "dma_device_id": "system", 00:21:18.549 "dma_device_type": 1 00:21:18.549 }, 00:21:18.549 { 00:21:18.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.549 "dma_device_type": 2 00:21:18.549 } 00:21:18.549 ], 00:21:18.549 "driver_specific": {} 00:21:18.549 } 00:21:18.549 ] 00:21:18.549 15:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:18.549 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:18.549 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:18.549 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:18.549 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:18.549 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:18.549 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:18.549 15:15:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.549 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.549 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.549 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.549 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.549 15:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.807 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:18.807 "name": "Existed_Raid", 00:21:18.807 "uuid": "52ccef87-cbb5-46bb-a653-c9787822fbf8", 00:21:18.807 "strip_size_kb": 64, 00:21:18.807 "state": "online", 00:21:18.807 "raid_level": "concat", 00:21:18.807 "superblock": false, 00:21:18.807 "num_base_bdevs": 4, 00:21:18.807 "num_base_bdevs_discovered": 4, 00:21:18.807 "num_base_bdevs_operational": 4, 00:21:18.807 "base_bdevs_list": [ 00:21:18.807 { 00:21:18.807 "name": "NewBaseBdev", 00:21:18.807 "uuid": "ed623265-9d36-47f7-a59a-ad103eda93c7", 00:21:18.807 "is_configured": true, 00:21:18.807 "data_offset": 0, 00:21:18.807 "data_size": 65536 00:21:18.807 }, 00:21:18.807 { 00:21:18.807 "name": "BaseBdev2", 00:21:18.807 "uuid": "00d67b44-e8ba-46ca-9396-eb8aa57c9177", 00:21:18.807 "is_configured": true, 00:21:18.807 "data_offset": 0, 00:21:18.807 "data_size": 65536 00:21:18.807 }, 00:21:18.807 { 00:21:18.807 "name": "BaseBdev3", 00:21:18.807 "uuid": "cec48545-b0ad-4817-bc5a-5b0d14afcd72", 00:21:18.807 "is_configured": true, 00:21:18.807 "data_offset": 0, 00:21:18.807 "data_size": 65536 00:21:18.807 }, 00:21:18.807 { 00:21:18.807 "name": "BaseBdev4", 00:21:18.807 "uuid": "6af4513b-fbbf-4e8d-8d85-69ab8b302763", 00:21:18.807 "is_configured": true, 00:21:18.807 "data_offset": 0, 00:21:18.807 "data_size": 65536 00:21:18.807 } 00:21:18.807 ] 00:21:18.807 }' 00:21:18.807 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:18.807 15:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.064 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:19.064 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:19.064 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:19.064 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:19.064 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:19.064 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:19.064 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:19.064 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:19.322 [2024-07-23 15:15:14.628370] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:19.322 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:19.322 
"name": "Existed_Raid", 00:21:19.322 "aliases": [ 00:21:19.322 "52ccef87-cbb5-46bb-a653-c9787822fbf8" 00:21:19.322 ], 00:21:19.322 "product_name": "Raid Volume", 00:21:19.322 "block_size": 512, 00:21:19.322 "num_blocks": 262144, 00:21:19.322 "uuid": "52ccef87-cbb5-46bb-a653-c9787822fbf8", 00:21:19.322 "assigned_rate_limits": { 00:21:19.322 "rw_ios_per_sec": 0, 00:21:19.322 "rw_mbytes_per_sec": 0, 00:21:19.322 "r_mbytes_per_sec": 0, 00:21:19.322 "w_mbytes_per_sec": 0 00:21:19.322 }, 00:21:19.322 "claimed": false, 00:21:19.322 "zoned": false, 00:21:19.322 "supported_io_types": { 00:21:19.322 "read": true, 00:21:19.322 "write": true, 00:21:19.322 "unmap": true, 00:21:19.322 "flush": true, 00:21:19.322 "reset": true, 00:21:19.322 "nvme_admin": false, 00:21:19.322 "nvme_io": false, 00:21:19.322 "nvme_io_md": false, 00:21:19.322 "write_zeroes": true, 00:21:19.322 "zcopy": false, 00:21:19.322 "get_zone_info": false, 00:21:19.322 "zone_management": false, 00:21:19.322 "zone_append": false, 00:21:19.322 "compare": false, 00:21:19.322 "compare_and_write": false, 00:21:19.322 "abort": false, 00:21:19.322 "seek_hole": false, 00:21:19.322 "seek_data": false, 00:21:19.322 "copy": false, 00:21:19.322 "nvme_iov_md": false 00:21:19.322 }, 00:21:19.322 "memory_domains": [ 00:21:19.322 { 00:21:19.322 "dma_device_id": "system", 00:21:19.322 "dma_device_type": 1 00:21:19.322 }, 00:21:19.322 { 00:21:19.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.322 "dma_device_type": 2 00:21:19.322 }, 00:21:19.322 { 00:21:19.322 "dma_device_id": "system", 00:21:19.323 "dma_device_type": 1 00:21:19.323 }, 00:21:19.323 { 00:21:19.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.323 "dma_device_type": 2 00:21:19.323 }, 00:21:19.323 { 00:21:19.323 "dma_device_id": "system", 00:21:19.323 "dma_device_type": 1 00:21:19.323 }, 00:21:19.323 { 00:21:19.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.323 "dma_device_type": 2 00:21:19.323 }, 00:21:19.323 { 00:21:19.323 "dma_device_id": "system", 00:21:19.323 "dma_device_type": 1 00:21:19.323 }, 00:21:19.323 { 00:21:19.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.323 "dma_device_type": 2 00:21:19.323 } 00:21:19.323 ], 00:21:19.323 "driver_specific": { 00:21:19.323 "raid": { 00:21:19.323 "uuid": "52ccef87-cbb5-46bb-a653-c9787822fbf8", 00:21:19.323 "strip_size_kb": 64, 00:21:19.323 "state": "online", 00:21:19.323 "raid_level": "concat", 00:21:19.323 "superblock": false, 00:21:19.323 "num_base_bdevs": 4, 00:21:19.323 "num_base_bdevs_discovered": 4, 00:21:19.323 "num_base_bdevs_operational": 4, 00:21:19.323 "base_bdevs_list": [ 00:21:19.323 { 00:21:19.323 "name": "NewBaseBdev", 00:21:19.323 "uuid": "ed623265-9d36-47f7-a59a-ad103eda93c7", 00:21:19.323 "is_configured": true, 00:21:19.323 "data_offset": 0, 00:21:19.323 "data_size": 65536 00:21:19.323 }, 00:21:19.323 { 00:21:19.323 "name": "BaseBdev2", 00:21:19.323 "uuid": "00d67b44-e8ba-46ca-9396-eb8aa57c9177", 00:21:19.323 "is_configured": true, 00:21:19.323 "data_offset": 0, 00:21:19.323 "data_size": 65536 00:21:19.323 }, 00:21:19.323 { 00:21:19.323 "name": "BaseBdev3", 00:21:19.323 "uuid": "cec48545-b0ad-4817-bc5a-5b0d14afcd72", 00:21:19.323 "is_configured": true, 00:21:19.323 "data_offset": 0, 00:21:19.323 "data_size": 65536 00:21:19.323 }, 00:21:19.323 { 00:21:19.323 "name": "BaseBdev4", 00:21:19.323 "uuid": "6af4513b-fbbf-4e8d-8d85-69ab8b302763", 00:21:19.323 "is_configured": true, 00:21:19.323 "data_offset": 0, 00:21:19.323 "data_size": 65536 00:21:19.323 } 00:21:19.323 ] 00:21:19.323 } 00:21:19.323 } 
00:21:19.323 }' 00:21:19.323 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:19.323 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:19.323 BaseBdev2 00:21:19.323 BaseBdev3 00:21:19.323 BaseBdev4' 00:21:19.323 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:19.323 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:19.323 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:19.581 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:19.581 "name": "NewBaseBdev", 00:21:19.581 "aliases": [ 00:21:19.581 "ed623265-9d36-47f7-a59a-ad103eda93c7" 00:21:19.581 ], 00:21:19.581 "product_name": "Malloc disk", 00:21:19.581 "block_size": 512, 00:21:19.581 "num_blocks": 65536, 00:21:19.581 "uuid": "ed623265-9d36-47f7-a59a-ad103eda93c7", 00:21:19.581 "assigned_rate_limits": { 00:21:19.581 "rw_ios_per_sec": 0, 00:21:19.581 "rw_mbytes_per_sec": 0, 00:21:19.581 "r_mbytes_per_sec": 0, 00:21:19.581 "w_mbytes_per_sec": 0 00:21:19.581 }, 00:21:19.581 "claimed": true, 00:21:19.581 "claim_type": "exclusive_write", 00:21:19.582 "zoned": false, 00:21:19.582 "supported_io_types": { 00:21:19.582 "read": true, 00:21:19.582 "write": true, 00:21:19.582 "unmap": true, 00:21:19.582 "flush": true, 00:21:19.582 "reset": true, 00:21:19.582 "nvme_admin": false, 00:21:19.582 "nvme_io": false, 00:21:19.582 "nvme_io_md": false, 00:21:19.582 "write_zeroes": true, 00:21:19.582 "zcopy": true, 00:21:19.582 "get_zone_info": false, 00:21:19.582 "zone_management": false, 00:21:19.582 "zone_append": false, 00:21:19.582 "compare": false, 00:21:19.582 "compare_and_write": false, 00:21:19.582 "abort": true, 00:21:19.582 "seek_hole": false, 00:21:19.582 "seek_data": false, 00:21:19.582 "copy": true, 00:21:19.582 "nvme_iov_md": false 00:21:19.582 }, 00:21:19.582 "memory_domains": [ 00:21:19.582 { 00:21:19.582 "dma_device_id": "system", 00:21:19.582 "dma_device_type": 1 00:21:19.582 }, 00:21:19.582 { 00:21:19.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.582 "dma_device_type": 2 00:21:19.582 } 00:21:19.582 ], 00:21:19.582 "driver_specific": {} 00:21:19.582 }' 00:21:19.582 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:19.582 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:19.582 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:19.582 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:19.582 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:19.582 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:19.582 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:19.582 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:19.582 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:19.582 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:19.582 15:15:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:19.582 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:19.582 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:19.582 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:19.582 15:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:19.840 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:19.840 "name": "BaseBdev2", 00:21:19.840 "aliases": [ 00:21:19.840 "00d67b44-e8ba-46ca-9396-eb8aa57c9177" 00:21:19.840 ], 00:21:19.840 "product_name": "Malloc disk", 00:21:19.840 "block_size": 512, 00:21:19.840 "num_blocks": 65536, 00:21:19.840 "uuid": "00d67b44-e8ba-46ca-9396-eb8aa57c9177", 00:21:19.840 "assigned_rate_limits": { 00:21:19.840 "rw_ios_per_sec": 0, 00:21:19.840 "rw_mbytes_per_sec": 0, 00:21:19.840 "r_mbytes_per_sec": 0, 00:21:19.840 "w_mbytes_per_sec": 0 00:21:19.840 }, 00:21:19.840 "claimed": true, 00:21:19.840 "claim_type": "exclusive_write", 00:21:19.840 "zoned": false, 00:21:19.840 "supported_io_types": { 00:21:19.840 "read": true, 00:21:19.840 "write": true, 00:21:19.840 "unmap": true, 00:21:19.840 "flush": true, 00:21:19.840 "reset": true, 00:21:19.840 "nvme_admin": false, 00:21:19.840 "nvme_io": false, 00:21:19.840 "nvme_io_md": false, 00:21:19.840 "write_zeroes": true, 00:21:19.840 "zcopy": true, 00:21:19.840 "get_zone_info": false, 00:21:19.840 "zone_management": false, 00:21:19.840 "zone_append": false, 00:21:19.840 "compare": false, 00:21:19.840 "compare_and_write": false, 00:21:19.840 "abort": true, 00:21:19.840 "seek_hole": false, 00:21:19.840 "seek_data": false, 00:21:19.840 "copy": true, 00:21:19.840 "nvme_iov_md": false 00:21:19.840 }, 00:21:19.840 "memory_domains": [ 00:21:19.840 { 00:21:19.840 "dma_device_id": "system", 00:21:19.840 "dma_device_type": 1 00:21:19.840 }, 00:21:19.840 { 00:21:19.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.840 "dma_device_type": 2 00:21:19.840 } 00:21:19.840 ], 00:21:19.840 "driver_specific": {} 00:21:19.840 }' 00:21:19.840 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:19.840 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:19.840 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:19.840 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:20.099 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:20.099 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:20.099 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:20.099 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:20.099 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:20.099 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:20.099 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:20.099 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:20.099 
15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:20.099 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:20.099 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:20.099 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:20.099 "name": "BaseBdev3", 00:21:20.099 "aliases": [ 00:21:20.099 "cec48545-b0ad-4817-bc5a-5b0d14afcd72" 00:21:20.099 ], 00:21:20.099 "product_name": "Malloc disk", 00:21:20.099 "block_size": 512, 00:21:20.099 "num_blocks": 65536, 00:21:20.099 "uuid": "cec48545-b0ad-4817-bc5a-5b0d14afcd72", 00:21:20.099 "assigned_rate_limits": { 00:21:20.099 "rw_ios_per_sec": 0, 00:21:20.099 "rw_mbytes_per_sec": 0, 00:21:20.099 "r_mbytes_per_sec": 0, 00:21:20.099 "w_mbytes_per_sec": 0 00:21:20.099 }, 00:21:20.099 "claimed": true, 00:21:20.099 "claim_type": "exclusive_write", 00:21:20.099 "zoned": false, 00:21:20.099 "supported_io_types": { 00:21:20.099 "read": true, 00:21:20.099 "write": true, 00:21:20.099 "unmap": true, 00:21:20.099 "flush": true, 00:21:20.099 "reset": true, 00:21:20.099 "nvme_admin": false, 00:21:20.099 "nvme_io": false, 00:21:20.099 "nvme_io_md": false, 00:21:20.099 "write_zeroes": true, 00:21:20.099 "zcopy": true, 00:21:20.099 "get_zone_info": false, 00:21:20.099 "zone_management": false, 00:21:20.099 "zone_append": false, 00:21:20.099 "compare": false, 00:21:20.099 "compare_and_write": false, 00:21:20.099 "abort": true, 00:21:20.099 "seek_hole": false, 00:21:20.099 "seek_data": false, 00:21:20.099 "copy": true, 00:21:20.099 "nvme_iov_md": false 00:21:20.099 }, 00:21:20.099 "memory_domains": [ 00:21:20.099 { 00:21:20.099 "dma_device_id": "system", 00:21:20.099 "dma_device_type": 1 00:21:20.099 }, 00:21:20.099 { 00:21:20.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.099 "dma_device_type": 2 00:21:20.099 } 00:21:20.099 ], 00:21:20.099 "driver_specific": {} 00:21:20.099 }' 00:21:20.099 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:20.099 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:20.370 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:20.370 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:20.370 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:20.370 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:20.370 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:20.370 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:20.370 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:20.370 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:20.370 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:20.370 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:20.370 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:20.370 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:20.370 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:20.629 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:20.629 "name": "BaseBdev4", 00:21:20.629 "aliases": [ 00:21:20.629 "6af4513b-fbbf-4e8d-8d85-69ab8b302763" 00:21:20.629 ], 00:21:20.629 "product_name": "Malloc disk", 00:21:20.629 "block_size": 512, 00:21:20.629 "num_blocks": 65536, 00:21:20.629 "uuid": "6af4513b-fbbf-4e8d-8d85-69ab8b302763", 00:21:20.629 "assigned_rate_limits": { 00:21:20.629 "rw_ios_per_sec": 0, 00:21:20.629 "rw_mbytes_per_sec": 0, 00:21:20.629 "r_mbytes_per_sec": 0, 00:21:20.629 "w_mbytes_per_sec": 0 00:21:20.629 }, 00:21:20.629 "claimed": true, 00:21:20.629 "claim_type": "exclusive_write", 00:21:20.629 "zoned": false, 00:21:20.629 "supported_io_types": { 00:21:20.629 "read": true, 00:21:20.629 "write": true, 00:21:20.629 "unmap": true, 00:21:20.629 "flush": true, 00:21:20.629 "reset": true, 00:21:20.629 "nvme_admin": false, 00:21:20.630 "nvme_io": false, 00:21:20.630 "nvme_io_md": false, 00:21:20.630 "write_zeroes": true, 00:21:20.630 "zcopy": true, 00:21:20.630 "get_zone_info": false, 00:21:20.630 "zone_management": false, 00:21:20.630 "zone_append": false, 00:21:20.630 "compare": false, 00:21:20.630 "compare_and_write": false, 00:21:20.630 "abort": true, 00:21:20.630 "seek_hole": false, 00:21:20.630 "seek_data": false, 00:21:20.630 "copy": true, 00:21:20.630 "nvme_iov_md": false 00:21:20.630 }, 00:21:20.630 "memory_domains": [ 00:21:20.630 { 00:21:20.630 "dma_device_id": "system", 00:21:20.630 "dma_device_type": 1 00:21:20.630 }, 00:21:20.630 { 00:21:20.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.630 "dma_device_type": 2 00:21:20.630 } 00:21:20.630 ], 00:21:20.630 "driver_specific": {} 00:21:20.630 }' 00:21:20.630 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:20.630 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:20.630 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:20.630 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:20.630 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:20.630 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:20.630 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:20.630 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:20.630 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:20.630 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:20.630 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:20.630 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:20.630 15:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:20.888 [2024-07-23 15:15:16.196345] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:20.888 [2024-07-23 15:15:16.196391] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:21:20.888 [2024-07-23 15:15:16.196473] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:20.888 [2024-07-23 15:15:16.196545] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:20.888 [2024-07-23 15:15:16.196558] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name Existed_Raid, state offline 00:21:20.888 15:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 101591 00:21:20.888 15:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 101591 ']' 00:21:20.888 15:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 101591 00:21:20.888 15:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:21:20.888 15:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.888 15:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101591 00:21:20.888 killing process with pid 101591 00:21:20.888 15:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:20.888 15:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:20.888 15:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101591' 00:21:20.888 15:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 101591 00:21:20.888 [2024-07-23 15:15:16.253836] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:20.888 15:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 101591 00:21:20.888 [2024-07-23 15:15:16.299643] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:21.147 15:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:21:21.147 00:21:21.147 real 0m24.249s 00:21:21.147 user 0m42.286s 00:21:21.147 sys 0m5.353s 00:21:21.147 ************************************ 00:21:21.147 END TEST raid_state_function_test 00:21:21.147 ************************************ 00:21:21.147 15:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:21.147 15:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.405 15:15:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:21.405 15:15:16 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:21:21.405 15:15:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:21.405 15:15:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:21.405 15:15:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:21.405 ************************************ 00:21:21.405 START TEST raid_state_function_test_sb 00:21:21.405 ************************************ 00:21:21.405 15:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:21:21.405 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:21:21.405 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:21.405 15:15:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:21.405 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:21.405 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:21.405 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:21.405 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:21.405 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:21.405 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:21.405 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:21.405 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:21.405 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:21.406 Process raid pid: 102554 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=102554 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 102554' 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 
102554 /var/tmp/spdk-raid.sock 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 102554 ']' 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:21.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.406 15:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.406 [2024-07-23 15:15:16.695757] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:21:21.406 [2024-07-23 15:15:16.696171] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.664 [2024-07-23 15:15:16.853305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.664 [2024-07-23 15:15:16.908811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.664 [2024-07-23 15:15:16.961377] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:22.229 15:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.229 15:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:21:22.229 15:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:22.488 [2024-07-23 15:15:17.877958] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:22.488 [2024-07-23 15:15:17.878022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:22.488 [2024-07-23 15:15:17.878034] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:22.488 [2024-07-23 15:15:17.878047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:22.488 [2024-07-23 15:15:17.878058] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:22.488 [2024-07-23 15:15:17.878071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:22.488 [2024-07-23 15:15:17.878079] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:22.488 [2024-07-23 15:15:17.878096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:22.488 15:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:22.488 15:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:22.488 15:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:21:22.488 15:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:22.488 15:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:22.488 15:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:22.488 15:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:22.488 15:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:22.488 15:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:22.488 15:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:22.488 15:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.488 15:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.766 15:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:22.767 "name": "Existed_Raid", 00:21:22.767 "uuid": "1d8b6568-687f-46aa-9f7e-518f858a0eb1", 00:21:22.767 "strip_size_kb": 64, 00:21:22.767 "state": "configuring", 00:21:22.767 "raid_level": "concat", 00:21:22.767 "superblock": true, 00:21:22.767 "num_base_bdevs": 4, 00:21:22.767 "num_base_bdevs_discovered": 0, 00:21:22.767 "num_base_bdevs_operational": 4, 00:21:22.767 "base_bdevs_list": [ 00:21:22.767 { 00:21:22.767 "name": "BaseBdev1", 00:21:22.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.767 "is_configured": false, 00:21:22.767 "data_offset": 0, 00:21:22.767 "data_size": 0 00:21:22.767 }, 00:21:22.767 { 00:21:22.767 "name": "BaseBdev2", 00:21:22.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.767 "is_configured": false, 00:21:22.767 "data_offset": 0, 00:21:22.767 "data_size": 0 00:21:22.767 }, 00:21:22.767 { 00:21:22.767 "name": "BaseBdev3", 00:21:22.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.767 "is_configured": false, 00:21:22.767 "data_offset": 0, 00:21:22.767 "data_size": 0 00:21:22.767 }, 00:21:22.767 { 00:21:22.767 "name": "BaseBdev4", 00:21:22.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.767 "is_configured": false, 00:21:22.767 "data_offset": 0, 00:21:22.767 "data_size": 0 00:21:22.767 } 00:21:22.767 ] 00:21:22.767 }' 00:21:22.767 15:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:22.767 15:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.333 15:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:23.333 [2024-07-23 15:15:18.637966] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:23.333 [2024-07-23 15:15:18.638200] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:21:23.333 15:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:23.591 [2024-07-23 15:15:18.898076] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:23.591 [2024-07-23 15:15:18.898139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:23.592 [2024-07-23 15:15:18.898150] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:23.592 [2024-07-23 15:15:18.898164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:23.592 [2024-07-23 15:15:18.898172] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:23.592 [2024-07-23 15:15:18.898184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:23.592 [2024-07-23 15:15:18.898192] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:23.592 [2024-07-23 15:15:18.898204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:23.592 15:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:23.850 [2024-07-23 15:15:19.151724] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:23.850 BaseBdev1 00:21:23.850 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:23.850 15:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:23.850 15:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:23.850 15:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:23.850 15:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:23.850 15:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:23.850 15:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:24.108 15:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:24.367 [ 00:21:24.367 { 00:21:24.367 "name": "BaseBdev1", 00:21:24.367 "aliases": [ 00:21:24.367 "2af8cfb2-3374-438a-a192-62c3f7413ed1" 00:21:24.367 ], 00:21:24.367 "product_name": "Malloc disk", 00:21:24.367 "block_size": 512, 00:21:24.367 "num_blocks": 65536, 00:21:24.367 "uuid": "2af8cfb2-3374-438a-a192-62c3f7413ed1", 00:21:24.367 "assigned_rate_limits": { 00:21:24.367 "rw_ios_per_sec": 0, 00:21:24.367 "rw_mbytes_per_sec": 0, 00:21:24.367 "r_mbytes_per_sec": 0, 00:21:24.367 "w_mbytes_per_sec": 0 00:21:24.367 }, 00:21:24.367 "claimed": true, 00:21:24.367 "claim_type": "exclusive_write", 00:21:24.367 "zoned": false, 00:21:24.367 "supported_io_types": { 00:21:24.367 "read": true, 00:21:24.367 "write": true, 00:21:24.367 "unmap": true, 00:21:24.367 "flush": true, 00:21:24.367 "reset": true, 00:21:24.367 "nvme_admin": false, 00:21:24.367 "nvme_io": false, 00:21:24.367 "nvme_io_md": false, 00:21:24.367 "write_zeroes": true, 00:21:24.367 "zcopy": true, 00:21:24.367 "get_zone_info": false, 00:21:24.367 "zone_management": false, 00:21:24.367 "zone_append": false, 00:21:24.367 "compare": false, 
00:21:24.367 "compare_and_write": false, 00:21:24.367 "abort": true, 00:21:24.367 "seek_hole": false, 00:21:24.367 "seek_data": false, 00:21:24.367 "copy": true, 00:21:24.367 "nvme_iov_md": false 00:21:24.367 }, 00:21:24.367 "memory_domains": [ 00:21:24.367 { 00:21:24.367 "dma_device_id": "system", 00:21:24.367 "dma_device_type": 1 00:21:24.367 }, 00:21:24.367 { 00:21:24.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.367 "dma_device_type": 2 00:21:24.367 } 00:21:24.367 ], 00:21:24.367 "driver_specific": {} 00:21:24.367 } 00:21:24.367 ] 00:21:24.367 15:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:24.367 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:24.367 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:24.367 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:24.367 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:24.367 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:24.367 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:24.367 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:24.367 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:24.367 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:24.368 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:24.368 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.368 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:24.627 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:24.627 "name": "Existed_Raid", 00:21:24.627 "uuid": "f689ad2c-526c-4bd7-8964-31430632b59b", 00:21:24.627 "strip_size_kb": 64, 00:21:24.627 "state": "configuring", 00:21:24.627 "raid_level": "concat", 00:21:24.627 "superblock": true, 00:21:24.627 "num_base_bdevs": 4, 00:21:24.627 "num_base_bdevs_discovered": 1, 00:21:24.627 "num_base_bdevs_operational": 4, 00:21:24.627 "base_bdevs_list": [ 00:21:24.627 { 00:21:24.627 "name": "BaseBdev1", 00:21:24.627 "uuid": "2af8cfb2-3374-438a-a192-62c3f7413ed1", 00:21:24.627 "is_configured": true, 00:21:24.627 "data_offset": 2048, 00:21:24.627 "data_size": 63488 00:21:24.627 }, 00:21:24.627 { 00:21:24.627 "name": "BaseBdev2", 00:21:24.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.627 "is_configured": false, 00:21:24.627 "data_offset": 0, 00:21:24.627 "data_size": 0 00:21:24.627 }, 00:21:24.627 { 00:21:24.627 "name": "BaseBdev3", 00:21:24.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.627 "is_configured": false, 00:21:24.627 "data_offset": 0, 00:21:24.627 "data_size": 0 00:21:24.627 }, 00:21:24.627 { 00:21:24.627 "name": "BaseBdev4", 00:21:24.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.627 "is_configured": false, 00:21:24.627 "data_offset": 0, 00:21:24.627 "data_size": 0 
00:21:24.627 } 00:21:24.627 ] 00:21:24.627 }' 00:21:24.627 15:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:24.627 15:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.885 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:24.885 [2024-07-23 15:15:20.304057] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:24.885 [2024-07-23 15:15:20.304123] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:21:25.144 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:25.144 [2024-07-23 15:15:20.552181] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.144 [2024-07-23 15:15:20.554646] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:25.144 [2024-07-23 15:15:20.554852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:25.144 [2024-07-23 15:15:20.554875] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:25.144 [2024-07-23 15:15:20.554891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:25.144 [2024-07-23 15:15:20.554900] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:25.144 [2024-07-23 15:15:20.554914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:25.144 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:25.144 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:25.144 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:25.144 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:25.144 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:25.144 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:25.144 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:25.144 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:25.144 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:25.144 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:25.144 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:25.144 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:25.403 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.403 15:15:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.403 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:25.403 "name": "Existed_Raid", 00:21:25.403 "uuid": "7a579ad7-6a21-4611-8b84-d9f37eaedd72", 00:21:25.403 "strip_size_kb": 64, 00:21:25.403 "state": "configuring", 00:21:25.403 "raid_level": "concat", 00:21:25.403 "superblock": true, 00:21:25.403 "num_base_bdevs": 4, 00:21:25.403 "num_base_bdevs_discovered": 1, 00:21:25.403 "num_base_bdevs_operational": 4, 00:21:25.403 "base_bdevs_list": [ 00:21:25.403 { 00:21:25.403 "name": "BaseBdev1", 00:21:25.403 "uuid": "2af8cfb2-3374-438a-a192-62c3f7413ed1", 00:21:25.403 "is_configured": true, 00:21:25.403 "data_offset": 2048, 00:21:25.403 "data_size": 63488 00:21:25.403 }, 00:21:25.403 { 00:21:25.403 "name": "BaseBdev2", 00:21:25.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.403 "is_configured": false, 00:21:25.403 "data_offset": 0, 00:21:25.403 "data_size": 0 00:21:25.403 }, 00:21:25.403 { 00:21:25.403 "name": "BaseBdev3", 00:21:25.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.403 "is_configured": false, 00:21:25.403 "data_offset": 0, 00:21:25.403 "data_size": 0 00:21:25.403 }, 00:21:25.403 { 00:21:25.403 "name": "BaseBdev4", 00:21:25.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.403 "is_configured": false, 00:21:25.403 "data_offset": 0, 00:21:25.403 "data_size": 0 00:21:25.403 } 00:21:25.403 ] 00:21:25.403 }' 00:21:25.403 15:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:25.403 15:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.661 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:25.919 [2024-07-23 15:15:21.179363] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:25.919 BaseBdev2 00:21:25.919 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:25.919 15:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:25.920 15:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:25.920 15:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:25.920 15:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:25.920 15:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:25.920 15:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:26.178 [ 00:21:26.178 { 00:21:26.178 "name": "BaseBdev2", 00:21:26.178 "aliases": [ 00:21:26.178 "c5c897f7-34eb-4f99-ac6b-c10517d3500d" 00:21:26.178 ], 00:21:26.178 "product_name": "Malloc disk", 00:21:26.178 "block_size": 512, 00:21:26.178 "num_blocks": 65536, 00:21:26.178 "uuid": "c5c897f7-34eb-4f99-ac6b-c10517d3500d", 00:21:26.178 "assigned_rate_limits": { 00:21:26.178 "rw_ios_per_sec": 0, 
00:21:26.178 "rw_mbytes_per_sec": 0, 00:21:26.178 "r_mbytes_per_sec": 0, 00:21:26.178 "w_mbytes_per_sec": 0 00:21:26.178 }, 00:21:26.178 "claimed": true, 00:21:26.178 "claim_type": "exclusive_write", 00:21:26.178 "zoned": false, 00:21:26.178 "supported_io_types": { 00:21:26.178 "read": true, 00:21:26.178 "write": true, 00:21:26.178 "unmap": true, 00:21:26.178 "flush": true, 00:21:26.178 "reset": true, 00:21:26.178 "nvme_admin": false, 00:21:26.178 "nvme_io": false, 00:21:26.178 "nvme_io_md": false, 00:21:26.178 "write_zeroes": true, 00:21:26.178 "zcopy": true, 00:21:26.178 "get_zone_info": false, 00:21:26.178 "zone_management": false, 00:21:26.178 "zone_append": false, 00:21:26.178 "compare": false, 00:21:26.178 "compare_and_write": false, 00:21:26.178 "abort": true, 00:21:26.178 "seek_hole": false, 00:21:26.178 "seek_data": false, 00:21:26.178 "copy": true, 00:21:26.178 "nvme_iov_md": false 00:21:26.178 }, 00:21:26.178 "memory_domains": [ 00:21:26.178 { 00:21:26.178 "dma_device_id": "system", 00:21:26.178 "dma_device_type": 1 00:21:26.178 }, 00:21:26.178 { 00:21:26.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.178 "dma_device_type": 2 00:21:26.178 } 00:21:26.178 ], 00:21:26.178 "driver_specific": {} 00:21:26.178 } 00:21:26.178 ] 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.178 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.437 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:26.437 "name": "Existed_Raid", 00:21:26.437 "uuid": "7a579ad7-6a21-4611-8b84-d9f37eaedd72", 00:21:26.437 "strip_size_kb": 64, 00:21:26.437 "state": "configuring", 00:21:26.437 "raid_level": "concat", 00:21:26.437 "superblock": true, 00:21:26.437 "num_base_bdevs": 4, 00:21:26.437 "num_base_bdevs_discovered": 2, 00:21:26.437 
"num_base_bdevs_operational": 4, 00:21:26.437 "base_bdevs_list": [ 00:21:26.437 { 00:21:26.437 "name": "BaseBdev1", 00:21:26.437 "uuid": "2af8cfb2-3374-438a-a192-62c3f7413ed1", 00:21:26.437 "is_configured": true, 00:21:26.437 "data_offset": 2048, 00:21:26.437 "data_size": 63488 00:21:26.437 }, 00:21:26.437 { 00:21:26.437 "name": "BaseBdev2", 00:21:26.437 "uuid": "c5c897f7-34eb-4f99-ac6b-c10517d3500d", 00:21:26.437 "is_configured": true, 00:21:26.437 "data_offset": 2048, 00:21:26.437 "data_size": 63488 00:21:26.437 }, 00:21:26.437 { 00:21:26.437 "name": "BaseBdev3", 00:21:26.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.437 "is_configured": false, 00:21:26.437 "data_offset": 0, 00:21:26.437 "data_size": 0 00:21:26.437 }, 00:21:26.437 { 00:21:26.437 "name": "BaseBdev4", 00:21:26.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.437 "is_configured": false, 00:21:26.437 "data_offset": 0, 00:21:26.437 "data_size": 0 00:21:26.437 } 00:21:26.437 ] 00:21:26.437 }' 00:21:26.437 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:26.437 15:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.696 15:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:26.954 [2024-07-23 15:15:22.230945] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:26.954 BaseBdev3 00:21:26.954 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:26.954 15:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:26.954 15:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:26.954 15:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:26.954 15:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:26.954 15:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:26.954 15:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:27.213 [ 00:21:27.213 { 00:21:27.213 "name": "BaseBdev3", 00:21:27.213 "aliases": [ 00:21:27.213 "c0c2cacd-7826-493f-9296-6286f896d3bb" 00:21:27.213 ], 00:21:27.213 "product_name": "Malloc disk", 00:21:27.213 "block_size": 512, 00:21:27.213 "num_blocks": 65536, 00:21:27.213 "uuid": "c0c2cacd-7826-493f-9296-6286f896d3bb", 00:21:27.213 "assigned_rate_limits": { 00:21:27.213 "rw_ios_per_sec": 0, 00:21:27.213 "rw_mbytes_per_sec": 0, 00:21:27.213 "r_mbytes_per_sec": 0, 00:21:27.213 "w_mbytes_per_sec": 0 00:21:27.213 }, 00:21:27.213 "claimed": true, 00:21:27.213 "claim_type": "exclusive_write", 00:21:27.213 "zoned": false, 00:21:27.213 "supported_io_types": { 00:21:27.213 "read": true, 00:21:27.213 "write": true, 00:21:27.213 "unmap": true, 00:21:27.213 "flush": true, 00:21:27.213 "reset": true, 00:21:27.213 "nvme_admin": false, 00:21:27.213 "nvme_io": false, 00:21:27.213 "nvme_io_md": false, 00:21:27.213 
"write_zeroes": true, 00:21:27.213 "zcopy": true, 00:21:27.213 "get_zone_info": false, 00:21:27.213 "zone_management": false, 00:21:27.213 "zone_append": false, 00:21:27.213 "compare": false, 00:21:27.213 "compare_and_write": false, 00:21:27.213 "abort": true, 00:21:27.213 "seek_hole": false, 00:21:27.213 "seek_data": false, 00:21:27.213 "copy": true, 00:21:27.213 "nvme_iov_md": false 00:21:27.213 }, 00:21:27.213 "memory_domains": [ 00:21:27.213 { 00:21:27.213 "dma_device_id": "system", 00:21:27.213 "dma_device_type": 1 00:21:27.213 }, 00:21:27.213 { 00:21:27.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.213 "dma_device_type": 2 00:21:27.213 } 00:21:27.213 ], 00:21:27.213 "driver_specific": {} 00:21:27.213 } 00:21:27.213 ] 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.213 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.472 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.472 "name": "Existed_Raid", 00:21:27.472 "uuid": "7a579ad7-6a21-4611-8b84-d9f37eaedd72", 00:21:27.472 "strip_size_kb": 64, 00:21:27.472 "state": "configuring", 00:21:27.472 "raid_level": "concat", 00:21:27.472 "superblock": true, 00:21:27.472 "num_base_bdevs": 4, 00:21:27.472 "num_base_bdevs_discovered": 3, 00:21:27.472 "num_base_bdevs_operational": 4, 00:21:27.472 "base_bdevs_list": [ 00:21:27.472 { 00:21:27.472 "name": "BaseBdev1", 00:21:27.472 "uuid": "2af8cfb2-3374-438a-a192-62c3f7413ed1", 00:21:27.472 "is_configured": true, 00:21:27.472 "data_offset": 2048, 00:21:27.472 "data_size": 63488 00:21:27.472 }, 00:21:27.472 { 00:21:27.472 "name": "BaseBdev2", 00:21:27.472 "uuid": "c5c897f7-34eb-4f99-ac6b-c10517d3500d", 00:21:27.472 "is_configured": true, 00:21:27.472 "data_offset": 2048, 00:21:27.472 "data_size": 63488 00:21:27.472 }, 00:21:27.472 { 
00:21:27.472 "name": "BaseBdev3", 00:21:27.472 "uuid": "c0c2cacd-7826-493f-9296-6286f896d3bb", 00:21:27.472 "is_configured": true, 00:21:27.472 "data_offset": 2048, 00:21:27.472 "data_size": 63488 00:21:27.472 }, 00:21:27.472 { 00:21:27.472 "name": "BaseBdev4", 00:21:27.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.472 "is_configured": false, 00:21:27.472 "data_offset": 0, 00:21:27.472 "data_size": 0 00:21:27.472 } 00:21:27.472 ] 00:21:27.472 }' 00:21:27.472 15:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.472 15:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.730 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:27.988 [2024-07-23 15:15:23.302489] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:27.988 [2024-07-23 15:15:23.302708] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:21:27.988 [2024-07-23 15:15:23.302725] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:27.988 [2024-07-23 15:15:23.302866] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:21:27.988 BaseBdev4 00:21:27.988 [2024-07-23 15:15:23.303217] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:21:27.988 [2024-07-23 15:15:23.303241] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:21:27.988 [2024-07-23 15:15:23.303359] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.988 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:27.988 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:27.988 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:27.988 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:27.988 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:27.988 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:27.988 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:28.247 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:28.505 [ 00:21:28.505 { 00:21:28.505 "name": "BaseBdev4", 00:21:28.505 "aliases": [ 00:21:28.505 "ed01413f-eac9-4188-bcb4-9de81ae53559" 00:21:28.505 ], 00:21:28.505 "product_name": "Malloc disk", 00:21:28.505 "block_size": 512, 00:21:28.505 "num_blocks": 65536, 00:21:28.505 "uuid": "ed01413f-eac9-4188-bcb4-9de81ae53559", 00:21:28.505 "assigned_rate_limits": { 00:21:28.505 "rw_ios_per_sec": 0, 00:21:28.505 "rw_mbytes_per_sec": 0, 00:21:28.505 "r_mbytes_per_sec": 0, 00:21:28.505 "w_mbytes_per_sec": 0 00:21:28.505 }, 00:21:28.505 "claimed": true, 00:21:28.505 "claim_type": "exclusive_write", 00:21:28.505 "zoned": false, 00:21:28.505 "supported_io_types": { 
00:21:28.505 "read": true, 00:21:28.505 "write": true, 00:21:28.505 "unmap": true, 00:21:28.505 "flush": true, 00:21:28.505 "reset": true, 00:21:28.505 "nvme_admin": false, 00:21:28.505 "nvme_io": false, 00:21:28.505 "nvme_io_md": false, 00:21:28.505 "write_zeroes": true, 00:21:28.505 "zcopy": true, 00:21:28.505 "get_zone_info": false, 00:21:28.505 "zone_management": false, 00:21:28.505 "zone_append": false, 00:21:28.505 "compare": false, 00:21:28.505 "compare_and_write": false, 00:21:28.505 "abort": true, 00:21:28.505 "seek_hole": false, 00:21:28.505 "seek_data": false, 00:21:28.505 "copy": true, 00:21:28.505 "nvme_iov_md": false 00:21:28.505 }, 00:21:28.505 "memory_domains": [ 00:21:28.505 { 00:21:28.505 "dma_device_id": "system", 00:21:28.505 "dma_device_type": 1 00:21:28.505 }, 00:21:28.505 { 00:21:28.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.505 "dma_device_type": 2 00:21:28.505 } 00:21:28.505 ], 00:21:28.505 "driver_specific": {} 00:21:28.505 } 00:21:28.505 ] 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.505 15:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.764 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:28.764 "name": "Existed_Raid", 00:21:28.764 "uuid": "7a579ad7-6a21-4611-8b84-d9f37eaedd72", 00:21:28.764 "strip_size_kb": 64, 00:21:28.764 "state": "online", 00:21:28.764 "raid_level": "concat", 00:21:28.764 "superblock": true, 00:21:28.764 "num_base_bdevs": 4, 00:21:28.764 "num_base_bdevs_discovered": 4, 00:21:28.764 "num_base_bdevs_operational": 4, 00:21:28.764 "base_bdevs_list": [ 00:21:28.764 { 00:21:28.764 "name": "BaseBdev1", 00:21:28.764 "uuid": "2af8cfb2-3374-438a-a192-62c3f7413ed1", 00:21:28.764 "is_configured": true, 00:21:28.764 "data_offset": 2048, 00:21:28.764 "data_size": 63488 00:21:28.764 }, 
00:21:28.764 { 00:21:28.764 "name": "BaseBdev2", 00:21:28.764 "uuid": "c5c897f7-34eb-4f99-ac6b-c10517d3500d", 00:21:28.764 "is_configured": true, 00:21:28.764 "data_offset": 2048, 00:21:28.764 "data_size": 63488 00:21:28.764 }, 00:21:28.764 { 00:21:28.764 "name": "BaseBdev3", 00:21:28.764 "uuid": "c0c2cacd-7826-493f-9296-6286f896d3bb", 00:21:28.764 "is_configured": true, 00:21:28.764 "data_offset": 2048, 00:21:28.764 "data_size": 63488 00:21:28.764 }, 00:21:28.764 { 00:21:28.764 "name": "BaseBdev4", 00:21:28.764 "uuid": "ed01413f-eac9-4188-bcb4-9de81ae53559", 00:21:28.764 "is_configured": true, 00:21:28.764 "data_offset": 2048, 00:21:28.764 "data_size": 63488 00:21:28.764 } 00:21:28.764 ] 00:21:28.764 }' 00:21:28.764 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:28.764 15:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.022 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:29.022 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:29.022 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:29.022 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:29.022 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:29.022 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:29.022 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:29.022 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:29.281 [2024-07-23 15:15:24.483203] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:29.281 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:29.281 "name": "Existed_Raid", 00:21:29.281 "aliases": [ 00:21:29.281 "7a579ad7-6a21-4611-8b84-d9f37eaedd72" 00:21:29.281 ], 00:21:29.281 "product_name": "Raid Volume", 00:21:29.281 "block_size": 512, 00:21:29.281 "num_blocks": 253952, 00:21:29.281 "uuid": "7a579ad7-6a21-4611-8b84-d9f37eaedd72", 00:21:29.281 "assigned_rate_limits": { 00:21:29.281 "rw_ios_per_sec": 0, 00:21:29.281 "rw_mbytes_per_sec": 0, 00:21:29.281 "r_mbytes_per_sec": 0, 00:21:29.281 "w_mbytes_per_sec": 0 00:21:29.281 }, 00:21:29.281 "claimed": false, 00:21:29.281 "zoned": false, 00:21:29.281 "supported_io_types": { 00:21:29.281 "read": true, 00:21:29.281 "write": true, 00:21:29.281 "unmap": true, 00:21:29.281 "flush": true, 00:21:29.281 "reset": true, 00:21:29.281 "nvme_admin": false, 00:21:29.281 "nvme_io": false, 00:21:29.281 "nvme_io_md": false, 00:21:29.281 "write_zeroes": true, 00:21:29.281 "zcopy": false, 00:21:29.281 "get_zone_info": false, 00:21:29.281 "zone_management": false, 00:21:29.281 "zone_append": false, 00:21:29.281 "compare": false, 00:21:29.281 "compare_and_write": false, 00:21:29.281 "abort": false, 00:21:29.281 "seek_hole": false, 00:21:29.281 "seek_data": false, 00:21:29.281 "copy": false, 00:21:29.281 "nvme_iov_md": false 00:21:29.281 }, 00:21:29.281 "memory_domains": [ 00:21:29.281 { 00:21:29.281 "dma_device_id": "system", 00:21:29.281 "dma_device_type": 1 00:21:29.281 }, 00:21:29.281 { 00:21:29.281 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.281 "dma_device_type": 2 00:21:29.281 }, 00:21:29.281 { 00:21:29.281 "dma_device_id": "system", 00:21:29.281 "dma_device_type": 1 00:21:29.281 }, 00:21:29.281 { 00:21:29.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.281 "dma_device_type": 2 00:21:29.281 }, 00:21:29.281 { 00:21:29.281 "dma_device_id": "system", 00:21:29.281 "dma_device_type": 1 00:21:29.281 }, 00:21:29.281 { 00:21:29.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.281 "dma_device_type": 2 00:21:29.281 }, 00:21:29.281 { 00:21:29.281 "dma_device_id": "system", 00:21:29.281 "dma_device_type": 1 00:21:29.281 }, 00:21:29.281 { 00:21:29.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.281 "dma_device_type": 2 00:21:29.281 } 00:21:29.281 ], 00:21:29.281 "driver_specific": { 00:21:29.281 "raid": { 00:21:29.281 "uuid": "7a579ad7-6a21-4611-8b84-d9f37eaedd72", 00:21:29.281 "strip_size_kb": 64, 00:21:29.281 "state": "online", 00:21:29.281 "raid_level": "concat", 00:21:29.281 "superblock": true, 00:21:29.281 "num_base_bdevs": 4, 00:21:29.281 "num_base_bdevs_discovered": 4, 00:21:29.281 "num_base_bdevs_operational": 4, 00:21:29.281 "base_bdevs_list": [ 00:21:29.281 { 00:21:29.281 "name": "BaseBdev1", 00:21:29.281 "uuid": "2af8cfb2-3374-438a-a192-62c3f7413ed1", 00:21:29.281 "is_configured": true, 00:21:29.281 "data_offset": 2048, 00:21:29.281 "data_size": 63488 00:21:29.281 }, 00:21:29.281 { 00:21:29.281 "name": "BaseBdev2", 00:21:29.281 "uuid": "c5c897f7-34eb-4f99-ac6b-c10517d3500d", 00:21:29.281 "is_configured": true, 00:21:29.281 "data_offset": 2048, 00:21:29.281 "data_size": 63488 00:21:29.281 }, 00:21:29.281 { 00:21:29.281 "name": "BaseBdev3", 00:21:29.281 "uuid": "c0c2cacd-7826-493f-9296-6286f896d3bb", 00:21:29.281 "is_configured": true, 00:21:29.281 "data_offset": 2048, 00:21:29.281 "data_size": 63488 00:21:29.281 }, 00:21:29.281 { 00:21:29.281 "name": "BaseBdev4", 00:21:29.281 "uuid": "ed01413f-eac9-4188-bcb4-9de81ae53559", 00:21:29.281 "is_configured": true, 00:21:29.281 "data_offset": 2048, 00:21:29.281 "data_size": 63488 00:21:29.281 } 00:21:29.281 ] 00:21:29.281 } 00:21:29.281 } 00:21:29.281 }' 00:21:29.281 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:29.281 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:29.281 BaseBdev2 00:21:29.281 BaseBdev3 00:21:29.281 BaseBdev4' 00:21:29.281 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:29.281 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:29.281 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:29.540 "name": "BaseBdev1", 00:21:29.540 "aliases": [ 00:21:29.540 "2af8cfb2-3374-438a-a192-62c3f7413ed1" 00:21:29.540 ], 00:21:29.540 "product_name": "Malloc disk", 00:21:29.540 "block_size": 512, 00:21:29.540 "num_blocks": 65536, 00:21:29.540 "uuid": "2af8cfb2-3374-438a-a192-62c3f7413ed1", 00:21:29.540 "assigned_rate_limits": { 00:21:29.540 "rw_ios_per_sec": 0, 00:21:29.540 "rw_mbytes_per_sec": 0, 00:21:29.540 "r_mbytes_per_sec": 0, 00:21:29.540 "w_mbytes_per_sec": 0 00:21:29.540 }, 00:21:29.540 
"claimed": true, 00:21:29.540 "claim_type": "exclusive_write", 00:21:29.540 "zoned": false, 00:21:29.540 "supported_io_types": { 00:21:29.540 "read": true, 00:21:29.540 "write": true, 00:21:29.540 "unmap": true, 00:21:29.540 "flush": true, 00:21:29.540 "reset": true, 00:21:29.540 "nvme_admin": false, 00:21:29.540 "nvme_io": false, 00:21:29.540 "nvme_io_md": false, 00:21:29.540 "write_zeroes": true, 00:21:29.540 "zcopy": true, 00:21:29.540 "get_zone_info": false, 00:21:29.540 "zone_management": false, 00:21:29.540 "zone_append": false, 00:21:29.540 "compare": false, 00:21:29.540 "compare_and_write": false, 00:21:29.540 "abort": true, 00:21:29.540 "seek_hole": false, 00:21:29.540 "seek_data": false, 00:21:29.540 "copy": true, 00:21:29.540 "nvme_iov_md": false 00:21:29.540 }, 00:21:29.540 "memory_domains": [ 00:21:29.540 { 00:21:29.540 "dma_device_id": "system", 00:21:29.540 "dma_device_type": 1 00:21:29.540 }, 00:21:29.540 { 00:21:29.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.540 "dma_device_type": 2 00:21:29.540 } 00:21:29.540 ], 00:21:29.540 "driver_specific": {} 00:21:29.540 }' 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:29.540 15:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:29.798 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:29.798 "name": "BaseBdev2", 00:21:29.798 "aliases": [ 00:21:29.798 "c5c897f7-34eb-4f99-ac6b-c10517d3500d" 00:21:29.798 ], 00:21:29.798 "product_name": "Malloc disk", 00:21:29.798 "block_size": 512, 00:21:29.798 "num_blocks": 65536, 00:21:29.798 "uuid": "c5c897f7-34eb-4f99-ac6b-c10517d3500d", 00:21:29.798 "assigned_rate_limits": { 00:21:29.798 "rw_ios_per_sec": 0, 00:21:29.798 "rw_mbytes_per_sec": 0, 00:21:29.798 "r_mbytes_per_sec": 0, 00:21:29.798 "w_mbytes_per_sec": 0 00:21:29.798 }, 00:21:29.798 "claimed": true, 00:21:29.798 "claim_type": "exclusive_write", 00:21:29.798 "zoned": false, 00:21:29.798 "supported_io_types": { 00:21:29.798 "read": 
true, 00:21:29.798 "write": true, 00:21:29.798 "unmap": true, 00:21:29.798 "flush": true, 00:21:29.798 "reset": true, 00:21:29.798 "nvme_admin": false, 00:21:29.798 "nvme_io": false, 00:21:29.798 "nvme_io_md": false, 00:21:29.798 "write_zeroes": true, 00:21:29.798 "zcopy": true, 00:21:29.798 "get_zone_info": false, 00:21:29.798 "zone_management": false, 00:21:29.798 "zone_append": false, 00:21:29.798 "compare": false, 00:21:29.798 "compare_and_write": false, 00:21:29.798 "abort": true, 00:21:29.798 "seek_hole": false, 00:21:29.798 "seek_data": false, 00:21:29.798 "copy": true, 00:21:29.798 "nvme_iov_md": false 00:21:29.798 }, 00:21:29.798 "memory_domains": [ 00:21:29.798 { 00:21:29.798 "dma_device_id": "system", 00:21:29.798 "dma_device_type": 1 00:21:29.799 }, 00:21:29.799 { 00:21:29.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.799 "dma_device_type": 2 00:21:29.799 } 00:21:29.799 ], 00:21:29.799 "driver_specific": {} 00:21:29.799 }' 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:29.799 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:30.366 "name": "BaseBdev3", 00:21:30.366 "aliases": [ 00:21:30.366 "c0c2cacd-7826-493f-9296-6286f896d3bb" 00:21:30.366 ], 00:21:30.366 "product_name": "Malloc disk", 00:21:30.366 "block_size": 512, 00:21:30.366 "num_blocks": 65536, 00:21:30.366 "uuid": "c0c2cacd-7826-493f-9296-6286f896d3bb", 00:21:30.366 "assigned_rate_limits": { 00:21:30.366 "rw_ios_per_sec": 0, 00:21:30.366 "rw_mbytes_per_sec": 0, 00:21:30.366 "r_mbytes_per_sec": 0, 00:21:30.366 "w_mbytes_per_sec": 0 00:21:30.366 }, 00:21:30.366 "claimed": true, 00:21:30.366 "claim_type": "exclusive_write", 00:21:30.366 "zoned": false, 00:21:30.366 "supported_io_types": { 00:21:30.366 "read": true, 00:21:30.366 "write": true, 00:21:30.366 "unmap": true, 00:21:30.366 "flush": true, 00:21:30.366 "reset": true, 00:21:30.366 "nvme_admin": false, 
00:21:30.366 "nvme_io": false, 00:21:30.366 "nvme_io_md": false, 00:21:30.366 "write_zeroes": true, 00:21:30.366 "zcopy": true, 00:21:30.366 "get_zone_info": false, 00:21:30.366 "zone_management": false, 00:21:30.366 "zone_append": false, 00:21:30.366 "compare": false, 00:21:30.366 "compare_and_write": false, 00:21:30.366 "abort": true, 00:21:30.366 "seek_hole": false, 00:21:30.366 "seek_data": false, 00:21:30.366 "copy": true, 00:21:30.366 "nvme_iov_md": false 00:21:30.366 }, 00:21:30.366 "memory_domains": [ 00:21:30.366 { 00:21:30.366 "dma_device_id": "system", 00:21:30.366 "dma_device_type": 1 00:21:30.366 }, 00:21:30.366 { 00:21:30.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.366 "dma_device_type": 2 00:21:30.366 } 00:21:30.366 ], 00:21:30.366 "driver_specific": {} 00:21:30.366 }' 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:30.366 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:30.624 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:30.624 "name": "BaseBdev4", 00:21:30.624 "aliases": [ 00:21:30.624 "ed01413f-eac9-4188-bcb4-9de81ae53559" 00:21:30.624 ], 00:21:30.624 "product_name": "Malloc disk", 00:21:30.624 "block_size": 512, 00:21:30.624 "num_blocks": 65536, 00:21:30.624 "uuid": "ed01413f-eac9-4188-bcb4-9de81ae53559", 00:21:30.624 "assigned_rate_limits": { 00:21:30.624 "rw_ios_per_sec": 0, 00:21:30.624 "rw_mbytes_per_sec": 0, 00:21:30.624 "r_mbytes_per_sec": 0, 00:21:30.624 "w_mbytes_per_sec": 0 00:21:30.624 }, 00:21:30.624 "claimed": true, 00:21:30.624 "claim_type": "exclusive_write", 00:21:30.624 "zoned": false, 00:21:30.624 "supported_io_types": { 00:21:30.624 "read": true, 00:21:30.624 "write": true, 00:21:30.624 "unmap": true, 00:21:30.624 "flush": true, 00:21:30.625 "reset": true, 00:21:30.625 "nvme_admin": false, 00:21:30.625 "nvme_io": false, 00:21:30.625 "nvme_io_md": false, 00:21:30.625 "write_zeroes": true, 00:21:30.625 "zcopy": true, 00:21:30.625 
"get_zone_info": false, 00:21:30.625 "zone_management": false, 00:21:30.625 "zone_append": false, 00:21:30.625 "compare": false, 00:21:30.625 "compare_and_write": false, 00:21:30.625 "abort": true, 00:21:30.625 "seek_hole": false, 00:21:30.625 "seek_data": false, 00:21:30.625 "copy": true, 00:21:30.625 "nvme_iov_md": false 00:21:30.625 }, 00:21:30.625 "memory_domains": [ 00:21:30.625 { 00:21:30.625 "dma_device_id": "system", 00:21:30.625 "dma_device_type": 1 00:21:30.625 }, 00:21:30.625 { 00:21:30.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.625 "dma_device_type": 2 00:21:30.625 } 00:21:30.625 ], 00:21:30.625 "driver_specific": {} 00:21:30.625 }' 00:21:30.625 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:30.625 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:30.625 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:30.625 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:30.625 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:30.625 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:30.625 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:30.625 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:30.625 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:30.625 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:30.625 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:30.625 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:30.625 15:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:30.883 [2024-07-23 15:15:26.171443] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:30.883 [2024-07-23 15:15:26.171494] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:30.883 [2024-07-23 15:15:26.171563] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:30.883 15:15:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.883 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.141 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:31.141 "name": "Existed_Raid", 00:21:31.141 "uuid": "7a579ad7-6a21-4611-8b84-d9f37eaedd72", 00:21:31.141 "strip_size_kb": 64, 00:21:31.141 "state": "offline", 00:21:31.141 "raid_level": "concat", 00:21:31.141 "superblock": true, 00:21:31.141 "num_base_bdevs": 4, 00:21:31.141 "num_base_bdevs_discovered": 3, 00:21:31.141 "num_base_bdevs_operational": 3, 00:21:31.141 "base_bdevs_list": [ 00:21:31.141 { 00:21:31.141 "name": null, 00:21:31.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.141 "is_configured": false, 00:21:31.141 "data_offset": 2048, 00:21:31.141 "data_size": 63488 00:21:31.141 }, 00:21:31.141 { 00:21:31.141 "name": "BaseBdev2", 00:21:31.141 "uuid": "c5c897f7-34eb-4f99-ac6b-c10517d3500d", 00:21:31.141 "is_configured": true, 00:21:31.141 "data_offset": 2048, 00:21:31.141 "data_size": 63488 00:21:31.141 }, 00:21:31.141 { 00:21:31.141 "name": "BaseBdev3", 00:21:31.141 "uuid": "c0c2cacd-7826-493f-9296-6286f896d3bb", 00:21:31.141 "is_configured": true, 00:21:31.141 "data_offset": 2048, 00:21:31.141 "data_size": 63488 00:21:31.141 }, 00:21:31.141 { 00:21:31.141 "name": "BaseBdev4", 00:21:31.141 "uuid": "ed01413f-eac9-4188-bcb4-9de81ae53559", 00:21:31.141 "is_configured": true, 00:21:31.141 "data_offset": 2048, 00:21:31.141 "data_size": 63488 00:21:31.141 } 00:21:31.141 ] 00:21:31.141 }' 00:21:31.141 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:31.141 15:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.708 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:31.708 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:31.708 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.708 15:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:31.967 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:31.967 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:31.967 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:21:32.225 [2024-07-23 15:15:27.457466] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:32.225 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:32.225 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:32.225 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.225 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:32.484 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:32.484 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:32.484 15:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:32.743 [2024-07-23 15:15:28.034745] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:32.743 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:32.743 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:32.743 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.743 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:33.001 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:33.001 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:33.001 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:33.259 [2024-07-23 15:15:28.536033] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:33.259 [2024-07-23 15:15:28.536114] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:21:33.259 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:33.259 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:33.259 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.259 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:33.517 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:33.517 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:33.517 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:33.517 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:33.517 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:33.517 15:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:33.775 BaseBdev2 00:21:33.775 15:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:33.775 15:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:33.775 15:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:33.775 15:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:33.775 15:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:33.775 15:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:33.775 15:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:34.033 15:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:34.292 [ 00:21:34.292 { 00:21:34.292 "name": "BaseBdev2", 00:21:34.292 "aliases": [ 00:21:34.292 "6e4ac105-fb99-40ee-8da2-de91405278d1" 00:21:34.292 ], 00:21:34.292 "product_name": "Malloc disk", 00:21:34.292 "block_size": 512, 00:21:34.292 "num_blocks": 65536, 00:21:34.292 "uuid": "6e4ac105-fb99-40ee-8da2-de91405278d1", 00:21:34.292 "assigned_rate_limits": { 00:21:34.292 "rw_ios_per_sec": 0, 00:21:34.292 "rw_mbytes_per_sec": 0, 00:21:34.292 "r_mbytes_per_sec": 0, 00:21:34.292 "w_mbytes_per_sec": 0 00:21:34.292 }, 00:21:34.292 "claimed": false, 00:21:34.292 "zoned": false, 00:21:34.292 "supported_io_types": { 00:21:34.292 "read": true, 00:21:34.292 "write": true, 00:21:34.292 "unmap": true, 00:21:34.292 "flush": true, 00:21:34.292 "reset": true, 00:21:34.292 "nvme_admin": false, 00:21:34.292 "nvme_io": false, 00:21:34.292 "nvme_io_md": false, 00:21:34.292 "write_zeroes": true, 00:21:34.292 "zcopy": true, 00:21:34.292 "get_zone_info": false, 00:21:34.292 "zone_management": false, 00:21:34.292 "zone_append": false, 00:21:34.292 "compare": false, 00:21:34.292 "compare_and_write": false, 00:21:34.292 "abort": true, 00:21:34.292 "seek_hole": false, 00:21:34.292 "seek_data": false, 00:21:34.292 "copy": true, 00:21:34.292 "nvme_iov_md": false 00:21:34.292 }, 00:21:34.292 "memory_domains": [ 00:21:34.292 { 00:21:34.292 "dma_device_id": "system", 00:21:34.292 "dma_device_type": 1 00:21:34.292 }, 00:21:34.292 { 00:21:34.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.292 "dma_device_type": 2 00:21:34.292 } 00:21:34.292 ], 00:21:34.292 "driver_specific": {} 00:21:34.292 } 00:21:34.292 ] 00:21:34.292 15:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:34.292 15:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:34.292 15:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:34.292 15:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:34.550 BaseBdev3 00:21:34.550 15:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:34.550 15:15:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:34.550 15:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:34.550 15:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:34.550 15:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:34.550 15:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:34.550 15:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:34.808 15:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:34.808 [ 00:21:34.808 { 00:21:34.808 "name": "BaseBdev3", 00:21:34.808 "aliases": [ 00:21:34.808 "d2bdafae-a9b5-448b-90f7-e486ceb2a557" 00:21:34.808 ], 00:21:34.808 "product_name": "Malloc disk", 00:21:34.808 "block_size": 512, 00:21:34.808 "num_blocks": 65536, 00:21:34.808 "uuid": "d2bdafae-a9b5-448b-90f7-e486ceb2a557", 00:21:34.808 "assigned_rate_limits": { 00:21:34.808 "rw_ios_per_sec": 0, 00:21:34.808 "rw_mbytes_per_sec": 0, 00:21:34.808 "r_mbytes_per_sec": 0, 00:21:34.808 "w_mbytes_per_sec": 0 00:21:34.808 }, 00:21:34.808 "claimed": false, 00:21:34.808 "zoned": false, 00:21:34.808 "supported_io_types": { 00:21:34.808 "read": true, 00:21:34.808 "write": true, 00:21:34.808 "unmap": true, 00:21:34.808 "flush": true, 00:21:34.808 "reset": true, 00:21:34.808 "nvme_admin": false, 00:21:34.808 "nvme_io": false, 00:21:34.808 "nvme_io_md": false, 00:21:34.808 "write_zeroes": true, 00:21:34.808 "zcopy": true, 00:21:34.808 "get_zone_info": false, 00:21:34.808 "zone_management": false, 00:21:34.808 "zone_append": false, 00:21:34.808 "compare": false, 00:21:34.808 "compare_and_write": false, 00:21:34.808 "abort": true, 00:21:34.808 "seek_hole": false, 00:21:34.808 "seek_data": false, 00:21:34.808 "copy": true, 00:21:34.808 "nvme_iov_md": false 00:21:34.808 }, 00:21:34.808 "memory_domains": [ 00:21:34.808 { 00:21:34.808 "dma_device_id": "system", 00:21:34.808 "dma_device_type": 1 00:21:34.808 }, 00:21:34.808 { 00:21:34.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.808 "dma_device_type": 2 00:21:34.808 } 00:21:34.808 ], 00:21:34.808 "driver_specific": {} 00:21:34.808 } 00:21:34.808 ] 00:21:35.066 15:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:35.066 15:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:35.066 15:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:35.066 15:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:35.066 BaseBdev4 00:21:35.066 15:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:35.066 15:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:35.066 15:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:35.066 15:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:35.066 15:15:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:35.066 15:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:35.066 15:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:35.324 15:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:35.583 [ 00:21:35.583 { 00:21:35.583 "name": "BaseBdev4", 00:21:35.583 "aliases": [ 00:21:35.583 "58a66b85-7f6a-4d5f-b2ad-bd6aa5b0635c" 00:21:35.583 ], 00:21:35.583 "product_name": "Malloc disk", 00:21:35.583 "block_size": 512, 00:21:35.583 "num_blocks": 65536, 00:21:35.583 "uuid": "58a66b85-7f6a-4d5f-b2ad-bd6aa5b0635c", 00:21:35.583 "assigned_rate_limits": { 00:21:35.583 "rw_ios_per_sec": 0, 00:21:35.583 "rw_mbytes_per_sec": 0, 00:21:35.583 "r_mbytes_per_sec": 0, 00:21:35.583 "w_mbytes_per_sec": 0 00:21:35.583 }, 00:21:35.583 "claimed": false, 00:21:35.583 "zoned": false, 00:21:35.583 "supported_io_types": { 00:21:35.583 "read": true, 00:21:35.583 "write": true, 00:21:35.583 "unmap": true, 00:21:35.583 "flush": true, 00:21:35.583 "reset": true, 00:21:35.583 "nvme_admin": false, 00:21:35.583 "nvme_io": false, 00:21:35.583 "nvme_io_md": false, 00:21:35.583 "write_zeroes": true, 00:21:35.583 "zcopy": true, 00:21:35.583 "get_zone_info": false, 00:21:35.583 "zone_management": false, 00:21:35.583 "zone_append": false, 00:21:35.583 "compare": false, 00:21:35.583 "compare_and_write": false, 00:21:35.583 "abort": true, 00:21:35.583 "seek_hole": false, 00:21:35.583 "seek_data": false, 00:21:35.583 "copy": true, 00:21:35.583 "nvme_iov_md": false 00:21:35.583 }, 00:21:35.583 "memory_domains": [ 00:21:35.583 { 00:21:35.583 "dma_device_id": "system", 00:21:35.583 "dma_device_type": 1 00:21:35.583 }, 00:21:35.583 { 00:21:35.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.583 "dma_device_type": 2 00:21:35.583 } 00:21:35.583 ], 00:21:35.583 "driver_specific": {} 00:21:35.583 } 00:21:35.583 ] 00:21:35.583 15:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:35.583 15:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:35.583 15:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:35.583 15:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:35.841 [2024-07-23 15:15:31.190083] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:35.841 [2024-07-23 15:15:31.190403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:35.841 [2024-07-23 15:15:31.190458] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:35.841 [2024-07-23 15:15:31.192809] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:35.841 [2024-07-23 15:15:31.192877] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:35.841 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 4 00:21:35.841 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:35.841 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:35.841 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:35.841 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:35.841 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:35.841 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:35.841 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:35.841 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:35.841 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:35.841 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.841 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.099 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:36.099 "name": "Existed_Raid", 00:21:36.099 "uuid": "f91c5fa6-9350-42ab-bec9-5b5ab55b52dc", 00:21:36.099 "strip_size_kb": 64, 00:21:36.099 "state": "configuring", 00:21:36.099 "raid_level": "concat", 00:21:36.099 "superblock": true, 00:21:36.099 "num_base_bdevs": 4, 00:21:36.099 "num_base_bdevs_discovered": 3, 00:21:36.099 "num_base_bdevs_operational": 4, 00:21:36.099 "base_bdevs_list": [ 00:21:36.099 { 00:21:36.099 "name": "BaseBdev1", 00:21:36.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.099 "is_configured": false, 00:21:36.099 "data_offset": 0, 00:21:36.099 "data_size": 0 00:21:36.099 }, 00:21:36.099 { 00:21:36.099 "name": "BaseBdev2", 00:21:36.099 "uuid": "6e4ac105-fb99-40ee-8da2-de91405278d1", 00:21:36.099 "is_configured": true, 00:21:36.099 "data_offset": 2048, 00:21:36.099 "data_size": 63488 00:21:36.099 }, 00:21:36.099 { 00:21:36.099 "name": "BaseBdev3", 00:21:36.099 "uuid": "d2bdafae-a9b5-448b-90f7-e486ceb2a557", 00:21:36.099 "is_configured": true, 00:21:36.099 "data_offset": 2048, 00:21:36.099 "data_size": 63488 00:21:36.099 }, 00:21:36.099 { 00:21:36.099 "name": "BaseBdev4", 00:21:36.099 "uuid": "58a66b85-7f6a-4d5f-b2ad-bd6aa5b0635c", 00:21:36.099 "is_configured": true, 00:21:36.099 "data_offset": 2048, 00:21:36.099 "data_size": 63488 00:21:36.099 } 00:21:36.099 ] 00:21:36.099 }' 00:21:36.099 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:36.099 15:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.665 15:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:36.665 [2024-07-23 15:15:32.070365] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:36.923 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:36.923 15:15:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:36.924 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:36.924 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:36.924 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:36.924 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:36.924 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:36.924 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:36.924 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:36.924 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:36.924 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.924 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.182 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:37.182 "name": "Existed_Raid", 00:21:37.182 "uuid": "f91c5fa6-9350-42ab-bec9-5b5ab55b52dc", 00:21:37.182 "strip_size_kb": 64, 00:21:37.182 "state": "configuring", 00:21:37.182 "raid_level": "concat", 00:21:37.182 "superblock": true, 00:21:37.182 "num_base_bdevs": 4, 00:21:37.182 "num_base_bdevs_discovered": 2, 00:21:37.182 "num_base_bdevs_operational": 4, 00:21:37.182 "base_bdevs_list": [ 00:21:37.182 { 00:21:37.182 "name": "BaseBdev1", 00:21:37.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.182 "is_configured": false, 00:21:37.182 "data_offset": 0, 00:21:37.182 "data_size": 0 00:21:37.182 }, 00:21:37.182 { 00:21:37.182 "name": null, 00:21:37.182 "uuid": "6e4ac105-fb99-40ee-8da2-de91405278d1", 00:21:37.182 "is_configured": false, 00:21:37.182 "data_offset": 2048, 00:21:37.182 "data_size": 63488 00:21:37.182 }, 00:21:37.182 { 00:21:37.182 "name": "BaseBdev3", 00:21:37.182 "uuid": "d2bdafae-a9b5-448b-90f7-e486ceb2a557", 00:21:37.182 "is_configured": true, 00:21:37.182 "data_offset": 2048, 00:21:37.182 "data_size": 63488 00:21:37.182 }, 00:21:37.182 { 00:21:37.182 "name": "BaseBdev4", 00:21:37.182 "uuid": "58a66b85-7f6a-4d5f-b2ad-bd6aa5b0635c", 00:21:37.182 "is_configured": true, 00:21:37.182 "data_offset": 2048, 00:21:37.182 "data_size": 63488 00:21:37.182 } 00:21:37.182 ] 00:21:37.182 }' 00:21:37.182 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:37.182 15:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.441 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:37.441 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.698 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:37.698 15:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:37.955 [2024-07-23 15:15:33.254125] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:37.956 BaseBdev1 00:21:37.956 15:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:37.956 15:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:37.956 15:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:37.956 15:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:37.956 15:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:37.956 15:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:37.956 15:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:38.213 15:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:38.471 [ 00:21:38.471 { 00:21:38.471 "name": "BaseBdev1", 00:21:38.471 "aliases": [ 00:21:38.471 "ba12a648-bf26-41e1-9db9-e0172c4c3d0d" 00:21:38.471 ], 00:21:38.471 "product_name": "Malloc disk", 00:21:38.471 "block_size": 512, 00:21:38.471 "num_blocks": 65536, 00:21:38.471 "uuid": "ba12a648-bf26-41e1-9db9-e0172c4c3d0d", 00:21:38.471 "assigned_rate_limits": { 00:21:38.471 "rw_ios_per_sec": 0, 00:21:38.471 "rw_mbytes_per_sec": 0, 00:21:38.471 "r_mbytes_per_sec": 0, 00:21:38.471 "w_mbytes_per_sec": 0 00:21:38.471 }, 00:21:38.471 "claimed": true, 00:21:38.471 "claim_type": "exclusive_write", 00:21:38.471 "zoned": false, 00:21:38.471 "supported_io_types": { 00:21:38.471 "read": true, 00:21:38.471 "write": true, 00:21:38.471 "unmap": true, 00:21:38.471 "flush": true, 00:21:38.471 "reset": true, 00:21:38.471 "nvme_admin": false, 00:21:38.471 "nvme_io": false, 00:21:38.471 "nvme_io_md": false, 00:21:38.471 "write_zeroes": true, 00:21:38.471 "zcopy": true, 00:21:38.471 "get_zone_info": false, 00:21:38.471 "zone_management": false, 00:21:38.471 "zone_append": false, 00:21:38.471 "compare": false, 00:21:38.471 "compare_and_write": false, 00:21:38.471 "abort": true, 00:21:38.471 "seek_hole": false, 00:21:38.471 "seek_data": false, 00:21:38.471 "copy": true, 00:21:38.471 "nvme_iov_md": false 00:21:38.471 }, 00:21:38.471 "memory_domains": [ 00:21:38.471 { 00:21:38.471 "dma_device_id": "system", 00:21:38.471 "dma_device_type": 1 00:21:38.471 }, 00:21:38.471 { 00:21:38.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.471 "dma_device_type": 2 00:21:38.471 } 00:21:38.471 ], 00:21:38.471 "driver_specific": {} 00:21:38.471 } 00:21:38.471 ] 00:21:38.471 15:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:38.471 15:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:38.471 15:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:38.471 15:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:38.471 15:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:21:38.471 15:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:38.471 15:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:38.471 15:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.471 15:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.471 15:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.471 15:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.471 15:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.471 15:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.729 15:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.729 "name": "Existed_Raid", 00:21:38.729 "uuid": "f91c5fa6-9350-42ab-bec9-5b5ab55b52dc", 00:21:38.729 "strip_size_kb": 64, 00:21:38.729 "state": "configuring", 00:21:38.729 "raid_level": "concat", 00:21:38.729 "superblock": true, 00:21:38.729 "num_base_bdevs": 4, 00:21:38.729 "num_base_bdevs_discovered": 3, 00:21:38.729 "num_base_bdevs_operational": 4, 00:21:38.729 "base_bdevs_list": [ 00:21:38.729 { 00:21:38.729 "name": "BaseBdev1", 00:21:38.729 "uuid": "ba12a648-bf26-41e1-9db9-e0172c4c3d0d", 00:21:38.729 "is_configured": true, 00:21:38.729 "data_offset": 2048, 00:21:38.729 "data_size": 63488 00:21:38.729 }, 00:21:38.729 { 00:21:38.729 "name": null, 00:21:38.729 "uuid": "6e4ac105-fb99-40ee-8da2-de91405278d1", 00:21:38.729 "is_configured": false, 00:21:38.730 "data_offset": 2048, 00:21:38.730 "data_size": 63488 00:21:38.730 }, 00:21:38.730 { 00:21:38.730 "name": "BaseBdev3", 00:21:38.730 "uuid": "d2bdafae-a9b5-448b-90f7-e486ceb2a557", 00:21:38.730 "is_configured": true, 00:21:38.730 "data_offset": 2048, 00:21:38.730 "data_size": 63488 00:21:38.730 }, 00:21:38.730 { 00:21:38.730 "name": "BaseBdev4", 00:21:38.730 "uuid": "58a66b85-7f6a-4d5f-b2ad-bd6aa5b0635c", 00:21:38.730 "is_configured": true, 00:21:38.730 "data_offset": 2048, 00:21:38.730 "data_size": 63488 00:21:38.730 } 00:21:38.730 ] 00:21:38.730 }' 00:21:38.730 15:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.730 15:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.296 15:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.296 15:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:39.554 15:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:39.554 15:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:39.812 [2024-07-23 15:15:35.002726] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:39.812 "name": "Existed_Raid", 00:21:39.812 "uuid": "f91c5fa6-9350-42ab-bec9-5b5ab55b52dc", 00:21:39.812 "strip_size_kb": 64, 00:21:39.812 "state": "configuring", 00:21:39.812 "raid_level": "concat", 00:21:39.812 "superblock": true, 00:21:39.812 "num_base_bdevs": 4, 00:21:39.812 "num_base_bdevs_discovered": 2, 00:21:39.812 "num_base_bdevs_operational": 4, 00:21:39.812 "base_bdevs_list": [ 00:21:39.812 { 00:21:39.812 "name": "BaseBdev1", 00:21:39.812 "uuid": "ba12a648-bf26-41e1-9db9-e0172c4c3d0d", 00:21:39.812 "is_configured": true, 00:21:39.812 "data_offset": 2048, 00:21:39.812 "data_size": 63488 00:21:39.812 }, 00:21:39.812 { 00:21:39.812 "name": null, 00:21:39.812 "uuid": "6e4ac105-fb99-40ee-8da2-de91405278d1", 00:21:39.812 "is_configured": false, 00:21:39.812 "data_offset": 2048, 00:21:39.812 "data_size": 63488 00:21:39.812 }, 00:21:39.812 { 00:21:39.812 "name": null, 00:21:39.812 "uuid": "d2bdafae-a9b5-448b-90f7-e486ceb2a557", 00:21:39.812 "is_configured": false, 00:21:39.812 "data_offset": 2048, 00:21:39.812 "data_size": 63488 00:21:39.812 }, 00:21:39.812 { 00:21:39.812 "name": "BaseBdev4", 00:21:39.812 "uuid": "58a66b85-7f6a-4d5f-b2ad-bd6aa5b0635c", 00:21:39.812 "is_configured": true, 00:21:39.812 "data_offset": 2048, 00:21:39.812 "data_size": 63488 00:21:39.812 } 00:21:39.812 ] 00:21:39.812 }' 00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:39.812 15:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.379 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.379 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:40.379 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:40.379 15:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:40.637 [2024-07-23 15:15:35.994974] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:40.637 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:40.637 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:40.637 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:40.637 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:40.637 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:40.637 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:40.637 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.637 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.637 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.638 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.638 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.638 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.896 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:40.896 "name": "Existed_Raid", 00:21:40.896 "uuid": "f91c5fa6-9350-42ab-bec9-5b5ab55b52dc", 00:21:40.896 "strip_size_kb": 64, 00:21:40.896 "state": "configuring", 00:21:40.896 "raid_level": "concat", 00:21:40.896 "superblock": true, 00:21:40.896 "num_base_bdevs": 4, 00:21:40.896 "num_base_bdevs_discovered": 3, 00:21:40.896 "num_base_bdevs_operational": 4, 00:21:40.896 "base_bdevs_list": [ 00:21:40.896 { 00:21:40.896 "name": "BaseBdev1", 00:21:40.896 "uuid": "ba12a648-bf26-41e1-9db9-e0172c4c3d0d", 00:21:40.896 "is_configured": true, 00:21:40.896 "data_offset": 2048, 00:21:40.896 "data_size": 63488 00:21:40.896 }, 00:21:40.896 { 00:21:40.896 "name": null, 00:21:40.896 "uuid": "6e4ac105-fb99-40ee-8da2-de91405278d1", 00:21:40.896 "is_configured": false, 00:21:40.896 "data_offset": 2048, 00:21:40.896 "data_size": 63488 00:21:40.896 }, 00:21:40.896 { 00:21:40.896 "name": "BaseBdev3", 00:21:40.896 "uuid": "d2bdafae-a9b5-448b-90f7-e486ceb2a557", 00:21:40.896 "is_configured": true, 00:21:40.896 "data_offset": 2048, 00:21:40.896 "data_size": 63488 00:21:40.896 }, 00:21:40.896 { 00:21:40.896 "name": "BaseBdev4", 00:21:40.896 "uuid": "58a66b85-7f6a-4d5f-b2ad-bd6aa5b0635c", 00:21:40.896 "is_configured": true, 00:21:40.896 "data_offset": 2048, 00:21:40.896 "data_size": 63488 00:21:40.896 } 00:21:40.896 ] 00:21:40.896 }' 00:21:40.896 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:40.896 15:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.461 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.461 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:41.719 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:41.719 15:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:42.037 [2024-07-23 15:15:37.195410] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:42.037 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:42.037 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:42.037 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:42.037 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:42.037 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:42.037 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:42.037 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:42.037 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:42.037 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:42.037 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:42.037 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.037 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.295 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:42.295 "name": "Existed_Raid", 00:21:42.295 "uuid": "f91c5fa6-9350-42ab-bec9-5b5ab55b52dc", 00:21:42.295 "strip_size_kb": 64, 00:21:42.295 "state": "configuring", 00:21:42.295 "raid_level": "concat", 00:21:42.295 "superblock": true, 00:21:42.295 "num_base_bdevs": 4, 00:21:42.295 "num_base_bdevs_discovered": 2, 00:21:42.295 "num_base_bdevs_operational": 4, 00:21:42.295 "base_bdevs_list": [ 00:21:42.295 { 00:21:42.295 "name": null, 00:21:42.295 "uuid": "ba12a648-bf26-41e1-9db9-e0172c4c3d0d", 00:21:42.295 "is_configured": false, 00:21:42.295 "data_offset": 2048, 00:21:42.295 "data_size": 63488 00:21:42.295 }, 00:21:42.295 { 00:21:42.295 "name": null, 00:21:42.295 "uuid": "6e4ac105-fb99-40ee-8da2-de91405278d1", 00:21:42.295 "is_configured": false, 00:21:42.295 "data_offset": 2048, 00:21:42.295 "data_size": 63488 00:21:42.295 }, 00:21:42.295 { 00:21:42.295 "name": "BaseBdev3", 00:21:42.295 "uuid": "d2bdafae-a9b5-448b-90f7-e486ceb2a557", 00:21:42.295 "is_configured": true, 00:21:42.295 "data_offset": 2048, 00:21:42.295 "data_size": 63488 00:21:42.295 }, 00:21:42.295 { 00:21:42.295 "name": "BaseBdev4", 00:21:42.295 "uuid": "58a66b85-7f6a-4d5f-b2ad-bd6aa5b0635c", 00:21:42.295 "is_configured": true, 00:21:42.295 "data_offset": 2048, 00:21:42.295 "data_size": 63488 00:21:42.295 } 00:21:42.295 ] 00:21:42.295 }' 00:21:42.295 
15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:42.295 15:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.553 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.553 15:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:42.811 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:42.811 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:43.069 [2024-07-23 15:15:38.430272] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:43.069 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:43.069 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:43.069 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:43.069 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:43.069 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:43.069 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:43.069 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:43.069 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:43.069 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:43.069 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:43.070 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.070 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.328 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:43.328 "name": "Existed_Raid", 00:21:43.328 "uuid": "f91c5fa6-9350-42ab-bec9-5b5ab55b52dc", 00:21:43.328 "strip_size_kb": 64, 00:21:43.328 "state": "configuring", 00:21:43.328 "raid_level": "concat", 00:21:43.328 "superblock": true, 00:21:43.328 "num_base_bdevs": 4, 00:21:43.328 "num_base_bdevs_discovered": 3, 00:21:43.328 "num_base_bdevs_operational": 4, 00:21:43.328 "base_bdevs_list": [ 00:21:43.328 { 00:21:43.328 "name": null, 00:21:43.328 "uuid": "ba12a648-bf26-41e1-9db9-e0172c4c3d0d", 00:21:43.328 "is_configured": false, 00:21:43.328 "data_offset": 2048, 00:21:43.328 "data_size": 63488 00:21:43.328 }, 00:21:43.328 { 00:21:43.328 "name": "BaseBdev2", 00:21:43.328 "uuid": "6e4ac105-fb99-40ee-8da2-de91405278d1", 00:21:43.328 "is_configured": true, 00:21:43.328 "data_offset": 2048, 00:21:43.328 "data_size": 63488 00:21:43.328 }, 00:21:43.328 { 00:21:43.328 "name": "BaseBdev3", 00:21:43.328 "uuid": "d2bdafae-a9b5-448b-90f7-e486ceb2a557", 00:21:43.328 
"is_configured": true, 00:21:43.328 "data_offset": 2048, 00:21:43.328 "data_size": 63488 00:21:43.328 }, 00:21:43.328 { 00:21:43.328 "name": "BaseBdev4", 00:21:43.328 "uuid": "58a66b85-7f6a-4d5f-b2ad-bd6aa5b0635c", 00:21:43.328 "is_configured": true, 00:21:43.328 "data_offset": 2048, 00:21:43.328 "data_size": 63488 00:21:43.328 } 00:21:43.328 ] 00:21:43.328 }' 00:21:43.328 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:43.328 15:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.586 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.586 15:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:43.844 15:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:43.844 15:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.844 15:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:44.409 15:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ba12a648-bf26-41e1-9db9-e0172c4c3d0d 00:21:44.409 [2024-07-23 15:15:39.774219] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:44.409 [2024-07-23 15:15:39.774418] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:21:44.409 [2024-07-23 15:15:39.774434] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:44.409 [2024-07-23 15:15:39.774520] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002600 00:21:44.409 [2024-07-23 15:15:39.774880] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:21:44.409 [2024-07-23 15:15:39.774900] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008180 00:21:44.409 [2024-07-23 15:15:39.775025] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.409 NewBaseBdev 00:21:44.409 15:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:44.409 15:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:21:44.409 15:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:44.409 15:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:44.409 15:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:44.409 15:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:44.409 15:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:44.666 15:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 
-b NewBaseBdev -t 2000 00:21:44.924 [ 00:21:44.924 { 00:21:44.924 "name": "NewBaseBdev", 00:21:44.924 "aliases": [ 00:21:44.924 "ba12a648-bf26-41e1-9db9-e0172c4c3d0d" 00:21:44.924 ], 00:21:44.924 "product_name": "Malloc disk", 00:21:44.924 "block_size": 512, 00:21:44.924 "num_blocks": 65536, 00:21:44.924 "uuid": "ba12a648-bf26-41e1-9db9-e0172c4c3d0d", 00:21:44.924 "assigned_rate_limits": { 00:21:44.924 "rw_ios_per_sec": 0, 00:21:44.924 "rw_mbytes_per_sec": 0, 00:21:44.924 "r_mbytes_per_sec": 0, 00:21:44.924 "w_mbytes_per_sec": 0 00:21:44.924 }, 00:21:44.924 "claimed": true, 00:21:44.924 "claim_type": "exclusive_write", 00:21:44.924 "zoned": false, 00:21:44.924 "supported_io_types": { 00:21:44.924 "read": true, 00:21:44.924 "write": true, 00:21:44.924 "unmap": true, 00:21:44.924 "flush": true, 00:21:44.924 "reset": true, 00:21:44.924 "nvme_admin": false, 00:21:44.924 "nvme_io": false, 00:21:44.924 "nvme_io_md": false, 00:21:44.924 "write_zeroes": true, 00:21:44.924 "zcopy": true, 00:21:44.924 "get_zone_info": false, 00:21:44.924 "zone_management": false, 00:21:44.924 "zone_append": false, 00:21:44.924 "compare": false, 00:21:44.924 "compare_and_write": false, 00:21:44.924 "abort": true, 00:21:44.924 "seek_hole": false, 00:21:44.924 "seek_data": false, 00:21:44.924 "copy": true, 00:21:44.924 "nvme_iov_md": false 00:21:44.924 }, 00:21:44.924 "memory_domains": [ 00:21:44.924 { 00:21:44.924 "dma_device_id": "system", 00:21:44.924 "dma_device_type": 1 00:21:44.924 }, 00:21:44.924 { 00:21:44.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.924 "dma_device_type": 2 00:21:44.924 } 00:21:44.924 ], 00:21:44.924 "driver_specific": {} 00:21:44.924 } 00:21:44.924 ] 00:21:44.924 15:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:44.924 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:44.924 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:44.924 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:44.924 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:44.924 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:44.924 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:44.924 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.924 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.924 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.924 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.924 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.924 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.183 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.183 "name": "Existed_Raid", 00:21:45.183 "uuid": "f91c5fa6-9350-42ab-bec9-5b5ab55b52dc", 00:21:45.183 "strip_size_kb": 64, 00:21:45.183 
"state": "online", 00:21:45.183 "raid_level": "concat", 00:21:45.183 "superblock": true, 00:21:45.183 "num_base_bdevs": 4, 00:21:45.183 "num_base_bdevs_discovered": 4, 00:21:45.183 "num_base_bdevs_operational": 4, 00:21:45.183 "base_bdevs_list": [ 00:21:45.183 { 00:21:45.183 "name": "NewBaseBdev", 00:21:45.183 "uuid": "ba12a648-bf26-41e1-9db9-e0172c4c3d0d", 00:21:45.183 "is_configured": true, 00:21:45.183 "data_offset": 2048, 00:21:45.183 "data_size": 63488 00:21:45.183 }, 00:21:45.183 { 00:21:45.183 "name": "BaseBdev2", 00:21:45.183 "uuid": "6e4ac105-fb99-40ee-8da2-de91405278d1", 00:21:45.183 "is_configured": true, 00:21:45.183 "data_offset": 2048, 00:21:45.183 "data_size": 63488 00:21:45.183 }, 00:21:45.183 { 00:21:45.183 "name": "BaseBdev3", 00:21:45.183 "uuid": "d2bdafae-a9b5-448b-90f7-e486ceb2a557", 00:21:45.183 "is_configured": true, 00:21:45.183 "data_offset": 2048, 00:21:45.183 "data_size": 63488 00:21:45.183 }, 00:21:45.183 { 00:21:45.183 "name": "BaseBdev4", 00:21:45.183 "uuid": "58a66b85-7f6a-4d5f-b2ad-bd6aa5b0635c", 00:21:45.183 "is_configured": true, 00:21:45.183 "data_offset": 2048, 00:21:45.183 "data_size": 63488 00:21:45.183 } 00:21:45.183 ] 00:21:45.183 }' 00:21:45.183 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.183 15:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.442 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:45.442 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:45.442 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:45.442 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:45.442 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:45.442 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:45.442 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:45.442 15:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:45.700 [2024-07-23 15:15:41.087010] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:45.700 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:45.700 "name": "Existed_Raid", 00:21:45.700 "aliases": [ 00:21:45.700 "f91c5fa6-9350-42ab-bec9-5b5ab55b52dc" 00:21:45.700 ], 00:21:45.700 "product_name": "Raid Volume", 00:21:45.700 "block_size": 512, 00:21:45.700 "num_blocks": 253952, 00:21:45.700 "uuid": "f91c5fa6-9350-42ab-bec9-5b5ab55b52dc", 00:21:45.700 "assigned_rate_limits": { 00:21:45.700 "rw_ios_per_sec": 0, 00:21:45.700 "rw_mbytes_per_sec": 0, 00:21:45.700 "r_mbytes_per_sec": 0, 00:21:45.700 "w_mbytes_per_sec": 0 00:21:45.700 }, 00:21:45.700 "claimed": false, 00:21:45.700 "zoned": false, 00:21:45.700 "supported_io_types": { 00:21:45.700 "read": true, 00:21:45.700 "write": true, 00:21:45.700 "unmap": true, 00:21:45.700 "flush": true, 00:21:45.700 "reset": true, 00:21:45.700 "nvme_admin": false, 00:21:45.700 "nvme_io": false, 00:21:45.700 "nvme_io_md": false, 00:21:45.700 "write_zeroes": true, 00:21:45.700 "zcopy": false, 00:21:45.700 "get_zone_info": false, 
00:21:45.701 "zone_management": false, 00:21:45.701 "zone_append": false, 00:21:45.701 "compare": false, 00:21:45.701 "compare_and_write": false, 00:21:45.701 "abort": false, 00:21:45.701 "seek_hole": false, 00:21:45.701 "seek_data": false, 00:21:45.701 "copy": false, 00:21:45.701 "nvme_iov_md": false 00:21:45.701 }, 00:21:45.701 "memory_domains": [ 00:21:45.701 { 00:21:45.701 "dma_device_id": "system", 00:21:45.701 "dma_device_type": 1 00:21:45.701 }, 00:21:45.701 { 00:21:45.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.701 "dma_device_type": 2 00:21:45.701 }, 00:21:45.701 { 00:21:45.701 "dma_device_id": "system", 00:21:45.701 "dma_device_type": 1 00:21:45.701 }, 00:21:45.701 { 00:21:45.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.701 "dma_device_type": 2 00:21:45.701 }, 00:21:45.701 { 00:21:45.701 "dma_device_id": "system", 00:21:45.701 "dma_device_type": 1 00:21:45.701 }, 00:21:45.701 { 00:21:45.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.701 "dma_device_type": 2 00:21:45.701 }, 00:21:45.701 { 00:21:45.701 "dma_device_id": "system", 00:21:45.701 "dma_device_type": 1 00:21:45.701 }, 00:21:45.701 { 00:21:45.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.701 "dma_device_type": 2 00:21:45.701 } 00:21:45.701 ], 00:21:45.701 "driver_specific": { 00:21:45.701 "raid": { 00:21:45.701 "uuid": "f91c5fa6-9350-42ab-bec9-5b5ab55b52dc", 00:21:45.701 "strip_size_kb": 64, 00:21:45.701 "state": "online", 00:21:45.701 "raid_level": "concat", 00:21:45.701 "superblock": true, 00:21:45.701 "num_base_bdevs": 4, 00:21:45.701 "num_base_bdevs_discovered": 4, 00:21:45.701 "num_base_bdevs_operational": 4, 00:21:45.701 "base_bdevs_list": [ 00:21:45.701 { 00:21:45.701 "name": "NewBaseBdev", 00:21:45.701 "uuid": "ba12a648-bf26-41e1-9db9-e0172c4c3d0d", 00:21:45.701 "is_configured": true, 00:21:45.701 "data_offset": 2048, 00:21:45.701 "data_size": 63488 00:21:45.701 }, 00:21:45.701 { 00:21:45.701 "name": "BaseBdev2", 00:21:45.701 "uuid": "6e4ac105-fb99-40ee-8da2-de91405278d1", 00:21:45.701 "is_configured": true, 00:21:45.701 "data_offset": 2048, 00:21:45.701 "data_size": 63488 00:21:45.701 }, 00:21:45.701 { 00:21:45.701 "name": "BaseBdev3", 00:21:45.701 "uuid": "d2bdafae-a9b5-448b-90f7-e486ceb2a557", 00:21:45.701 "is_configured": true, 00:21:45.701 "data_offset": 2048, 00:21:45.701 "data_size": 63488 00:21:45.701 }, 00:21:45.701 { 00:21:45.701 "name": "BaseBdev4", 00:21:45.701 "uuid": "58a66b85-7f6a-4d5f-b2ad-bd6aa5b0635c", 00:21:45.701 "is_configured": true, 00:21:45.701 "data_offset": 2048, 00:21:45.701 "data_size": 63488 00:21:45.701 } 00:21:45.701 ] 00:21:45.701 } 00:21:45.701 } 00:21:45.701 }' 00:21:45.701 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:45.701 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:45.701 BaseBdev2 00:21:45.701 BaseBdev3 00:21:45.701 BaseBdev4' 00:21:45.701 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:45.959 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:45.959 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:46.217 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:46.217 
"name": "NewBaseBdev", 00:21:46.217 "aliases": [ 00:21:46.217 "ba12a648-bf26-41e1-9db9-e0172c4c3d0d" 00:21:46.217 ], 00:21:46.217 "product_name": "Malloc disk", 00:21:46.217 "block_size": 512, 00:21:46.217 "num_blocks": 65536, 00:21:46.217 "uuid": "ba12a648-bf26-41e1-9db9-e0172c4c3d0d", 00:21:46.217 "assigned_rate_limits": { 00:21:46.217 "rw_ios_per_sec": 0, 00:21:46.217 "rw_mbytes_per_sec": 0, 00:21:46.217 "r_mbytes_per_sec": 0, 00:21:46.217 "w_mbytes_per_sec": 0 00:21:46.217 }, 00:21:46.217 "claimed": true, 00:21:46.217 "claim_type": "exclusive_write", 00:21:46.217 "zoned": false, 00:21:46.217 "supported_io_types": { 00:21:46.217 "read": true, 00:21:46.217 "write": true, 00:21:46.217 "unmap": true, 00:21:46.217 "flush": true, 00:21:46.217 "reset": true, 00:21:46.217 "nvme_admin": false, 00:21:46.217 "nvme_io": false, 00:21:46.217 "nvme_io_md": false, 00:21:46.217 "write_zeroes": true, 00:21:46.217 "zcopy": true, 00:21:46.217 "get_zone_info": false, 00:21:46.217 "zone_management": false, 00:21:46.217 "zone_append": false, 00:21:46.217 "compare": false, 00:21:46.217 "compare_and_write": false, 00:21:46.217 "abort": true, 00:21:46.217 "seek_hole": false, 00:21:46.217 "seek_data": false, 00:21:46.217 "copy": true, 00:21:46.217 "nvme_iov_md": false 00:21:46.217 }, 00:21:46.217 "memory_domains": [ 00:21:46.217 { 00:21:46.217 "dma_device_id": "system", 00:21:46.217 "dma_device_type": 1 00:21:46.217 }, 00:21:46.217 { 00:21:46.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.217 "dma_device_type": 2 00:21:46.217 } 00:21:46.217 ], 00:21:46.217 "driver_specific": {} 00:21:46.217 }' 00:21:46.217 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:46.218 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:46.218 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:46.218 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:46.218 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:46.218 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:46.218 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:46.218 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:46.218 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:46.218 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:46.218 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:46.218 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:46.218 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:46.218 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:46.218 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:46.476 "name": "BaseBdev2", 00:21:46.476 "aliases": [ 00:21:46.476 "6e4ac105-fb99-40ee-8da2-de91405278d1" 00:21:46.476 ], 00:21:46.476 "product_name": 
"Malloc disk", 00:21:46.476 "block_size": 512, 00:21:46.476 "num_blocks": 65536, 00:21:46.476 "uuid": "6e4ac105-fb99-40ee-8da2-de91405278d1", 00:21:46.476 "assigned_rate_limits": { 00:21:46.476 "rw_ios_per_sec": 0, 00:21:46.476 "rw_mbytes_per_sec": 0, 00:21:46.476 "r_mbytes_per_sec": 0, 00:21:46.476 "w_mbytes_per_sec": 0 00:21:46.476 }, 00:21:46.476 "claimed": true, 00:21:46.476 "claim_type": "exclusive_write", 00:21:46.476 "zoned": false, 00:21:46.476 "supported_io_types": { 00:21:46.476 "read": true, 00:21:46.476 "write": true, 00:21:46.476 "unmap": true, 00:21:46.476 "flush": true, 00:21:46.476 "reset": true, 00:21:46.476 "nvme_admin": false, 00:21:46.476 "nvme_io": false, 00:21:46.476 "nvme_io_md": false, 00:21:46.476 "write_zeroes": true, 00:21:46.476 "zcopy": true, 00:21:46.476 "get_zone_info": false, 00:21:46.476 "zone_management": false, 00:21:46.476 "zone_append": false, 00:21:46.476 "compare": false, 00:21:46.476 "compare_and_write": false, 00:21:46.476 "abort": true, 00:21:46.476 "seek_hole": false, 00:21:46.476 "seek_data": false, 00:21:46.476 "copy": true, 00:21:46.476 "nvme_iov_md": false 00:21:46.476 }, 00:21:46.476 "memory_domains": [ 00:21:46.476 { 00:21:46.476 "dma_device_id": "system", 00:21:46.476 "dma_device_type": 1 00:21:46.476 }, 00:21:46.476 { 00:21:46.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.476 "dma_device_type": 2 00:21:46.476 } 00:21:46.476 ], 00:21:46.476 "driver_specific": {} 00:21:46.476 }' 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:46.476 15:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:46.734 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:46.734 "name": "BaseBdev3", 00:21:46.734 "aliases": [ 00:21:46.734 "d2bdafae-a9b5-448b-90f7-e486ceb2a557" 00:21:46.734 ], 00:21:46.734 "product_name": "Malloc disk", 00:21:46.734 "block_size": 512, 00:21:46.734 "num_blocks": 65536, 00:21:46.734 "uuid": "d2bdafae-a9b5-448b-90f7-e486ceb2a557", 
00:21:46.734 "assigned_rate_limits": { 00:21:46.734 "rw_ios_per_sec": 0, 00:21:46.734 "rw_mbytes_per_sec": 0, 00:21:46.734 "r_mbytes_per_sec": 0, 00:21:46.734 "w_mbytes_per_sec": 0 00:21:46.734 }, 00:21:46.734 "claimed": true, 00:21:46.734 "claim_type": "exclusive_write", 00:21:46.734 "zoned": false, 00:21:46.734 "supported_io_types": { 00:21:46.734 "read": true, 00:21:46.734 "write": true, 00:21:46.734 "unmap": true, 00:21:46.734 "flush": true, 00:21:46.734 "reset": true, 00:21:46.734 "nvme_admin": false, 00:21:46.734 "nvme_io": false, 00:21:46.734 "nvme_io_md": false, 00:21:46.734 "write_zeroes": true, 00:21:46.734 "zcopy": true, 00:21:46.734 "get_zone_info": false, 00:21:46.734 "zone_management": false, 00:21:46.734 "zone_append": false, 00:21:46.734 "compare": false, 00:21:46.734 "compare_and_write": false, 00:21:46.734 "abort": true, 00:21:46.734 "seek_hole": false, 00:21:46.734 "seek_data": false, 00:21:46.734 "copy": true, 00:21:46.734 "nvme_iov_md": false 00:21:46.734 }, 00:21:46.734 "memory_domains": [ 00:21:46.734 { 00:21:46.734 "dma_device_id": "system", 00:21:46.734 "dma_device_type": 1 00:21:46.734 }, 00:21:46.734 { 00:21:46.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.734 "dma_device_type": 2 00:21:46.734 } 00:21:46.734 ], 00:21:46.734 "driver_specific": {} 00:21:46.734 }' 00:21:46.734 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:46.734 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:46.734 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:46.734 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:46.734 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:46.991 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:46.991 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:46.991 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:46.991 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:46.991 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:46.991 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:46.992 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:46.992 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:46.992 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:46.992 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:47.249 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:47.249 "name": "BaseBdev4", 00:21:47.249 "aliases": [ 00:21:47.249 "58a66b85-7f6a-4d5f-b2ad-bd6aa5b0635c" 00:21:47.249 ], 00:21:47.249 "product_name": "Malloc disk", 00:21:47.249 "block_size": 512, 00:21:47.249 "num_blocks": 65536, 00:21:47.249 "uuid": "58a66b85-7f6a-4d5f-b2ad-bd6aa5b0635c", 00:21:47.249 "assigned_rate_limits": { 00:21:47.249 "rw_ios_per_sec": 0, 00:21:47.249 "rw_mbytes_per_sec": 0, 00:21:47.249 "r_mbytes_per_sec": 0, 
00:21:47.249 "w_mbytes_per_sec": 0 00:21:47.249 }, 00:21:47.249 "claimed": true, 00:21:47.249 "claim_type": "exclusive_write", 00:21:47.249 "zoned": false, 00:21:47.249 "supported_io_types": { 00:21:47.249 "read": true, 00:21:47.249 "write": true, 00:21:47.249 "unmap": true, 00:21:47.249 "flush": true, 00:21:47.249 "reset": true, 00:21:47.249 "nvme_admin": false, 00:21:47.249 "nvme_io": false, 00:21:47.249 "nvme_io_md": false, 00:21:47.249 "write_zeroes": true, 00:21:47.249 "zcopy": true, 00:21:47.249 "get_zone_info": false, 00:21:47.249 "zone_management": false, 00:21:47.249 "zone_append": false, 00:21:47.249 "compare": false, 00:21:47.249 "compare_and_write": false, 00:21:47.249 "abort": true, 00:21:47.249 "seek_hole": false, 00:21:47.249 "seek_data": false, 00:21:47.249 "copy": true, 00:21:47.249 "nvme_iov_md": false 00:21:47.249 }, 00:21:47.249 "memory_domains": [ 00:21:47.249 { 00:21:47.249 "dma_device_id": "system", 00:21:47.249 "dma_device_type": 1 00:21:47.249 }, 00:21:47.249 { 00:21:47.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.249 "dma_device_type": 2 00:21:47.249 } 00:21:47.249 ], 00:21:47.250 "driver_specific": {} 00:21:47.250 }' 00:21:47.250 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.250 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.250 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:47.250 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:47.250 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:47.250 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:47.250 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:47.250 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:47.250 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:47.250 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.250 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.250 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:47.250 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:47.507 [2024-07-23 15:15:42.831078] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:47.507 [2024-07-23 15:15:42.831121] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:47.507 [2024-07-23 15:15:42.831194] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:47.507 [2024-07-23 15:15:42.831266] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:47.507 [2024-07-23 15:15:42.831279] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name Existed_Raid, state offline 00:21:47.507 15:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 102554 00:21:47.507 15:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 102554 ']' 00:21:47.507 
15:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 102554 00:21:47.507 15:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:21:47.507 15:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:47.507 15:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 102554 00:21:47.507 killing process with pid 102554 00:21:47.507 15:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:47.507 15:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:47.507 15:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 102554' 00:21:47.507 15:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 102554 00:21:47.507 [2024-07-23 15:15:42.894087] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:47.507 15:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 102554 00:21:47.764 [2024-07-23 15:15:42.943457] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:47.764 15:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:47.764 ************************************ 00:21:47.764 END TEST raid_state_function_test_sb 00:21:47.764 ************************************ 00:21:47.764 00:21:47.764 real 0m26.565s 00:21:47.764 user 0m46.720s 00:21:47.764 sys 0m5.576s 00:21:47.764 15:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:47.764 15:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.022 15:15:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:48.022 15:15:43 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:21:48.022 15:15:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:48.022 15:15:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.022 15:15:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:48.022 ************************************ 00:21:48.022 START TEST raid_superblock_test 00:21:48.022 ************************************ 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 
00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=103537 00:21:48.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 103537 /var/tmp/spdk-raid.sock 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 103537 ']' 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.022 15:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.022 [2024-07-23 15:15:43.331047] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:21:48.022 [2024-07-23 15:15:43.331259] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103537 ] 00:21:48.280 [2024-07-23 15:15:43.484743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.280 [2024-07-23 15:15:43.540094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.280 [2024-07-23 15:15:43.592826] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.213 15:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:49.213 15:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:21:49.213 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:49.213 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:49.213 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:49.213 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:49.213 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:49.213 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:49.213 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:49.213 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:49.213 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:49.213 malloc1 00:21:49.213 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:49.472 [2024-07-23 15:15:44.723033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:49.472 [2024-07-23 15:15:44.723291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.472 [2024-07-23 15:15:44.723466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:21:49.472 [2024-07-23 15:15:44.723588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.472 [2024-07-23 15:15:44.726153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.472 [2024-07-23 15:15:44.726299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:49.472 pt1 00:21:49.472 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:49.472 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:49.472 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:49.472 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:49.472 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:49.472 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:21:49.472 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:49.472 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:49.472 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:49.731 malloc2 00:21:49.731 15:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:49.731 [2024-07-23 15:15:45.156844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:49.731 [2024-07-23 15:15:45.156928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.731 [2024-07-23 15:15:45.156951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:21:49.731 [2024-07-23 15:15:45.156968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.731 [2024-07-23 15:15:45.159539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.731 [2024-07-23 15:15:45.159587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:49.989 pt2 00:21:49.989 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:49.989 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:49.989 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:21:49.989 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:21:49.989 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:49.989 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:49.989 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:49.989 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:49.989 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:49.989 malloc3 00:21:49.989 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:50.248 [2024-07-23 15:15:45.542448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:50.248 [2024-07-23 15:15:45.542733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.248 [2024-07-23 15:15:45.542808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:21:50.248 [2024-07-23 15:15:45.542903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.248 [2024-07-23 15:15:45.545525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.248 [2024-07-23 15:15:45.545682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:50.248 pt3 00:21:50.248 
15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:50.248 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:50.248 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:21:50.248 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:21:50.248 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:50.248 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:50.248 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:50.248 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:50.248 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:50.507 malloc4 00:21:50.507 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:50.507 [2024-07-23 15:15:45.896081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:50.507 [2024-07-23 15:15:45.896165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.507 [2024-07-23 15:15:45.896190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:21:50.507 [2024-07-23 15:15:45.896205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.507 [2024-07-23 15:15:45.898736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.507 [2024-07-23 15:15:45.898785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:50.507 pt4 00:21:50.507 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:50.507 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:50.507 15:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:50.765 [2024-07-23 15:15:46.080201] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:50.765 [2024-07-23 15:15:46.082556] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:50.765 [2024-07-23 15:15:46.082730] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:50.765 [2024-07-23 15:15:46.082835] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:50.765 [2024-07-23 15:15:46.083104] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008480 00:21:50.765 [2024-07-23 15:15:46.083220] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:50.765 [2024-07-23 15:15:46.083386] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:21:50.765 [2024-07-23 15:15:46.083774] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008480 00:21:50.765 [2024-07-23 15:15:46.083929] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008480 00:21:50.765 [2024-07-23 15:15:46.084182] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.765 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:50.765 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:50.765 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:50.766 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:50.766 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:50.766 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:50.766 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:50.766 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:50.766 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:50.766 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:50.766 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.766 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.024 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:51.024 "name": "raid_bdev1", 00:21:51.024 "uuid": "c0915ed3-636d-4607-b2f9-5ebf4ef4697d", 00:21:51.024 "strip_size_kb": 64, 00:21:51.024 "state": "online", 00:21:51.024 "raid_level": "concat", 00:21:51.024 "superblock": true, 00:21:51.024 "num_base_bdevs": 4, 00:21:51.024 "num_base_bdevs_discovered": 4, 00:21:51.024 "num_base_bdevs_operational": 4, 00:21:51.024 "base_bdevs_list": [ 00:21:51.024 { 00:21:51.024 "name": "pt1", 00:21:51.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:51.024 "is_configured": true, 00:21:51.024 "data_offset": 2048, 00:21:51.024 "data_size": 63488 00:21:51.024 }, 00:21:51.024 { 00:21:51.024 "name": "pt2", 00:21:51.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:51.024 "is_configured": true, 00:21:51.024 "data_offset": 2048, 00:21:51.024 "data_size": 63488 00:21:51.024 }, 00:21:51.024 { 00:21:51.024 "name": "pt3", 00:21:51.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:51.024 "is_configured": true, 00:21:51.024 "data_offset": 2048, 00:21:51.024 "data_size": 63488 00:21:51.024 }, 00:21:51.024 { 00:21:51.024 "name": "pt4", 00:21:51.024 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:51.024 "is_configured": true, 00:21:51.024 "data_offset": 2048, 00:21:51.024 "data_size": 63488 00:21:51.024 } 00:21:51.024 ] 00:21:51.024 }' 00:21:51.024 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:51.024 15:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.282 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:51.282 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:51.282 15:15:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:51.282 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:51.282 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:51.282 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:51.282 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:51.283 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:51.541 [2024-07-23 15:15:46.800662] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:51.541 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:51.541 "name": "raid_bdev1", 00:21:51.541 "aliases": [ 00:21:51.541 "c0915ed3-636d-4607-b2f9-5ebf4ef4697d" 00:21:51.541 ], 00:21:51.541 "product_name": "Raid Volume", 00:21:51.541 "block_size": 512, 00:21:51.541 "num_blocks": 253952, 00:21:51.541 "uuid": "c0915ed3-636d-4607-b2f9-5ebf4ef4697d", 00:21:51.541 "assigned_rate_limits": { 00:21:51.541 "rw_ios_per_sec": 0, 00:21:51.541 "rw_mbytes_per_sec": 0, 00:21:51.541 "r_mbytes_per_sec": 0, 00:21:51.541 "w_mbytes_per_sec": 0 00:21:51.541 }, 00:21:51.541 "claimed": false, 00:21:51.541 "zoned": false, 00:21:51.541 "supported_io_types": { 00:21:51.541 "read": true, 00:21:51.541 "write": true, 00:21:51.541 "unmap": true, 00:21:51.541 "flush": true, 00:21:51.541 "reset": true, 00:21:51.541 "nvme_admin": false, 00:21:51.541 "nvme_io": false, 00:21:51.541 "nvme_io_md": false, 00:21:51.541 "write_zeroes": true, 00:21:51.541 "zcopy": false, 00:21:51.541 "get_zone_info": false, 00:21:51.541 "zone_management": false, 00:21:51.541 "zone_append": false, 00:21:51.541 "compare": false, 00:21:51.541 "compare_and_write": false, 00:21:51.541 "abort": false, 00:21:51.541 "seek_hole": false, 00:21:51.541 "seek_data": false, 00:21:51.541 "copy": false, 00:21:51.541 "nvme_iov_md": false 00:21:51.541 }, 00:21:51.541 "memory_domains": [ 00:21:51.541 { 00:21:51.541 "dma_device_id": "system", 00:21:51.541 "dma_device_type": 1 00:21:51.541 }, 00:21:51.541 { 00:21:51.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.541 "dma_device_type": 2 00:21:51.541 }, 00:21:51.541 { 00:21:51.541 "dma_device_id": "system", 00:21:51.541 "dma_device_type": 1 00:21:51.541 }, 00:21:51.541 { 00:21:51.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.541 "dma_device_type": 2 00:21:51.541 }, 00:21:51.541 { 00:21:51.541 "dma_device_id": "system", 00:21:51.541 "dma_device_type": 1 00:21:51.541 }, 00:21:51.541 { 00:21:51.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.541 "dma_device_type": 2 00:21:51.541 }, 00:21:51.541 { 00:21:51.541 "dma_device_id": "system", 00:21:51.541 "dma_device_type": 1 00:21:51.541 }, 00:21:51.541 { 00:21:51.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.541 "dma_device_type": 2 00:21:51.541 } 00:21:51.541 ], 00:21:51.541 "driver_specific": { 00:21:51.541 "raid": { 00:21:51.541 "uuid": "c0915ed3-636d-4607-b2f9-5ebf4ef4697d", 00:21:51.541 "strip_size_kb": 64, 00:21:51.541 "state": "online", 00:21:51.541 "raid_level": "concat", 00:21:51.541 "superblock": true, 00:21:51.541 "num_base_bdevs": 4, 00:21:51.541 "num_base_bdevs_discovered": 4, 00:21:51.541 "num_base_bdevs_operational": 4, 00:21:51.541 "base_bdevs_list": [ 00:21:51.541 { 00:21:51.541 "name": "pt1", 00:21:51.541 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:51.541 "is_configured": true, 00:21:51.541 "data_offset": 2048, 00:21:51.541 "data_size": 63488 00:21:51.541 }, 00:21:51.541 { 00:21:51.541 "name": "pt2", 00:21:51.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:51.541 "is_configured": true, 00:21:51.541 "data_offset": 2048, 00:21:51.541 "data_size": 63488 00:21:51.542 }, 00:21:51.542 { 00:21:51.542 "name": "pt3", 00:21:51.542 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:51.542 "is_configured": true, 00:21:51.542 "data_offset": 2048, 00:21:51.542 "data_size": 63488 00:21:51.542 }, 00:21:51.542 { 00:21:51.542 "name": "pt4", 00:21:51.542 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:51.542 "is_configured": true, 00:21:51.542 "data_offset": 2048, 00:21:51.542 "data_size": 63488 00:21:51.542 } 00:21:51.542 ] 00:21:51.542 } 00:21:51.542 } 00:21:51.542 }' 00:21:51.542 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:51.542 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:51.542 pt2 00:21:51.542 pt3 00:21:51.542 pt4' 00:21:51.542 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:51.542 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:51.542 15:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:51.801 "name": "pt1", 00:21:51.801 "aliases": [ 00:21:51.801 "00000000-0000-0000-0000-000000000001" 00:21:51.801 ], 00:21:51.801 "product_name": "passthru", 00:21:51.801 "block_size": 512, 00:21:51.801 "num_blocks": 65536, 00:21:51.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:51.801 "assigned_rate_limits": { 00:21:51.801 "rw_ios_per_sec": 0, 00:21:51.801 "rw_mbytes_per_sec": 0, 00:21:51.801 "r_mbytes_per_sec": 0, 00:21:51.801 "w_mbytes_per_sec": 0 00:21:51.801 }, 00:21:51.801 "claimed": true, 00:21:51.801 "claim_type": "exclusive_write", 00:21:51.801 "zoned": false, 00:21:51.801 "supported_io_types": { 00:21:51.801 "read": true, 00:21:51.801 "write": true, 00:21:51.801 "unmap": true, 00:21:51.801 "flush": true, 00:21:51.801 "reset": true, 00:21:51.801 "nvme_admin": false, 00:21:51.801 "nvme_io": false, 00:21:51.801 "nvme_io_md": false, 00:21:51.801 "write_zeroes": true, 00:21:51.801 "zcopy": true, 00:21:51.801 "get_zone_info": false, 00:21:51.801 "zone_management": false, 00:21:51.801 "zone_append": false, 00:21:51.801 "compare": false, 00:21:51.801 "compare_and_write": false, 00:21:51.801 "abort": true, 00:21:51.801 "seek_hole": false, 00:21:51.801 "seek_data": false, 00:21:51.801 "copy": true, 00:21:51.801 "nvme_iov_md": false 00:21:51.801 }, 00:21:51.801 "memory_domains": [ 00:21:51.801 { 00:21:51.801 "dma_device_id": "system", 00:21:51.801 "dma_device_type": 1 00:21:51.801 }, 00:21:51.801 { 00:21:51.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.801 "dma_device_type": 2 00:21:51.801 } 00:21:51.801 ], 00:21:51.801 "driver_specific": { 00:21:51.801 "passthru": { 00:21:51.801 "name": "pt1", 00:21:51.801 "base_bdev_name": "malloc1" 00:21:51.801 } 00:21:51.801 } 00:21:51.801 }' 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.801 15:15:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:51.801 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:52.060 "name": "pt2", 00:21:52.060 "aliases": [ 00:21:52.060 "00000000-0000-0000-0000-000000000002" 00:21:52.060 ], 00:21:52.060 "product_name": "passthru", 00:21:52.060 "block_size": 512, 00:21:52.060 "num_blocks": 65536, 00:21:52.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:52.060 "assigned_rate_limits": { 00:21:52.060 "rw_ios_per_sec": 0, 00:21:52.060 "rw_mbytes_per_sec": 0, 00:21:52.060 "r_mbytes_per_sec": 0, 00:21:52.060 "w_mbytes_per_sec": 0 00:21:52.060 }, 00:21:52.060 "claimed": true, 00:21:52.060 "claim_type": "exclusive_write", 00:21:52.060 "zoned": false, 00:21:52.060 "supported_io_types": { 00:21:52.060 "read": true, 00:21:52.060 "write": true, 00:21:52.060 "unmap": true, 00:21:52.060 "flush": true, 00:21:52.060 "reset": true, 00:21:52.060 "nvme_admin": false, 00:21:52.060 "nvme_io": false, 00:21:52.060 "nvme_io_md": false, 00:21:52.060 "write_zeroes": true, 00:21:52.060 "zcopy": true, 00:21:52.060 "get_zone_info": false, 00:21:52.060 "zone_management": false, 00:21:52.060 "zone_append": false, 00:21:52.060 "compare": false, 00:21:52.060 "compare_and_write": false, 00:21:52.060 "abort": true, 00:21:52.060 "seek_hole": false, 00:21:52.060 "seek_data": false, 00:21:52.060 "copy": true, 00:21:52.060 "nvme_iov_md": false 00:21:52.060 }, 00:21:52.060 "memory_domains": [ 00:21:52.060 { 00:21:52.060 "dma_device_id": "system", 00:21:52.060 "dma_device_type": 1 00:21:52.060 }, 00:21:52.060 { 00:21:52.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.060 "dma_device_type": 2 00:21:52.060 } 00:21:52.060 ], 00:21:52.060 "driver_specific": { 00:21:52.060 "passthru": { 00:21:52.060 "name": "pt2", 00:21:52.060 "base_bdev_name": "malloc2" 00:21:52.060 } 00:21:52.060 } 00:21:52.060 }' 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:52.060 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:52.319 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:52.319 "name": "pt3", 00:21:52.319 "aliases": [ 00:21:52.319 "00000000-0000-0000-0000-000000000003" 00:21:52.319 ], 00:21:52.319 "product_name": "passthru", 00:21:52.319 "block_size": 512, 00:21:52.319 "num_blocks": 65536, 00:21:52.319 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:52.319 "assigned_rate_limits": { 00:21:52.319 "rw_ios_per_sec": 0, 00:21:52.319 "rw_mbytes_per_sec": 0, 00:21:52.319 "r_mbytes_per_sec": 0, 00:21:52.319 "w_mbytes_per_sec": 0 00:21:52.319 }, 00:21:52.319 "claimed": true, 00:21:52.319 "claim_type": "exclusive_write", 00:21:52.319 "zoned": false, 00:21:52.319 "supported_io_types": { 00:21:52.319 "read": true, 00:21:52.319 "write": true, 00:21:52.319 "unmap": true, 00:21:52.319 "flush": true, 00:21:52.319 "reset": true, 00:21:52.319 "nvme_admin": false, 00:21:52.319 "nvme_io": false, 00:21:52.319 "nvme_io_md": false, 00:21:52.319 "write_zeroes": true, 00:21:52.319 "zcopy": true, 00:21:52.319 "get_zone_info": false, 00:21:52.319 "zone_management": false, 00:21:52.319 "zone_append": false, 00:21:52.319 "compare": false, 00:21:52.319 "compare_and_write": false, 00:21:52.319 "abort": true, 00:21:52.319 "seek_hole": false, 00:21:52.319 "seek_data": false, 00:21:52.319 "copy": true, 00:21:52.319 "nvme_iov_md": false 00:21:52.319 }, 00:21:52.319 "memory_domains": [ 00:21:52.319 { 00:21:52.319 "dma_device_id": "system", 00:21:52.319 "dma_device_type": 1 00:21:52.319 }, 00:21:52.319 { 00:21:52.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.319 "dma_device_type": 2 00:21:52.319 } 00:21:52.319 ], 00:21:52.319 "driver_specific": { 00:21:52.319 "passthru": { 00:21:52.319 "name": "pt3", 00:21:52.319 "base_bdev_name": "malloc3" 00:21:52.319 } 00:21:52.319 } 00:21:52.319 }' 00:21:52.319 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.319 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.319 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:52.319 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.319 15:15:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.319 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:52.319 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.319 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.319 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:52.319 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.319 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.579 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:52.579 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:52.579 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:21:52.579 15:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:52.838 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:52.838 "name": "pt4", 00:21:52.838 "aliases": [ 00:21:52.838 "00000000-0000-0000-0000-000000000004" 00:21:52.838 ], 00:21:52.838 "product_name": "passthru", 00:21:52.838 "block_size": 512, 00:21:52.838 "num_blocks": 65536, 00:21:52.838 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:52.838 "assigned_rate_limits": { 00:21:52.838 "rw_ios_per_sec": 0, 00:21:52.838 "rw_mbytes_per_sec": 0, 00:21:52.838 "r_mbytes_per_sec": 0, 00:21:52.838 "w_mbytes_per_sec": 0 00:21:52.838 }, 00:21:52.838 "claimed": true, 00:21:52.838 "claim_type": "exclusive_write", 00:21:52.838 "zoned": false, 00:21:52.838 "supported_io_types": { 00:21:52.838 "read": true, 00:21:52.838 "write": true, 00:21:52.838 "unmap": true, 00:21:52.838 "flush": true, 00:21:52.838 "reset": true, 00:21:52.838 "nvme_admin": false, 00:21:52.838 "nvme_io": false, 00:21:52.838 "nvme_io_md": false, 00:21:52.839 "write_zeroes": true, 00:21:52.839 "zcopy": true, 00:21:52.839 "get_zone_info": false, 00:21:52.839 "zone_management": false, 00:21:52.839 "zone_append": false, 00:21:52.839 "compare": false, 00:21:52.839 "compare_and_write": false, 00:21:52.839 "abort": true, 00:21:52.839 "seek_hole": false, 00:21:52.839 "seek_data": false, 00:21:52.839 "copy": true, 00:21:52.839 "nvme_iov_md": false 00:21:52.839 }, 00:21:52.839 "memory_domains": [ 00:21:52.839 { 00:21:52.839 "dma_device_id": "system", 00:21:52.839 "dma_device_type": 1 00:21:52.839 }, 00:21:52.839 { 00:21:52.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.839 "dma_device_type": 2 00:21:52.839 } 00:21:52.839 ], 00:21:52.839 "driver_specific": { 00:21:52.839 "passthru": { 00:21:52.839 "name": "pt4", 00:21:52.839 "base_bdev_name": "malloc4" 00:21:52.839 } 00:21:52.839 } 00:21:52.839 }' 00:21:52.839 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.839 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.839 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:52.839 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.839 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.839 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:21:52.839 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.839 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.839 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:52.839 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.839 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.839 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:52.839 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:52.839 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:53.096 [2024-07-23 15:15:48.348993] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:53.096 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=c0915ed3-636d-4607-b2f9-5ebf4ef4697d 00:21:53.096 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z c0915ed3-636d-4607-b2f9-5ebf4ef4697d ']' 00:21:53.096 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:53.354 [2024-07-23 15:15:48.612707] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:53.355 [2024-07-23 15:15:48.612754] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:53.355 [2024-07-23 15:15:48.613097] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:53.355 [2024-07-23 15:15:48.613191] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:53.355 [2024-07-23 15:15:48.613212] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state offline 00:21:53.355 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:53.355 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.613 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:53.613 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:53.613 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:53.613 15:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:53.871 15:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:53.871 15:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:54.129 15:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:54.129 15:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:54.390 15:15:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:54.391 15:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:54.391 15:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:54.391 15:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:54.648 15:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:54.648 15:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:54.648 15:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:21:54.648 15:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:54.649 15:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:54.649 15:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.649 15:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:54.649 15:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.649 15:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:54.649 15:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.649 15:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:54.649 15:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:54.649 15:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:54.906 [2024-07-23 15:15:50.169029] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:54.906 [2024-07-23 15:15:50.171358] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:54.906 [2024-07-23 15:15:50.171410] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:54.906 [2024-07-23 15:15:50.171441] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:54.906 [2024-07-23 15:15:50.171491] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:54.906 [2024-07-23 15:15:50.171544] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:54.906 [2024-07-23 15:15:50.171568] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev 
found on bdev malloc3 00:21:54.906 [2024-07-23 15:15:50.171588] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:54.906 [2024-07-23 15:15:50.171607] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:54.906 [2024-07-23 15:15:50.171618] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state configuring 00:21:54.906 request: 00:21:54.906 { 00:21:54.906 "name": "raid_bdev1", 00:21:54.906 "raid_level": "concat", 00:21:54.906 "base_bdevs": [ 00:21:54.906 "malloc1", 00:21:54.906 "malloc2", 00:21:54.906 "malloc3", 00:21:54.906 "malloc4" 00:21:54.906 ], 00:21:54.906 "strip_size_kb": 64, 00:21:54.907 "superblock": false, 00:21:54.907 "method": "bdev_raid_create", 00:21:54.907 "req_id": 1 00:21:54.907 } 00:21:54.907 Got JSON-RPC error response 00:21:54.907 response: 00:21:54.907 { 00:21:54.907 "code": -17, 00:21:54.907 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:54.907 } 00:21:54.907 15:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:21:54.907 15:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:54.907 15:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:54.907 15:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:54.907 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.907 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:55.164 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:55.164 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:55.164 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:55.164 [2024-07-23 15:15:50.589083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:55.164 [2024-07-23 15:15:50.589346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.164 [2024-07-23 15:15:50.589380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:21:55.164 [2024-07-23 15:15:50.589392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.164 [2024-07-23 15:15:50.591871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.164 [2024-07-23 15:15:50.591909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:55.164 [2024-07-23 15:15:50.591988] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:55.164 [2024-07-23 15:15:50.592044] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:55.422 pt1 00:21:55.422 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:21:55.422 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:55.422 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:55.422 15:15:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:55.422 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:55.422 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:55.422 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:55.422 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:55.422 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:55.422 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:55.422 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.422 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.680 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:55.680 "name": "raid_bdev1", 00:21:55.680 "uuid": "c0915ed3-636d-4607-b2f9-5ebf4ef4697d", 00:21:55.680 "strip_size_kb": 64, 00:21:55.680 "state": "configuring", 00:21:55.680 "raid_level": "concat", 00:21:55.680 "superblock": true, 00:21:55.680 "num_base_bdevs": 4, 00:21:55.680 "num_base_bdevs_discovered": 1, 00:21:55.680 "num_base_bdevs_operational": 4, 00:21:55.680 "base_bdevs_list": [ 00:21:55.680 { 00:21:55.680 "name": "pt1", 00:21:55.680 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:55.680 "is_configured": true, 00:21:55.680 "data_offset": 2048, 00:21:55.680 "data_size": 63488 00:21:55.680 }, 00:21:55.680 { 00:21:55.680 "name": null, 00:21:55.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:55.680 "is_configured": false, 00:21:55.680 "data_offset": 2048, 00:21:55.680 "data_size": 63488 00:21:55.680 }, 00:21:55.680 { 00:21:55.680 "name": null, 00:21:55.680 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:55.680 "is_configured": false, 00:21:55.680 "data_offset": 2048, 00:21:55.680 "data_size": 63488 00:21:55.680 }, 00:21:55.680 { 00:21:55.680 "name": null, 00:21:55.680 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:55.680 "is_configured": false, 00:21:55.680 "data_offset": 2048, 00:21:55.680 "data_size": 63488 00:21:55.680 } 00:21:55.680 ] 00:21:55.680 }' 00:21:55.680 15:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:55.680 15:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.938 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:21:55.938 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:56.196 [2024-07-23 15:15:51.369229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:56.196 [2024-07-23 15:15:51.369317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.196 [2024-07-23 15:15:51.369347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:21:56.196 [2024-07-23 15:15:51.369361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.196 [2024-07-23 15:15:51.369837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:21:56.196 [2024-07-23 15:15:51.369859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:56.196 [2024-07-23 15:15:51.369940] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:56.196 [2024-07-23 15:15:51.369966] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:56.196 pt2 00:21:56.196 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:56.196 [2024-07-23 15:15:51.549279] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:56.196 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:21:56.196 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:56.196 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:56.196 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:56.196 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:56.196 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:56.196 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.196 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.196 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.196 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.197 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.197 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.455 15:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.455 "name": "raid_bdev1", 00:21:56.455 "uuid": "c0915ed3-636d-4607-b2f9-5ebf4ef4697d", 00:21:56.455 "strip_size_kb": 64, 00:21:56.455 "state": "configuring", 00:21:56.455 "raid_level": "concat", 00:21:56.455 "superblock": true, 00:21:56.455 "num_base_bdevs": 4, 00:21:56.455 "num_base_bdevs_discovered": 1, 00:21:56.455 "num_base_bdevs_operational": 4, 00:21:56.455 "base_bdevs_list": [ 00:21:56.455 { 00:21:56.455 "name": "pt1", 00:21:56.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:56.455 "is_configured": true, 00:21:56.455 "data_offset": 2048, 00:21:56.455 "data_size": 63488 00:21:56.455 }, 00:21:56.455 { 00:21:56.455 "name": null, 00:21:56.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.455 "is_configured": false, 00:21:56.455 "data_offset": 2048, 00:21:56.455 "data_size": 63488 00:21:56.455 }, 00:21:56.455 { 00:21:56.455 "name": null, 00:21:56.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:56.455 "is_configured": false, 00:21:56.455 "data_offset": 2048, 00:21:56.455 "data_size": 63488 00:21:56.455 }, 00:21:56.455 { 00:21:56.455 "name": null, 00:21:56.455 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:56.455 "is_configured": false, 00:21:56.455 "data_offset": 2048, 00:21:56.455 "data_size": 63488 00:21:56.455 } 00:21:56.455 ] 00:21:56.455 }' 00:21:56.455 15:15:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.455 15:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.713 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:56.713 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:56.713 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:56.971 [2024-07-23 15:15:52.309412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:56.971 [2024-07-23 15:15:52.309687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.971 [2024-07-23 15:15:52.309719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:21:56.971 [2024-07-23 15:15:52.309755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.971 [2024-07-23 15:15:52.310230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.971 [2024-07-23 15:15:52.310254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:56.971 [2024-07-23 15:15:52.310323] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:56.971 [2024-07-23 15:15:52.310348] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:56.971 pt2 00:21:56.971 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:56.971 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:56.971 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:57.229 [2024-07-23 15:15:52.493460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:57.229 [2024-07-23 15:15:52.493542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.229 [2024-07-23 15:15:52.493565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:21:57.229 [2024-07-23 15:15:52.493579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.229 [2024-07-23 15:15:52.494031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.229 [2024-07-23 15:15:52.494056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:57.229 [2024-07-23 15:15:52.494227] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:57.230 [2024-07-23 15:15:52.494256] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:57.230 pt3 00:21:57.230 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:57.230 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:57.230 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:57.488 [2024-07-23 15:15:52.741495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:21:57.488 [2024-07-23 15:15:52.741565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.488 [2024-07-23 15:15:52.741590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:21:57.488 [2024-07-23 15:15:52.741609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.488 [2024-07-23 15:15:52.742031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.488 [2024-07-23 15:15:52.742056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:57.488 [2024-07-23 15:15:52.742126] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:57.488 [2024-07-23 15:15:52.742151] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:57.488 [2024-07-23 15:15:52.742269] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:21:57.488 [2024-07-23 15:15:52.742282] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:57.488 [2024-07-23 15:15:52.742345] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:21:57.488 [2024-07-23 15:15:52.742641] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:21:57.488 [2024-07-23 15:15:52.742652] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:21:57.488 [2024-07-23 15:15:52.742747] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.488 pt4 00:21:57.488 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:57.488 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:57.488 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:57.488 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:57.488 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:57.488 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:57.488 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:57.488 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:57.488 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:57.488 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:57.488 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:57.488 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:57.488 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.488 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.746 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.746 "name": "raid_bdev1", 00:21:57.746 "uuid": "c0915ed3-636d-4607-b2f9-5ebf4ef4697d", 00:21:57.746 "strip_size_kb": 64, 00:21:57.746 "state": "online", 00:21:57.746 
"raid_level": "concat", 00:21:57.746 "superblock": true, 00:21:57.746 "num_base_bdevs": 4, 00:21:57.746 "num_base_bdevs_discovered": 4, 00:21:57.746 "num_base_bdevs_operational": 4, 00:21:57.746 "base_bdevs_list": [ 00:21:57.746 { 00:21:57.746 "name": "pt1", 00:21:57.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:57.746 "is_configured": true, 00:21:57.746 "data_offset": 2048, 00:21:57.746 "data_size": 63488 00:21:57.746 }, 00:21:57.746 { 00:21:57.746 "name": "pt2", 00:21:57.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:57.746 "is_configured": true, 00:21:57.746 "data_offset": 2048, 00:21:57.746 "data_size": 63488 00:21:57.746 }, 00:21:57.746 { 00:21:57.746 "name": "pt3", 00:21:57.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:57.746 "is_configured": true, 00:21:57.746 "data_offset": 2048, 00:21:57.746 "data_size": 63488 00:21:57.746 }, 00:21:57.746 { 00:21:57.746 "name": "pt4", 00:21:57.746 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:57.746 "is_configured": true, 00:21:57.746 "data_offset": 2048, 00:21:57.746 "data_size": 63488 00:21:57.746 } 00:21:57.746 ] 00:21:57.746 }' 00:21:57.746 15:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.747 15:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.005 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:58.005 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:58.005 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:58.005 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:58.005 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:58.005 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:58.005 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:58.005 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:58.262 [2024-07-23 15:15:53.517985] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:58.262 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:58.262 "name": "raid_bdev1", 00:21:58.262 "aliases": [ 00:21:58.262 "c0915ed3-636d-4607-b2f9-5ebf4ef4697d" 00:21:58.262 ], 00:21:58.262 "product_name": "Raid Volume", 00:21:58.262 "block_size": 512, 00:21:58.262 "num_blocks": 253952, 00:21:58.262 "uuid": "c0915ed3-636d-4607-b2f9-5ebf4ef4697d", 00:21:58.262 "assigned_rate_limits": { 00:21:58.262 "rw_ios_per_sec": 0, 00:21:58.262 "rw_mbytes_per_sec": 0, 00:21:58.262 "r_mbytes_per_sec": 0, 00:21:58.262 "w_mbytes_per_sec": 0 00:21:58.262 }, 00:21:58.262 "claimed": false, 00:21:58.262 "zoned": false, 00:21:58.262 "supported_io_types": { 00:21:58.262 "read": true, 00:21:58.262 "write": true, 00:21:58.262 "unmap": true, 00:21:58.262 "flush": true, 00:21:58.262 "reset": true, 00:21:58.262 "nvme_admin": false, 00:21:58.262 "nvme_io": false, 00:21:58.262 "nvme_io_md": false, 00:21:58.262 "write_zeroes": true, 00:21:58.262 "zcopy": false, 00:21:58.262 "get_zone_info": false, 00:21:58.262 "zone_management": false, 00:21:58.262 "zone_append": false, 00:21:58.262 "compare": false, 00:21:58.262 "compare_and_write": false, 
00:21:58.262 "abort": false, 00:21:58.262 "seek_hole": false, 00:21:58.262 "seek_data": false, 00:21:58.262 "copy": false, 00:21:58.262 "nvme_iov_md": false 00:21:58.262 }, 00:21:58.262 "memory_domains": [ 00:21:58.262 { 00:21:58.262 "dma_device_id": "system", 00:21:58.262 "dma_device_type": 1 00:21:58.262 }, 00:21:58.263 { 00:21:58.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.263 "dma_device_type": 2 00:21:58.263 }, 00:21:58.263 { 00:21:58.263 "dma_device_id": "system", 00:21:58.263 "dma_device_type": 1 00:21:58.263 }, 00:21:58.263 { 00:21:58.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.263 "dma_device_type": 2 00:21:58.263 }, 00:21:58.263 { 00:21:58.263 "dma_device_id": "system", 00:21:58.263 "dma_device_type": 1 00:21:58.263 }, 00:21:58.263 { 00:21:58.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.263 "dma_device_type": 2 00:21:58.263 }, 00:21:58.263 { 00:21:58.263 "dma_device_id": "system", 00:21:58.263 "dma_device_type": 1 00:21:58.263 }, 00:21:58.263 { 00:21:58.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.263 "dma_device_type": 2 00:21:58.263 } 00:21:58.263 ], 00:21:58.263 "driver_specific": { 00:21:58.263 "raid": { 00:21:58.263 "uuid": "c0915ed3-636d-4607-b2f9-5ebf4ef4697d", 00:21:58.263 "strip_size_kb": 64, 00:21:58.263 "state": "online", 00:21:58.263 "raid_level": "concat", 00:21:58.263 "superblock": true, 00:21:58.263 "num_base_bdevs": 4, 00:21:58.263 "num_base_bdevs_discovered": 4, 00:21:58.263 "num_base_bdevs_operational": 4, 00:21:58.263 "base_bdevs_list": [ 00:21:58.263 { 00:21:58.263 "name": "pt1", 00:21:58.263 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:58.263 "is_configured": true, 00:21:58.263 "data_offset": 2048, 00:21:58.263 "data_size": 63488 00:21:58.263 }, 00:21:58.263 { 00:21:58.263 "name": "pt2", 00:21:58.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:58.263 "is_configured": true, 00:21:58.263 "data_offset": 2048, 00:21:58.263 "data_size": 63488 00:21:58.263 }, 00:21:58.263 { 00:21:58.263 "name": "pt3", 00:21:58.263 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:58.263 "is_configured": true, 00:21:58.263 "data_offset": 2048, 00:21:58.263 "data_size": 63488 00:21:58.263 }, 00:21:58.263 { 00:21:58.263 "name": "pt4", 00:21:58.263 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:58.263 "is_configured": true, 00:21:58.263 "data_offset": 2048, 00:21:58.263 "data_size": 63488 00:21:58.263 } 00:21:58.263 ] 00:21:58.263 } 00:21:58.263 } 00:21:58.263 }' 00:21:58.263 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:58.263 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:58.263 pt2 00:21:58.263 pt3 00:21:58.263 pt4' 00:21:58.263 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:58.263 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:58.263 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:58.521 "name": "pt1", 00:21:58.521 "aliases": [ 00:21:58.521 "00000000-0000-0000-0000-000000000001" 00:21:58.521 ], 00:21:58.521 "product_name": "passthru", 00:21:58.521 "block_size": 512, 00:21:58.521 "num_blocks": 65536, 00:21:58.521 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:58.521 "assigned_rate_limits": { 00:21:58.521 "rw_ios_per_sec": 0, 00:21:58.521 "rw_mbytes_per_sec": 0, 00:21:58.521 "r_mbytes_per_sec": 0, 00:21:58.521 "w_mbytes_per_sec": 0 00:21:58.521 }, 00:21:58.521 "claimed": true, 00:21:58.521 "claim_type": "exclusive_write", 00:21:58.521 "zoned": false, 00:21:58.521 "supported_io_types": { 00:21:58.521 "read": true, 00:21:58.521 "write": true, 00:21:58.521 "unmap": true, 00:21:58.521 "flush": true, 00:21:58.521 "reset": true, 00:21:58.521 "nvme_admin": false, 00:21:58.521 "nvme_io": false, 00:21:58.521 "nvme_io_md": false, 00:21:58.521 "write_zeroes": true, 00:21:58.521 "zcopy": true, 00:21:58.521 "get_zone_info": false, 00:21:58.521 "zone_management": false, 00:21:58.521 "zone_append": false, 00:21:58.521 "compare": false, 00:21:58.521 "compare_and_write": false, 00:21:58.521 "abort": true, 00:21:58.521 "seek_hole": false, 00:21:58.521 "seek_data": false, 00:21:58.521 "copy": true, 00:21:58.521 "nvme_iov_md": false 00:21:58.521 }, 00:21:58.521 "memory_domains": [ 00:21:58.521 { 00:21:58.521 "dma_device_id": "system", 00:21:58.521 "dma_device_type": 1 00:21:58.521 }, 00:21:58.521 { 00:21:58.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.521 "dma_device_type": 2 00:21:58.521 } 00:21:58.521 ], 00:21:58.521 "driver_specific": { 00:21:58.521 "passthru": { 00:21:58.521 "name": "pt1", 00:21:58.521 "base_bdev_name": "malloc1" 00:21:58.521 } 00:21:58.521 } 00:21:58.521 }' 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:58.521 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:58.779 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:58.779 "name": "pt2", 00:21:58.779 "aliases": [ 00:21:58.779 "00000000-0000-0000-0000-000000000002" 00:21:58.779 ], 00:21:58.779 "product_name": "passthru", 00:21:58.779 "block_size": 512, 00:21:58.779 "num_blocks": 65536, 00:21:58.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:58.779 "assigned_rate_limits": { 00:21:58.779 "rw_ios_per_sec": 0, 00:21:58.779 "rw_mbytes_per_sec": 0, 
00:21:58.779 "r_mbytes_per_sec": 0, 00:21:58.779 "w_mbytes_per_sec": 0 00:21:58.779 }, 00:21:58.779 "claimed": true, 00:21:58.779 "claim_type": "exclusive_write", 00:21:58.779 "zoned": false, 00:21:58.779 "supported_io_types": { 00:21:58.779 "read": true, 00:21:58.779 "write": true, 00:21:58.779 "unmap": true, 00:21:58.779 "flush": true, 00:21:58.779 "reset": true, 00:21:58.779 "nvme_admin": false, 00:21:58.779 "nvme_io": false, 00:21:58.779 "nvme_io_md": false, 00:21:58.779 "write_zeroes": true, 00:21:58.779 "zcopy": true, 00:21:58.779 "get_zone_info": false, 00:21:58.779 "zone_management": false, 00:21:58.779 "zone_append": false, 00:21:58.779 "compare": false, 00:21:58.779 "compare_and_write": false, 00:21:58.779 "abort": true, 00:21:58.779 "seek_hole": false, 00:21:58.779 "seek_data": false, 00:21:58.779 "copy": true, 00:21:58.779 "nvme_iov_md": false 00:21:58.779 }, 00:21:58.780 "memory_domains": [ 00:21:58.780 { 00:21:58.780 "dma_device_id": "system", 00:21:58.780 "dma_device_type": 1 00:21:58.780 }, 00:21:58.780 { 00:21:58.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.780 "dma_device_type": 2 00:21:58.780 } 00:21:58.780 ], 00:21:58.780 "driver_specific": { 00:21:58.780 "passthru": { 00:21:58.780 "name": "pt2", 00:21:58.780 "base_bdev_name": "malloc2" 00:21:58.780 } 00:21:58.780 } 00:21:58.780 }' 00:21:58.780 15:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.780 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.780 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:58.780 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:58.780 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:58.780 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:58.780 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:58.780 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:58.780 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:58.780 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:58.780 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:58.780 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:58.780 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:58.780 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:58.780 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:59.038 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:59.038 "name": "pt3", 00:21:59.038 "aliases": [ 00:21:59.038 "00000000-0000-0000-0000-000000000003" 00:21:59.038 ], 00:21:59.038 "product_name": "passthru", 00:21:59.038 "block_size": 512, 00:21:59.038 "num_blocks": 65536, 00:21:59.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:59.038 "assigned_rate_limits": { 00:21:59.038 "rw_ios_per_sec": 0, 00:21:59.038 "rw_mbytes_per_sec": 0, 00:21:59.038 "r_mbytes_per_sec": 0, 00:21:59.038 "w_mbytes_per_sec": 0 00:21:59.038 }, 00:21:59.038 "claimed": true, 00:21:59.038 "claim_type": 
"exclusive_write", 00:21:59.038 "zoned": false, 00:21:59.038 "supported_io_types": { 00:21:59.038 "read": true, 00:21:59.038 "write": true, 00:21:59.038 "unmap": true, 00:21:59.038 "flush": true, 00:21:59.038 "reset": true, 00:21:59.038 "nvme_admin": false, 00:21:59.038 "nvme_io": false, 00:21:59.038 "nvme_io_md": false, 00:21:59.038 "write_zeroes": true, 00:21:59.038 "zcopy": true, 00:21:59.038 "get_zone_info": false, 00:21:59.038 "zone_management": false, 00:21:59.038 "zone_append": false, 00:21:59.038 "compare": false, 00:21:59.038 "compare_and_write": false, 00:21:59.038 "abort": true, 00:21:59.038 "seek_hole": false, 00:21:59.038 "seek_data": false, 00:21:59.038 "copy": true, 00:21:59.038 "nvme_iov_md": false 00:21:59.038 }, 00:21:59.039 "memory_domains": [ 00:21:59.039 { 00:21:59.039 "dma_device_id": "system", 00:21:59.039 "dma_device_type": 1 00:21:59.039 }, 00:21:59.039 { 00:21:59.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.039 "dma_device_type": 2 00:21:59.039 } 00:21:59.039 ], 00:21:59.039 "driver_specific": { 00:21:59.039 "passthru": { 00:21:59.039 "name": "pt3", 00:21:59.039 "base_bdev_name": "malloc3" 00:21:59.039 } 00:21:59.039 } 00:21:59.039 }' 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:59.039 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:21:59.297 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:59.297 "name": "pt4", 00:21:59.297 "aliases": [ 00:21:59.297 "00000000-0000-0000-0000-000000000004" 00:21:59.297 ], 00:21:59.297 "product_name": "passthru", 00:21:59.297 "block_size": 512, 00:21:59.297 "num_blocks": 65536, 00:21:59.297 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:59.297 "assigned_rate_limits": { 00:21:59.297 "rw_ios_per_sec": 0, 00:21:59.297 "rw_mbytes_per_sec": 0, 00:21:59.297 "r_mbytes_per_sec": 0, 00:21:59.297 "w_mbytes_per_sec": 0 00:21:59.297 }, 00:21:59.297 "claimed": true, 00:21:59.297 "claim_type": "exclusive_write", 00:21:59.297 "zoned": false, 00:21:59.297 "supported_io_types": { 00:21:59.297 "read": true, 00:21:59.297 "write": true, 00:21:59.297 
"unmap": true, 00:21:59.297 "flush": true, 00:21:59.297 "reset": true, 00:21:59.297 "nvme_admin": false, 00:21:59.297 "nvme_io": false, 00:21:59.297 "nvme_io_md": false, 00:21:59.297 "write_zeroes": true, 00:21:59.297 "zcopy": true, 00:21:59.297 "get_zone_info": false, 00:21:59.297 "zone_management": false, 00:21:59.297 "zone_append": false, 00:21:59.297 "compare": false, 00:21:59.297 "compare_and_write": false, 00:21:59.297 "abort": true, 00:21:59.297 "seek_hole": false, 00:21:59.297 "seek_data": false, 00:21:59.297 "copy": true, 00:21:59.297 "nvme_iov_md": false 00:21:59.297 }, 00:21:59.297 "memory_domains": [ 00:21:59.297 { 00:21:59.297 "dma_device_id": "system", 00:21:59.297 "dma_device_type": 1 00:21:59.297 }, 00:21:59.297 { 00:21:59.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.297 "dma_device_type": 2 00:21:59.297 } 00:21:59.297 ], 00:21:59.297 "driver_specific": { 00:21:59.297 "passthru": { 00:21:59.297 "name": "pt4", 00:21:59.297 "base_bdev_name": "malloc4" 00:21:59.297 } 00:21:59.297 } 00:21:59.297 }' 00:21:59.297 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.297 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.297 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:59.297 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.297 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.297 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:59.297 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.297 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.297 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:59.297 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.297 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.555 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:59.556 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:59.556 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:59.556 [2024-07-23 15:15:54.970291] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:59.814 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' c0915ed3-636d-4607-b2f9-5ebf4ef4697d '!=' c0915ed3-636d-4607-b2f9-5ebf4ef4697d ']' 00:21:59.814 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:21:59.814 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:59.814 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:59.814 15:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 103537 00:21:59.815 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 103537 ']' 00:21:59.815 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 103537 00:21:59.815 15:15:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:21:59.815 15:15:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:59.815 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 103537 00:21:59.815 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:59.815 killing process with pid 103537 00:21:59.815 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:59.815 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 103537' 00:21:59.815 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 103537 00:21:59.815 [2024-07-23 15:15:55.028834] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:59.815 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 103537 00:21:59.815 [2024-07-23 15:15:55.028968] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.815 [2024-07-23 15:15:55.029061] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:59.815 [2024-07-23 15:15:55.029079] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:21:59.815 [2024-07-23 15:15:55.076183] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:00.073 15:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:22:00.073 00:22:00.073 real 0m12.063s 00:22:00.073 user 0m20.541s 00:22:00.073 sys 0m2.669s 00:22:00.073 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:00.073 15:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.073 ************************************ 00:22:00.073 END TEST raid_superblock_test 00:22:00.073 ************************************ 00:22:00.073 15:15:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:00.073 15:15:55 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:22:00.073 15:15:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:00.073 15:15:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.073 15:15:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:00.073 ************************************ 00:22:00.073 START TEST raid_read_error_test 00:22:00.073 ************************************ 00:22:00.073 15:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:00.074 
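The raid_read_error_test that starts here builds the same four-way concat volume as the superblock test above, but each base bdev is wrapped in an error-injection bdev so that I/O failures can be forced later. A condensed sketch of the RPC sequence, reconstructed from the commands logged below (socket path, bdev names, and the 32 MiB / 512 B malloc geometry are taken from the log; the loop is a readability shorthand, not the literal script):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
    # 32 MiB malloc bdev with 512-byte blocks
    $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    # error bdev wrapping the malloc bdev; it is exposed as EE_BaseBdev${i}_malloc
    $RPC bdev_error_create "BaseBdev${i}_malloc"
    # passthru bdev on top of the error bdev; this is what the raid consumes
    $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
# concat raid with a 64 KiB strip size and an on-disk superblock (-s)
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s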
15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.s1TyT8tX8u 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=104004 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 104004 /var/tmp/spdk-raid.sock 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 104004 ']' 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:00.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.074 15:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.074 [2024-07-23 15:15:55.458545] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:22:00.074 [2024-07-23 15:15:55.458696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104004 ] 00:22:00.332 [2024-07-23 15:15:55.598329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.332 [2024-07-23 15:15:55.644817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.332 [2024-07-23 15:15:55.689146] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:00.332 15:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.332 15:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:22:00.332 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:00.332 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:00.590 BaseBdev1_malloc 00:22:00.591 15:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:00.849 true 00:22:00.849 15:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:01.120 [2024-07-23 15:15:56.408055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:01.120 [2024-07-23 15:15:56.408137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.120 [2024-07-23 15:15:56.408178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:22:01.120 [2024-07-23 15:15:56.408190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.120 [2024-07-23 15:15:56.410743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.120 [2024-07-23 15:15:56.410802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:01.120 BaseBdev1 00:22:01.120 15:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:01.120 15:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:01.384 BaseBdev2_malloc 00:22:01.384 15:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:01.384 true 00:22:01.384 15:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:01.642 [2024-07-23 15:15:56.945707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:22:01.642 [2024-07-23 15:15:56.945816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.642 [2024-07-23 15:15:56.945849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:22:01.642 [2024-07-23 15:15:56.945863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.642 [2024-07-23 15:15:56.948419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.642 [2024-07-23 15:15:56.948461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:01.642 BaseBdev2 00:22:01.642 15:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:01.642 15:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:01.900 BaseBdev3_malloc 00:22:01.900 15:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:02.159 true 00:22:02.159 15:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:02.417 [2024-07-23 15:15:57.606073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:02.417 [2024-07-23 15:15:57.606144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.417 [2024-07-23 15:15:57.606174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:22:02.417 [2024-07-23 15:15:57.606186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.417 [2024-07-23 15:15:57.608628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.417 [2024-07-23 15:15:57.608669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:02.417 BaseBdev3 00:22:02.417 15:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:02.417 15:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:02.417 BaseBdev4_malloc 00:22:02.417 15:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:02.675 true 00:22:02.675 15:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:02.933 [2024-07-23 15:15:58.123697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:02.933 [2024-07-23 15:15:58.123769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.933 [2024-07-23 15:15:58.123814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:22:02.933 [2024-07-23 15:15:58.123827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.933 [2024-07-23 15:15:58.126255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:22:02.933 [2024-07-23 15:15:58.126298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:02.933 BaseBdev4 00:22:02.933 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:02.933 [2024-07-23 15:15:58.291836] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:02.933 [2024-07-23 15:15:58.294158] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.933 [2024-07-23 15:15:58.294273] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:02.933 [2024-07-23 15:15:58.294335] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:02.933 [2024-07-23 15:15:58.294589] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:22:02.933 [2024-07-23 15:15:58.294603] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:02.933 [2024-07-23 15:15:58.294759] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:22:02.933 [2024-07-23 15:15:58.295135] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:22:02.933 [2024-07-23 15:15:58.295153] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:22:02.933 [2024-07-23 15:15:58.295288] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.933 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:02.933 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:02.933 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:02.933 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:02.933 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:02.933 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:02.933 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.933 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.933 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.933 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.933 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.934 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.192 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.192 "name": "raid_bdev1", 00:22:03.192 "uuid": "825d9eb0-3dd9-4571-be16-fa1b56d78723", 00:22:03.192 "strip_size_kb": 64, 00:22:03.192 "state": "online", 00:22:03.192 "raid_level": "concat", 00:22:03.192 "superblock": true, 00:22:03.192 "num_base_bdevs": 4, 00:22:03.192 "num_base_bdevs_discovered": 4, 00:22:03.192 
"num_base_bdevs_operational": 4, 00:22:03.192 "base_bdevs_list": [ 00:22:03.192 { 00:22:03.192 "name": "BaseBdev1", 00:22:03.192 "uuid": "0dced9e8-3d85-50a2-a342-a23e1a5b02ab", 00:22:03.192 "is_configured": true, 00:22:03.192 "data_offset": 2048, 00:22:03.192 "data_size": 63488 00:22:03.192 }, 00:22:03.192 { 00:22:03.192 "name": "BaseBdev2", 00:22:03.192 "uuid": "aa63d9b4-1a08-5ba1-8933-434513766fb9", 00:22:03.192 "is_configured": true, 00:22:03.192 "data_offset": 2048, 00:22:03.192 "data_size": 63488 00:22:03.192 }, 00:22:03.192 { 00:22:03.192 "name": "BaseBdev3", 00:22:03.192 "uuid": "f978d4fd-0834-5889-b558-1c2dbe886d27", 00:22:03.192 "is_configured": true, 00:22:03.192 "data_offset": 2048, 00:22:03.192 "data_size": 63488 00:22:03.192 }, 00:22:03.192 { 00:22:03.192 "name": "BaseBdev4", 00:22:03.192 "uuid": "522423b9-3aa7-5c75-ad7a-caacea083ab7", 00:22:03.192 "is_configured": true, 00:22:03.192 "data_offset": 2048, 00:22:03.192 "data_size": 63488 00:22:03.192 } 00:22:03.192 ] 00:22:03.192 }' 00:22:03.192 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.192 15:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.451 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:03.451 15:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:03.451 [2024-07-23 15:15:58.860290] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:22:04.386 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.644 15:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:22:04.902 15:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.902 "name": "raid_bdev1", 00:22:04.902 "uuid": "825d9eb0-3dd9-4571-be16-fa1b56d78723", 00:22:04.902 "strip_size_kb": 64, 00:22:04.902 "state": "online", 00:22:04.902 "raid_level": "concat", 00:22:04.902 "superblock": true, 00:22:04.902 "num_base_bdevs": 4, 00:22:04.902 "num_base_bdevs_discovered": 4, 00:22:04.902 "num_base_bdevs_operational": 4, 00:22:04.902 "base_bdevs_list": [ 00:22:04.902 { 00:22:04.902 "name": "BaseBdev1", 00:22:04.902 "uuid": "0dced9e8-3d85-50a2-a342-a23e1a5b02ab", 00:22:04.902 "is_configured": true, 00:22:04.902 "data_offset": 2048, 00:22:04.902 "data_size": 63488 00:22:04.903 }, 00:22:04.903 { 00:22:04.903 "name": "BaseBdev2", 00:22:04.903 "uuid": "aa63d9b4-1a08-5ba1-8933-434513766fb9", 00:22:04.903 "is_configured": true, 00:22:04.903 "data_offset": 2048, 00:22:04.903 "data_size": 63488 00:22:04.903 }, 00:22:04.903 { 00:22:04.903 "name": "BaseBdev3", 00:22:04.903 "uuid": "f978d4fd-0834-5889-b558-1c2dbe886d27", 00:22:04.903 "is_configured": true, 00:22:04.903 "data_offset": 2048, 00:22:04.903 "data_size": 63488 00:22:04.903 }, 00:22:04.903 { 00:22:04.903 "name": "BaseBdev4", 00:22:04.903 "uuid": "522423b9-3aa7-5c75-ad7a-caacea083ab7", 00:22:04.903 "is_configured": true, 00:22:04.903 "data_offset": 2048, 00:22:04.903 "data_size": 63488 00:22:04.903 } 00:22:04.903 ] 00:22:04.903 }' 00:22:04.903 15:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.903 15:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.161 15:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:05.440 [2024-07-23 15:16:00.662048] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:05.440 [2024-07-23 15:16:00.662269] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:05.440 [2024-07-23 15:16:00.664755] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:05.440 [2024-07-23 15:16:00.664940] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.440 [2024-07-23 15:16:00.665074] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to fr0 00:22:05.440 ee all in destruct 00:22:05.440 [2024-07-23 15:16:00.665190] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:22:05.440 15:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 104004 00:22:05.440 15:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 104004 ']' 00:22:05.440 15:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 104004 00:22:05.440 15:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:22:05.440 15:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:05.440 15:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104004 00:22:05.440 killing process with pid 104004 00:22:05.440 15:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:05.440 15:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:22:05.440 15:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104004' 00:22:05.440 15:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 104004 00:22:05.440 [2024-07-23 15:16:00.730668] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:05.440 15:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 104004 00:22:05.440 [2024-07-23 15:16:00.765740] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:05.740 15:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.s1TyT8tX8u 00:22:05.740 15:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:05.740 15:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:05.740 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.56 00:22:05.740 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:22:05.740 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:05.740 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:05.740 15:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.56 != \0\.\0\0 ]] 00:22:05.740 00:22:05.740 real 0m5.615s 00:22:05.740 user 0m8.766s 00:22:05.740 sys 0m1.066s 00:22:05.740 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:05.740 ************************************ 00:22:05.740 END TEST raid_read_error_test 00:22:05.740 ************************************ 00:22:05.740 15:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.740 15:16:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:05.740 15:16:01 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:22:05.740 15:16:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:05.740 15:16:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.740 15:16:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:05.740 ************************************ 00:22:05.740 START TEST raid_write_error_test 00:22:05.740 ************************************ 00:22:05.740 15:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:22:05.740 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:22:05.740 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:22:05.740 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:22:05.740 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:05.740 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:05.740 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:22:05.740 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ 
)) 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.JJ7m1LlXxb 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=104174 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 104174 /var/tmp/spdk-raid.sock 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 104174 ']' 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:05.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.741 15:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.741 [2024-07-23 15:16:01.139536] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
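Both error tests drive I/O through a bdevperf instance that is started idle (-z) against the RPC socket and only begins its timed run when perform_tests is issued; the failure is then injected on the first EE_ error bdev. A sketch of that flow with the options logged above (the write-failure injection is an assumption mirroring the read case, since this section of the log ends before that command appears):

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
RPC_SOCK=/var/tmp/spdk-raid.sock
# idle bdevperf against the raid target: 60 s randrw at a 50/50 mix, 128 KiB I/O, queue depth 1
$BDEVPERF -r $RPC_SOCK -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
# ... assemble the concat raid over the EE_* bdevs via rpc.py, then:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $RPC_SOCK perform_tests &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $RPC_SOCK bdev_error_inject_error EE_BaseBdev1_malloc write failure   # 'write' assumed here; the read test above used 'read failure'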
00:22:05.741 [2024-07-23 15:16:01.139692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104174 ] 00:22:05.999 [2024-07-23 15:16:01.278643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.999 [2024-07-23 15:16:01.325038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.999 [2024-07-23 15:16:01.370014] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:06.934 15:16:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.934 15:16:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:22:06.935 15:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:06.935 15:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:06.935 BaseBdev1_malloc 00:22:06.935 15:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:07.193 true 00:22:07.193 15:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:07.451 [2024-07-23 15:16:02.653302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:07.451 [2024-07-23 15:16:02.653379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.451 [2024-07-23 15:16:02.653412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:22:07.451 [2024-07-23 15:16:02.653426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.451 [2024-07-23 15:16:02.656184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.451 [2024-07-23 15:16:02.656225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:07.451 BaseBdev1 00:22:07.451 15:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:07.451 15:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:07.451 BaseBdev2_malloc 00:22:07.451 15:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:07.708 true 00:22:07.708 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:07.965 [2024-07-23 15:16:03.190683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:07.965 [2024-07-23 15:16:03.190754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.965 [2024-07-23 15:16:03.190783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:22:07.965 [2024-07-23 
15:16:03.190813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.965 [2024-07-23 15:16:03.193287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.965 [2024-07-23 15:16:03.193324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:07.965 BaseBdev2 00:22:07.965 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:07.965 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:07.965 BaseBdev3_malloc 00:22:07.965 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:08.223 true 00:22:08.223 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:08.481 [2024-07-23 15:16:03.711854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:08.481 [2024-07-23 15:16:03.711925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.481 [2024-07-23 15:16:03.711954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:22:08.481 [2024-07-23 15:16:03.711967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.481 [2024-07-23 15:16:03.714551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.481 [2024-07-23 15:16:03.714589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:08.481 BaseBdev3 00:22:08.481 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:08.481 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:08.481 BaseBdev4_malloc 00:22:08.739 15:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:08.739 true 00:22:08.739 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:08.997 [2024-07-23 15:16:04.253277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:08.997 [2024-07-23 15:16:04.253348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.997 [2024-07-23 15:16:04.253380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:22:08.997 [2024-07-23 15:16:04.253392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.997 [2024-07-23 15:16:04.255837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.997 [2024-07-23 15:16:04.255874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:08.997 BaseBdev4 00:22:08.997 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:09.255 [2024-07-23 15:16:04.429396] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:09.255 [2024-07-23 15:16:04.431719] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:09.255 [2024-07-23 15:16:04.431828] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:09.255 [2024-07-23 15:16:04.431887] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:09.255 [2024-07-23 15:16:04.432138] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:22:09.255 [2024-07-23 15:16:04.432169] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:09.255 [2024-07-23 15:16:04.432310] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:22:09.255 [2024-07-23 15:16:04.432668] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:22:09.255 [2024-07-23 15:16:04.432708] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:22:09.255 [2024-07-23 15:16:04.432842] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:09.255 "name": "raid_bdev1", 00:22:09.255 "uuid": "b2b4e55e-213d-4d2b-a6a3-7e14b4fcbc20", 00:22:09.255 "strip_size_kb": 64, 00:22:09.255 "state": "online", 00:22:09.255 "raid_level": "concat", 00:22:09.255 "superblock": true, 00:22:09.255 "num_base_bdevs": 4, 00:22:09.255 "num_base_bdevs_discovered": 4, 00:22:09.255 "num_base_bdevs_operational": 4, 00:22:09.255 "base_bdevs_list": [ 00:22:09.255 { 00:22:09.255 "name": "BaseBdev1", 00:22:09.255 "uuid": "21585dff-94b3-5f41-91e9-0855dafde740", 00:22:09.255 "is_configured": true, 00:22:09.255 "data_offset": 2048, 00:22:09.255 "data_size": 63488 00:22:09.255 }, 00:22:09.255 { 
00:22:09.255 "name": "BaseBdev2", 00:22:09.255 "uuid": "3a88aace-4107-50a5-915e-c076f4bcb6f6", 00:22:09.255 "is_configured": true, 00:22:09.255 "data_offset": 2048, 00:22:09.255 "data_size": 63488 00:22:09.255 }, 00:22:09.255 { 00:22:09.255 "name": "BaseBdev3", 00:22:09.255 "uuid": "d3158940-7b6c-52a4-add2-5bc18aea5e05", 00:22:09.255 "is_configured": true, 00:22:09.255 "data_offset": 2048, 00:22:09.255 "data_size": 63488 00:22:09.255 }, 00:22:09.255 { 00:22:09.255 "name": "BaseBdev4", 00:22:09.255 "uuid": "b8492286-4aa5-5248-af03-991e70102c27", 00:22:09.255 "is_configured": true, 00:22:09.255 "data_offset": 2048, 00:22:09.255 "data_size": 63488 00:22:09.255 } 00:22:09.255 ] 00:22:09.255 }' 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:09.255 15:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.514 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:09.514 15:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:09.772 [2024-07-23 15:16:05.017947] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:22:10.708 15:16:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:10.968 "name": "raid_bdev1", 00:22:10.968 "uuid": "b2b4e55e-213d-4d2b-a6a3-7e14b4fcbc20", 00:22:10.968 "strip_size_kb": 64, 00:22:10.968 "state": "online", 00:22:10.968 
"raid_level": "concat", 00:22:10.968 "superblock": true, 00:22:10.968 "num_base_bdevs": 4, 00:22:10.968 "num_base_bdevs_discovered": 4, 00:22:10.968 "num_base_bdevs_operational": 4, 00:22:10.968 "base_bdevs_list": [ 00:22:10.968 { 00:22:10.968 "name": "BaseBdev1", 00:22:10.968 "uuid": "21585dff-94b3-5f41-91e9-0855dafde740", 00:22:10.968 "is_configured": true, 00:22:10.968 "data_offset": 2048, 00:22:10.968 "data_size": 63488 00:22:10.968 }, 00:22:10.968 { 00:22:10.968 "name": "BaseBdev2", 00:22:10.968 "uuid": "3a88aace-4107-50a5-915e-c076f4bcb6f6", 00:22:10.968 "is_configured": true, 00:22:10.968 "data_offset": 2048, 00:22:10.968 "data_size": 63488 00:22:10.968 }, 00:22:10.968 { 00:22:10.968 "name": "BaseBdev3", 00:22:10.968 "uuid": "d3158940-7b6c-52a4-add2-5bc18aea5e05", 00:22:10.968 "is_configured": true, 00:22:10.968 "data_offset": 2048, 00:22:10.968 "data_size": 63488 00:22:10.968 }, 00:22:10.968 { 00:22:10.968 "name": "BaseBdev4", 00:22:10.968 "uuid": "b8492286-4aa5-5248-af03-991e70102c27", 00:22:10.968 "is_configured": true, 00:22:10.968 "data_offset": 2048, 00:22:10.968 "data_size": 63488 00:22:10.968 } 00:22:10.968 ] 00:22:10.968 }' 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:10.968 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.560 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:11.560 [2024-07-23 15:16:06.904069] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:11.560 [2024-07-23 15:16:06.904122] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:11.560 [2024-07-23 15:16:06.906608] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:11.560 [2024-07-23 15:16:06.906673] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:11.560 [2024-07-23 15:16:06.906721] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:11.560 [2024-07-23 15:16:06.906733] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:22:11.560 0 00:22:11.560 15:16:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 104174 00:22:11.560 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 104174 ']' 00:22:11.560 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 104174 00:22:11.560 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:22:11.560 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:11.560 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104174 00:22:11.560 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:11.560 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:11.560 killing process with pid 104174 00:22:11.560 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104174' 00:22:11.560 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 104174 00:22:11.560 [2024-07-23 15:16:06.962690] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:11.560 15:16:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 104174 00:22:11.818 [2024-07-23 15:16:06.998990] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:11.818 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.JJ7m1LlXxb 00:22:11.818 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:11.818 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:11.818 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:22:11.819 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:22:11.819 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:11.819 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:11.819 15:16:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:22:11.819 00:22:11.819 real 0m6.168s 00:22:11.819 user 0m9.426s 00:22:11.819 sys 0m1.081s 00:22:11.819 15:16:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:11.819 15:16:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.819 ************************************ 00:22:11.819 END TEST raid_write_error_test 00:22:11.819 ************************************ 00:22:12.077 15:16:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:12.077 15:16:07 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:22:12.077 15:16:07 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:22:12.077 15:16:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:12.077 15:16:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:12.077 15:16:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:12.077 ************************************ 00:22:12.077 START TEST raid_state_function_test 00:22:12.077 ************************************ 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:12.077 
15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:22:12.077 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:22:12.078 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=104348 00:22:12.078 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:12.078 Process raid pid: 104348 00:22:12.078 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 104348' 00:22:12.078 15:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 104348 /var/tmp/spdk-raid.sock 00:22:12.078 15:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 104348 ']' 00:22:12.078 15:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:12.078 15:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:12.078 15:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:12.078 15:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.078 15:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.078 [2024-07-23 15:16:07.376590] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:22:12.078 [2024-07-23 15:16:07.376764] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.337 [2024-07-23 15:16:07.527895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.337 [2024-07-23 15:16:07.571069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.337 [2024-07-23 15:16:07.615150] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:12.905 15:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.905 15:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:22:12.905 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:13.163 [2024-07-23 15:16:08.376456] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:13.163 [2024-07-23 15:16:08.376525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:13.163 [2024-07-23 15:16:08.376543] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:13.163 [2024-07-23 15:16:08.376558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:13.163 [2024-07-23 15:16:08.376571] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:13.163 [2024-07-23 15:16:08.376583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:13.163 [2024-07-23 15:16:08.376591] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:13.163 [2024-07-23 15:16:08.376606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:13.163 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:13.163 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:13.163 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:13.163 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:13.163 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:13.163 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:13.163 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.163 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.163 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.163 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:13.163 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.163 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:22:13.421 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.421 "name": "Existed_Raid", 00:22:13.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.421 "strip_size_kb": 0, 00:22:13.421 "state": "configuring", 00:22:13.421 "raid_level": "raid1", 00:22:13.421 "superblock": false, 00:22:13.421 "num_base_bdevs": 4, 00:22:13.421 "num_base_bdevs_discovered": 0, 00:22:13.421 "num_base_bdevs_operational": 4, 00:22:13.421 "base_bdevs_list": [ 00:22:13.421 { 00:22:13.421 "name": "BaseBdev1", 00:22:13.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.421 "is_configured": false, 00:22:13.421 "data_offset": 0, 00:22:13.422 "data_size": 0 00:22:13.422 }, 00:22:13.422 { 00:22:13.422 "name": "BaseBdev2", 00:22:13.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.422 "is_configured": false, 00:22:13.422 "data_offset": 0, 00:22:13.422 "data_size": 0 00:22:13.422 }, 00:22:13.422 { 00:22:13.422 "name": "BaseBdev3", 00:22:13.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.422 "is_configured": false, 00:22:13.422 "data_offset": 0, 00:22:13.422 "data_size": 0 00:22:13.422 }, 00:22:13.422 { 00:22:13.422 "name": "BaseBdev4", 00:22:13.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.422 "is_configured": false, 00:22:13.422 "data_offset": 0, 00:22:13.422 "data_size": 0 00:22:13.422 } 00:22:13.422 ] 00:22:13.422 }' 00:22:13.422 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.422 15:16:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.680 15:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:13.938 [2024-07-23 15:16:09.216500] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:13.938 [2024-07-23 15:16:09.216553] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:22:13.938 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:14.197 [2024-07-23 15:16:09.396568] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:14.197 [2024-07-23 15:16:09.396634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:14.197 [2024-07-23 15:16:09.396648] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:14.197 [2024-07-23 15:16:09.396661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:14.197 [2024-07-23 15:16:09.396669] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:14.197 [2024-07-23 15:16:09.396681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:14.197 [2024-07-23 15:16:09.396688] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:14.197 [2024-07-23 15:16:09.396702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:14.197 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:14.197 [2024-07-23 15:16:09.582099] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:14.197 BaseBdev1 00:22:14.197 15:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:14.197 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:14.197 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:14.197 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:14.197 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:14.197 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:14.197 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:14.457 15:16:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:14.715 [ 00:22:14.715 { 00:22:14.715 "name": "BaseBdev1", 00:22:14.715 "aliases": [ 00:22:14.715 "939764ed-0294-49d1-a047-2c32b2c7638f" 00:22:14.715 ], 00:22:14.715 "product_name": "Malloc disk", 00:22:14.715 "block_size": 512, 00:22:14.715 "num_blocks": 65536, 00:22:14.715 "uuid": "939764ed-0294-49d1-a047-2c32b2c7638f", 00:22:14.715 "assigned_rate_limits": { 00:22:14.715 "rw_ios_per_sec": 0, 00:22:14.715 "rw_mbytes_per_sec": 0, 00:22:14.715 "r_mbytes_per_sec": 0, 00:22:14.715 "w_mbytes_per_sec": 0 00:22:14.715 }, 00:22:14.715 "claimed": true, 00:22:14.715 "claim_type": "exclusive_write", 00:22:14.715 "zoned": false, 00:22:14.715 "supported_io_types": { 00:22:14.715 "read": true, 00:22:14.715 "write": true, 00:22:14.715 "unmap": true, 00:22:14.715 "flush": true, 00:22:14.715 "reset": true, 00:22:14.715 "nvme_admin": false, 00:22:14.715 "nvme_io": false, 00:22:14.715 "nvme_io_md": false, 00:22:14.715 "write_zeroes": true, 00:22:14.715 "zcopy": true, 00:22:14.715 "get_zone_info": false, 00:22:14.715 "zone_management": false, 00:22:14.715 "zone_append": false, 00:22:14.715 "compare": false, 00:22:14.715 "compare_and_write": false, 00:22:14.715 "abort": true, 00:22:14.715 "seek_hole": false, 00:22:14.715 "seek_data": false, 00:22:14.715 "copy": true, 00:22:14.715 "nvme_iov_md": false 00:22:14.715 }, 00:22:14.715 "memory_domains": [ 00:22:14.715 { 00:22:14.715 "dma_device_id": "system", 00:22:14.715 "dma_device_type": 1 00:22:14.715 }, 00:22:14.715 { 00:22:14.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.715 "dma_device_type": 2 00:22:14.715 } 00:22:14.715 ], 00:22:14.715 "driver_specific": {} 00:22:14.715 } 00:22:14.715 ] 00:22:14.715 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:14.715 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:14.715 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:14.715 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:14.716 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:14.716 15:16:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:14.716 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:14.716 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.716 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:14.716 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:14.716 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.716 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.716 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.974 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.974 "name": "Existed_Raid", 00:22:14.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.974 "strip_size_kb": 0, 00:22:14.974 "state": "configuring", 00:22:14.974 "raid_level": "raid1", 00:22:14.974 "superblock": false, 00:22:14.974 "num_base_bdevs": 4, 00:22:14.974 "num_base_bdevs_discovered": 1, 00:22:14.974 "num_base_bdevs_operational": 4, 00:22:14.974 "base_bdevs_list": [ 00:22:14.974 { 00:22:14.974 "name": "BaseBdev1", 00:22:14.974 "uuid": "939764ed-0294-49d1-a047-2c32b2c7638f", 00:22:14.974 "is_configured": true, 00:22:14.974 "data_offset": 0, 00:22:14.974 "data_size": 65536 00:22:14.974 }, 00:22:14.974 { 00:22:14.974 "name": "BaseBdev2", 00:22:14.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.974 "is_configured": false, 00:22:14.974 "data_offset": 0, 00:22:14.974 "data_size": 0 00:22:14.974 }, 00:22:14.974 { 00:22:14.974 "name": "BaseBdev3", 00:22:14.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.974 "is_configured": false, 00:22:14.974 "data_offset": 0, 00:22:14.974 "data_size": 0 00:22:14.974 }, 00:22:14.974 { 00:22:14.974 "name": "BaseBdev4", 00:22:14.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.974 "is_configured": false, 00:22:14.974 "data_offset": 0, 00:22:14.974 "data_size": 0 00:22:14.974 } 00:22:14.974 ] 00:22:14.974 }' 00:22:14.974 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.974 15:16:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.233 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:15.491 [2024-07-23 15:16:10.746470] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:15.491 [2024-07-23 15:16:10.746720] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:22:15.491 15:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:15.749 [2024-07-23 15:16:10.994579] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:15.749 [2024-07-23 15:16:10.997002] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:15.749 
[2024-07-23 15:16:10.997166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:15.749 [2024-07-23 15:16:10.997255] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:15.749 [2024-07-23 15:16:10.997302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:15.749 [2024-07-23 15:16:10.997329] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:15.749 [2024-07-23 15:16:10.997413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:15.749 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:15.749 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:15.749 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:15.749 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:15.749 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:15.749 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:15.749 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:15.749 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:15.749 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:15.749 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:15.749 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:15.749 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:15.749 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.749 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.007 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:16.007 "name": "Existed_Raid", 00:22:16.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.007 "strip_size_kb": 0, 00:22:16.007 "state": "configuring", 00:22:16.007 "raid_level": "raid1", 00:22:16.007 "superblock": false, 00:22:16.007 "num_base_bdevs": 4, 00:22:16.007 "num_base_bdevs_discovered": 1, 00:22:16.007 "num_base_bdevs_operational": 4, 00:22:16.007 "base_bdevs_list": [ 00:22:16.007 { 00:22:16.007 "name": "BaseBdev1", 00:22:16.007 "uuid": "939764ed-0294-49d1-a047-2c32b2c7638f", 00:22:16.007 "is_configured": true, 00:22:16.007 "data_offset": 0, 00:22:16.007 "data_size": 65536 00:22:16.007 }, 00:22:16.007 { 00:22:16.007 "name": "BaseBdev2", 00:22:16.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.007 "is_configured": false, 00:22:16.007 "data_offset": 0, 00:22:16.007 "data_size": 0 00:22:16.007 }, 00:22:16.007 { 00:22:16.007 "name": "BaseBdev3", 00:22:16.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.007 "is_configured": false, 00:22:16.007 "data_offset": 0, 00:22:16.007 "data_size": 0 00:22:16.007 }, 00:22:16.007 { 00:22:16.007 "name": "BaseBdev4", 
00:22:16.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.007 "is_configured": false, 00:22:16.007 "data_offset": 0, 00:22:16.007 "data_size": 0 00:22:16.007 } 00:22:16.007 ] 00:22:16.007 }' 00:22:16.007 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:16.007 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.265 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:16.523 [2024-07-23 15:16:11.771823] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:16.523 BaseBdev2 00:22:16.523 15:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:16.523 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:16.523 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:16.523 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:16.523 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:16.523 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:16.523 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:16.804 15:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:16.804 [ 00:22:16.804 { 00:22:16.804 "name": "BaseBdev2", 00:22:16.804 "aliases": [ 00:22:16.804 "3dfeebab-cd3f-46f9-896e-59d5d136620b" 00:22:16.804 ], 00:22:16.804 "product_name": "Malloc disk", 00:22:16.804 "block_size": 512, 00:22:16.804 "num_blocks": 65536, 00:22:16.804 "uuid": "3dfeebab-cd3f-46f9-896e-59d5d136620b", 00:22:16.804 "assigned_rate_limits": { 00:22:16.804 "rw_ios_per_sec": 0, 00:22:16.804 "rw_mbytes_per_sec": 0, 00:22:16.804 "r_mbytes_per_sec": 0, 00:22:16.804 "w_mbytes_per_sec": 0 00:22:16.804 }, 00:22:16.804 "claimed": true, 00:22:16.804 "claim_type": "exclusive_write", 00:22:16.804 "zoned": false, 00:22:16.804 "supported_io_types": { 00:22:16.804 "read": true, 00:22:16.804 "write": true, 00:22:16.804 "unmap": true, 00:22:16.804 "flush": true, 00:22:16.804 "reset": true, 00:22:16.804 "nvme_admin": false, 00:22:16.804 "nvme_io": false, 00:22:16.804 "nvme_io_md": false, 00:22:16.804 "write_zeroes": true, 00:22:16.804 "zcopy": true, 00:22:16.804 "get_zone_info": false, 00:22:16.804 "zone_management": false, 00:22:16.804 "zone_append": false, 00:22:16.804 "compare": false, 00:22:16.804 "compare_and_write": false, 00:22:16.804 "abort": true, 00:22:16.804 "seek_hole": false, 00:22:16.804 "seek_data": false, 00:22:16.804 "copy": true, 00:22:16.804 "nvme_iov_md": false 00:22:16.804 }, 00:22:16.804 "memory_domains": [ 00:22:16.804 { 00:22:16.804 "dma_device_id": "system", 00:22:16.804 "dma_device_type": 1 00:22:16.804 }, 00:22:16.804 { 00:22:16.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.805 "dma_device_type": 2 00:22:16.805 } 00:22:16.805 ], 00:22:16.805 "driver_specific": {} 00:22:16.805 } 00:22:16.805 ] 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # return 0 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.805 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.090 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:17.090 "name": "Existed_Raid", 00:22:17.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.090 "strip_size_kb": 0, 00:22:17.090 "state": "configuring", 00:22:17.090 "raid_level": "raid1", 00:22:17.090 "superblock": false, 00:22:17.090 "num_base_bdevs": 4, 00:22:17.090 "num_base_bdevs_discovered": 2, 00:22:17.090 "num_base_bdevs_operational": 4, 00:22:17.090 "base_bdevs_list": [ 00:22:17.090 { 00:22:17.090 "name": "BaseBdev1", 00:22:17.090 "uuid": "939764ed-0294-49d1-a047-2c32b2c7638f", 00:22:17.090 "is_configured": true, 00:22:17.090 "data_offset": 0, 00:22:17.090 "data_size": 65536 00:22:17.090 }, 00:22:17.090 { 00:22:17.090 "name": "BaseBdev2", 00:22:17.090 "uuid": "3dfeebab-cd3f-46f9-896e-59d5d136620b", 00:22:17.090 "is_configured": true, 00:22:17.090 "data_offset": 0, 00:22:17.090 "data_size": 65536 00:22:17.090 }, 00:22:17.090 { 00:22:17.090 "name": "BaseBdev3", 00:22:17.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.090 "is_configured": false, 00:22:17.090 "data_offset": 0, 00:22:17.090 "data_size": 0 00:22:17.090 }, 00:22:17.090 { 00:22:17.090 "name": "BaseBdev4", 00:22:17.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.090 "is_configured": false, 00:22:17.090 "data_offset": 0, 00:22:17.090 "data_size": 0 00:22:17.090 } 00:22:17.090 ] 00:22:17.090 }' 00:22:17.090 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:17.090 15:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.349 15:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
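A condensed sketch of the per-base-bdev pattern that the xtrace above repeats for BaseBdev1 through BaseBdev4 (assembled from the surrounding trace, not itself captured log output; the $rpc shorthand and the comments are introduced here only for brevity, the individual commands and the jq filter appear verbatim in the trace):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_malloc_create 32 512 -b BaseBdev3   # 32 MiB malloc bdev, 512 B blocks (65536 blocks)
$rpc bdev_wait_for_examine                    # let registered examine callbacks finish
$rpc bdev_get_bdevs -b BaseBdev3 -t 2000      # waitforbdev: poll up to 2000 ms for the new bdev
# after each added base bdev the test re-reads the raid bdev and expects
# "state": "configuring" until all four base bdevs exist:
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'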
00:22:17.608 [2024-07-23 15:16:12.995403] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:17.608 BaseBdev3 00:22:17.608 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:17.608 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:17.608 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:17.608 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:17.608 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:17.608 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:17.608 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:17.866 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:18.125 [ 00:22:18.125 { 00:22:18.125 "name": "BaseBdev3", 00:22:18.125 "aliases": [ 00:22:18.125 "56b89fc4-5d06-4bab-9629-26554217cd28" 00:22:18.125 ], 00:22:18.125 "product_name": "Malloc disk", 00:22:18.125 "block_size": 512, 00:22:18.125 "num_blocks": 65536, 00:22:18.125 "uuid": "56b89fc4-5d06-4bab-9629-26554217cd28", 00:22:18.125 "assigned_rate_limits": { 00:22:18.125 "rw_ios_per_sec": 0, 00:22:18.125 "rw_mbytes_per_sec": 0, 00:22:18.125 "r_mbytes_per_sec": 0, 00:22:18.125 "w_mbytes_per_sec": 0 00:22:18.125 }, 00:22:18.125 "claimed": true, 00:22:18.125 "claim_type": "exclusive_write", 00:22:18.125 "zoned": false, 00:22:18.125 "supported_io_types": { 00:22:18.125 "read": true, 00:22:18.125 "write": true, 00:22:18.125 "unmap": true, 00:22:18.125 "flush": true, 00:22:18.125 "reset": true, 00:22:18.125 "nvme_admin": false, 00:22:18.125 "nvme_io": false, 00:22:18.125 "nvme_io_md": false, 00:22:18.125 "write_zeroes": true, 00:22:18.125 "zcopy": true, 00:22:18.125 "get_zone_info": false, 00:22:18.125 "zone_management": false, 00:22:18.125 "zone_append": false, 00:22:18.125 "compare": false, 00:22:18.125 "compare_and_write": false, 00:22:18.125 "abort": true, 00:22:18.125 "seek_hole": false, 00:22:18.125 "seek_data": false, 00:22:18.125 "copy": true, 00:22:18.125 "nvme_iov_md": false 00:22:18.125 }, 00:22:18.125 "memory_domains": [ 00:22:18.125 { 00:22:18.125 "dma_device_id": "system", 00:22:18.125 "dma_device_type": 1 00:22:18.125 }, 00:22:18.125 { 00:22:18.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.125 "dma_device_type": 2 00:22:18.125 } 00:22:18.125 ], 00:22:18.125 "driver_specific": {} 00:22:18.125 } 00:22:18.125 ] 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.125 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.383 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:18.383 "name": "Existed_Raid", 00:22:18.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.383 "strip_size_kb": 0, 00:22:18.383 "state": "configuring", 00:22:18.383 "raid_level": "raid1", 00:22:18.383 "superblock": false, 00:22:18.383 "num_base_bdevs": 4, 00:22:18.383 "num_base_bdevs_discovered": 3, 00:22:18.383 "num_base_bdevs_operational": 4, 00:22:18.383 "base_bdevs_list": [ 00:22:18.383 { 00:22:18.383 "name": "BaseBdev1", 00:22:18.383 "uuid": "939764ed-0294-49d1-a047-2c32b2c7638f", 00:22:18.383 "is_configured": true, 00:22:18.383 "data_offset": 0, 00:22:18.383 "data_size": 65536 00:22:18.383 }, 00:22:18.383 { 00:22:18.383 "name": "BaseBdev2", 00:22:18.383 "uuid": "3dfeebab-cd3f-46f9-896e-59d5d136620b", 00:22:18.383 "is_configured": true, 00:22:18.383 "data_offset": 0, 00:22:18.383 "data_size": 65536 00:22:18.383 }, 00:22:18.383 { 00:22:18.383 "name": "BaseBdev3", 00:22:18.383 "uuid": "56b89fc4-5d06-4bab-9629-26554217cd28", 00:22:18.383 "is_configured": true, 00:22:18.383 "data_offset": 0, 00:22:18.383 "data_size": 65536 00:22:18.383 }, 00:22:18.383 { 00:22:18.383 "name": "BaseBdev4", 00:22:18.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.383 "is_configured": false, 00:22:18.383 "data_offset": 0, 00:22:18.383 "data_size": 0 00:22:18.383 } 00:22:18.383 ] 00:22:18.383 }' 00:22:18.383 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:18.383 15:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.641 15:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:18.899 [2024-07-23 15:16:14.247016] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:18.899 [2024-07-23 15:16:14.247078] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:22:18.899 [2024-07-23 15:16:14.247088] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:18.899 [2024-07-23 15:16:14.247186] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:22:18.899 [2024-07-23 15:16:14.247535] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x516000006080 00:22:18.899 [2024-07-23 15:16:14.247554] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:22:18.899 [2024-07-23 15:16:14.247758] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.899 BaseBdev4 00:22:18.899 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:18.899 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:18.899 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:18.899 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:18.899 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:18.899 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:18.899 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:19.157 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:19.415 [ 00:22:19.415 { 00:22:19.415 "name": "BaseBdev4", 00:22:19.415 "aliases": [ 00:22:19.415 "c0a6e9ea-f10c-4c9c-80e1-662982974419" 00:22:19.415 ], 00:22:19.415 "product_name": "Malloc disk", 00:22:19.415 "block_size": 512, 00:22:19.415 "num_blocks": 65536, 00:22:19.416 "uuid": "c0a6e9ea-f10c-4c9c-80e1-662982974419", 00:22:19.416 "assigned_rate_limits": { 00:22:19.416 "rw_ios_per_sec": 0, 00:22:19.416 "rw_mbytes_per_sec": 0, 00:22:19.416 "r_mbytes_per_sec": 0, 00:22:19.416 "w_mbytes_per_sec": 0 00:22:19.416 }, 00:22:19.416 "claimed": true, 00:22:19.416 "claim_type": "exclusive_write", 00:22:19.416 "zoned": false, 00:22:19.416 "supported_io_types": { 00:22:19.416 "read": true, 00:22:19.416 "write": true, 00:22:19.416 "unmap": true, 00:22:19.416 "flush": true, 00:22:19.416 "reset": true, 00:22:19.416 "nvme_admin": false, 00:22:19.416 "nvme_io": false, 00:22:19.416 "nvme_io_md": false, 00:22:19.416 "write_zeroes": true, 00:22:19.416 "zcopy": true, 00:22:19.416 "get_zone_info": false, 00:22:19.416 "zone_management": false, 00:22:19.416 "zone_append": false, 00:22:19.416 "compare": false, 00:22:19.416 "compare_and_write": false, 00:22:19.416 "abort": true, 00:22:19.416 "seek_hole": false, 00:22:19.416 "seek_data": false, 00:22:19.416 "copy": true, 00:22:19.416 "nvme_iov_md": false 00:22:19.416 }, 00:22:19.416 "memory_domains": [ 00:22:19.416 { 00:22:19.416 "dma_device_id": "system", 00:22:19.416 "dma_device_type": 1 00:22:19.416 }, 00:22:19.416 { 00:22:19.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.416 "dma_device_type": 2 00:22:19.416 } 00:22:19.416 ], 00:22:19.416 "driver_specific": {} 00:22:19.416 } 00:22:19.416 ] 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.416 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.674 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:19.674 "name": "Existed_Raid", 00:22:19.674 "uuid": "4dcdd0c3-2a6c-4316-8ef7-966e008b8f66", 00:22:19.674 "strip_size_kb": 0, 00:22:19.674 "state": "online", 00:22:19.674 "raid_level": "raid1", 00:22:19.674 "superblock": false, 00:22:19.674 "num_base_bdevs": 4, 00:22:19.674 "num_base_bdevs_discovered": 4, 00:22:19.674 "num_base_bdevs_operational": 4, 00:22:19.674 "base_bdevs_list": [ 00:22:19.674 { 00:22:19.674 "name": "BaseBdev1", 00:22:19.674 "uuid": "939764ed-0294-49d1-a047-2c32b2c7638f", 00:22:19.674 "is_configured": true, 00:22:19.674 "data_offset": 0, 00:22:19.674 "data_size": 65536 00:22:19.674 }, 00:22:19.674 { 00:22:19.674 "name": "BaseBdev2", 00:22:19.674 "uuid": "3dfeebab-cd3f-46f9-896e-59d5d136620b", 00:22:19.674 "is_configured": true, 00:22:19.674 "data_offset": 0, 00:22:19.674 "data_size": 65536 00:22:19.674 }, 00:22:19.674 { 00:22:19.674 "name": "BaseBdev3", 00:22:19.674 "uuid": "56b89fc4-5d06-4bab-9629-26554217cd28", 00:22:19.674 "is_configured": true, 00:22:19.674 "data_offset": 0, 00:22:19.674 "data_size": 65536 00:22:19.674 }, 00:22:19.674 { 00:22:19.674 "name": "BaseBdev4", 00:22:19.674 "uuid": "c0a6e9ea-f10c-4c9c-80e1-662982974419", 00:22:19.674 "is_configured": true, 00:22:19.674 "data_offset": 0, 00:22:19.674 "data_size": 65536 00:22:19.674 } 00:22:19.674 ] 00:22:19.674 }' 00:22:19.674 15:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:19.674 15:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.932 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:19.932 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:19.932 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:19.932 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:19.932 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:19.932 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # 
local name 00:22:19.932 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:19.932 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:19.932 [2024-07-23 15:16:15.275657] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.932 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:19.932 "name": "Existed_Raid", 00:22:19.932 "aliases": [ 00:22:19.932 "4dcdd0c3-2a6c-4316-8ef7-966e008b8f66" 00:22:19.932 ], 00:22:19.932 "product_name": "Raid Volume", 00:22:19.932 "block_size": 512, 00:22:19.932 "num_blocks": 65536, 00:22:19.932 "uuid": "4dcdd0c3-2a6c-4316-8ef7-966e008b8f66", 00:22:19.933 "assigned_rate_limits": { 00:22:19.933 "rw_ios_per_sec": 0, 00:22:19.933 "rw_mbytes_per_sec": 0, 00:22:19.933 "r_mbytes_per_sec": 0, 00:22:19.933 "w_mbytes_per_sec": 0 00:22:19.933 }, 00:22:19.933 "claimed": false, 00:22:19.933 "zoned": false, 00:22:19.933 "supported_io_types": { 00:22:19.933 "read": true, 00:22:19.933 "write": true, 00:22:19.933 "unmap": false, 00:22:19.933 "flush": false, 00:22:19.933 "reset": true, 00:22:19.933 "nvme_admin": false, 00:22:19.933 "nvme_io": false, 00:22:19.933 "nvme_io_md": false, 00:22:19.933 "write_zeroes": true, 00:22:19.933 "zcopy": false, 00:22:19.933 "get_zone_info": false, 00:22:19.933 "zone_management": false, 00:22:19.933 "zone_append": false, 00:22:19.933 "compare": false, 00:22:19.933 "compare_and_write": false, 00:22:19.933 "abort": false, 00:22:19.933 "seek_hole": false, 00:22:19.933 "seek_data": false, 00:22:19.933 "copy": false, 00:22:19.933 "nvme_iov_md": false 00:22:19.933 }, 00:22:19.933 "memory_domains": [ 00:22:19.933 { 00:22:19.933 "dma_device_id": "system", 00:22:19.933 "dma_device_type": 1 00:22:19.933 }, 00:22:19.933 { 00:22:19.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.933 "dma_device_type": 2 00:22:19.933 }, 00:22:19.933 { 00:22:19.933 "dma_device_id": "system", 00:22:19.933 "dma_device_type": 1 00:22:19.933 }, 00:22:19.933 { 00:22:19.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.933 "dma_device_type": 2 00:22:19.933 }, 00:22:19.933 { 00:22:19.933 "dma_device_id": "system", 00:22:19.933 "dma_device_type": 1 00:22:19.933 }, 00:22:19.933 { 00:22:19.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.933 "dma_device_type": 2 00:22:19.933 }, 00:22:19.933 { 00:22:19.933 "dma_device_id": "system", 00:22:19.933 "dma_device_type": 1 00:22:19.933 }, 00:22:19.933 { 00:22:19.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.933 "dma_device_type": 2 00:22:19.933 } 00:22:19.933 ], 00:22:19.933 "driver_specific": { 00:22:19.933 "raid": { 00:22:19.933 "uuid": "4dcdd0c3-2a6c-4316-8ef7-966e008b8f66", 00:22:19.933 "strip_size_kb": 0, 00:22:19.933 "state": "online", 00:22:19.933 "raid_level": "raid1", 00:22:19.933 "superblock": false, 00:22:19.933 "num_base_bdevs": 4, 00:22:19.933 "num_base_bdevs_discovered": 4, 00:22:19.933 "num_base_bdevs_operational": 4, 00:22:19.933 "base_bdevs_list": [ 00:22:19.933 { 00:22:19.933 "name": "BaseBdev1", 00:22:19.933 "uuid": "939764ed-0294-49d1-a047-2c32b2c7638f", 00:22:19.933 "is_configured": true, 00:22:19.933 "data_offset": 0, 00:22:19.933 "data_size": 65536 00:22:19.933 }, 00:22:19.933 { 00:22:19.933 "name": "BaseBdev2", 00:22:19.933 "uuid": "3dfeebab-cd3f-46f9-896e-59d5d136620b", 00:22:19.933 "is_configured": true, 00:22:19.933 "data_offset": 0, 00:22:19.933 
"data_size": 65536 00:22:19.933 }, 00:22:19.933 { 00:22:19.933 "name": "BaseBdev3", 00:22:19.933 "uuid": "56b89fc4-5d06-4bab-9629-26554217cd28", 00:22:19.933 "is_configured": true, 00:22:19.933 "data_offset": 0, 00:22:19.933 "data_size": 65536 00:22:19.933 }, 00:22:19.933 { 00:22:19.933 "name": "BaseBdev4", 00:22:19.933 "uuid": "c0a6e9ea-f10c-4c9c-80e1-662982974419", 00:22:19.933 "is_configured": true, 00:22:19.933 "data_offset": 0, 00:22:19.933 "data_size": 65536 00:22:19.933 } 00:22:19.933 ] 00:22:19.933 } 00:22:19.933 } 00:22:19.933 }' 00:22:19.933 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:19.933 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:19.933 BaseBdev2 00:22:19.933 BaseBdev3 00:22:19.933 BaseBdev4' 00:22:19.933 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:19.933 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:19.933 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.192 "name": "BaseBdev1", 00:22:20.192 "aliases": [ 00:22:20.192 "939764ed-0294-49d1-a047-2c32b2c7638f" 00:22:20.192 ], 00:22:20.192 "product_name": "Malloc disk", 00:22:20.192 "block_size": 512, 00:22:20.192 "num_blocks": 65536, 00:22:20.192 "uuid": "939764ed-0294-49d1-a047-2c32b2c7638f", 00:22:20.192 "assigned_rate_limits": { 00:22:20.192 "rw_ios_per_sec": 0, 00:22:20.192 "rw_mbytes_per_sec": 0, 00:22:20.192 "r_mbytes_per_sec": 0, 00:22:20.192 "w_mbytes_per_sec": 0 00:22:20.192 }, 00:22:20.192 "claimed": true, 00:22:20.192 "claim_type": "exclusive_write", 00:22:20.192 "zoned": false, 00:22:20.192 "supported_io_types": { 00:22:20.192 "read": true, 00:22:20.192 "write": true, 00:22:20.192 "unmap": true, 00:22:20.192 "flush": true, 00:22:20.192 "reset": true, 00:22:20.192 "nvme_admin": false, 00:22:20.192 "nvme_io": false, 00:22:20.192 "nvme_io_md": false, 00:22:20.192 "write_zeroes": true, 00:22:20.192 "zcopy": true, 00:22:20.192 "get_zone_info": false, 00:22:20.192 "zone_management": false, 00:22:20.192 "zone_append": false, 00:22:20.192 "compare": false, 00:22:20.192 "compare_and_write": false, 00:22:20.192 "abort": true, 00:22:20.192 "seek_hole": false, 00:22:20.192 "seek_data": false, 00:22:20.192 "copy": true, 00:22:20.192 "nvme_iov_md": false 00:22:20.192 }, 00:22:20.192 "memory_domains": [ 00:22:20.192 { 00:22:20.192 "dma_device_id": "system", 00:22:20.192 "dma_device_type": 1 00:22:20.192 }, 00:22:20.192 { 00:22:20.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.192 "dma_device_type": 2 00:22:20.192 } 00:22:20.192 ], 00:22:20.192 "driver_specific": {} 00:22:20.192 }' 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.192 15:16:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:20.192 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.451 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.451 "name": "BaseBdev2", 00:22:20.451 "aliases": [ 00:22:20.451 "3dfeebab-cd3f-46f9-896e-59d5d136620b" 00:22:20.451 ], 00:22:20.451 "product_name": "Malloc disk", 00:22:20.451 "block_size": 512, 00:22:20.451 "num_blocks": 65536, 00:22:20.451 "uuid": "3dfeebab-cd3f-46f9-896e-59d5d136620b", 00:22:20.451 "assigned_rate_limits": { 00:22:20.451 "rw_ios_per_sec": 0, 00:22:20.451 "rw_mbytes_per_sec": 0, 00:22:20.451 "r_mbytes_per_sec": 0, 00:22:20.451 "w_mbytes_per_sec": 0 00:22:20.451 }, 00:22:20.451 "claimed": true, 00:22:20.451 "claim_type": "exclusive_write", 00:22:20.451 "zoned": false, 00:22:20.451 "supported_io_types": { 00:22:20.451 "read": true, 00:22:20.451 "write": true, 00:22:20.451 "unmap": true, 00:22:20.451 "flush": true, 00:22:20.451 "reset": true, 00:22:20.451 "nvme_admin": false, 00:22:20.451 "nvme_io": false, 00:22:20.451 "nvme_io_md": false, 00:22:20.451 "write_zeroes": true, 00:22:20.451 "zcopy": true, 00:22:20.451 "get_zone_info": false, 00:22:20.451 "zone_management": false, 00:22:20.451 "zone_append": false, 00:22:20.451 "compare": false, 00:22:20.451 "compare_and_write": false, 00:22:20.451 "abort": true, 00:22:20.451 "seek_hole": false, 00:22:20.451 "seek_data": false, 00:22:20.451 "copy": true, 00:22:20.451 "nvme_iov_md": false 00:22:20.451 }, 00:22:20.451 "memory_domains": [ 00:22:20.451 { 00:22:20.451 "dma_device_id": "system", 00:22:20.451 "dma_device_type": 1 00:22:20.451 }, 00:22:20.451 { 00:22:20.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.451 "dma_device_type": 2 00:22:20.451 } 00:22:20.451 ], 00:22:20.451 "driver_specific": {} 00:22:20.451 }' 00:22:20.451 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.451 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.451 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:20.451 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.451 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.710 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:20.710 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:22:20.710 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.710 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:20.710 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.710 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.710 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:20.710 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:20.710 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:20.710 15:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.710 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.710 "name": "BaseBdev3", 00:22:20.710 "aliases": [ 00:22:20.710 "56b89fc4-5d06-4bab-9629-26554217cd28" 00:22:20.710 ], 00:22:20.710 "product_name": "Malloc disk", 00:22:20.710 "block_size": 512, 00:22:20.710 "num_blocks": 65536, 00:22:20.710 "uuid": "56b89fc4-5d06-4bab-9629-26554217cd28", 00:22:20.710 "assigned_rate_limits": { 00:22:20.710 "rw_ios_per_sec": 0, 00:22:20.710 "rw_mbytes_per_sec": 0, 00:22:20.710 "r_mbytes_per_sec": 0, 00:22:20.710 "w_mbytes_per_sec": 0 00:22:20.710 }, 00:22:20.710 "claimed": true, 00:22:20.710 "claim_type": "exclusive_write", 00:22:20.710 "zoned": false, 00:22:20.710 "supported_io_types": { 00:22:20.710 "read": true, 00:22:20.710 "write": true, 00:22:20.710 "unmap": true, 00:22:20.710 "flush": true, 00:22:20.710 "reset": true, 00:22:20.710 "nvme_admin": false, 00:22:20.710 "nvme_io": false, 00:22:20.710 "nvme_io_md": false, 00:22:20.710 "write_zeroes": true, 00:22:20.710 "zcopy": true, 00:22:20.710 "get_zone_info": false, 00:22:20.710 "zone_management": false, 00:22:20.710 "zone_append": false, 00:22:20.710 "compare": false, 00:22:20.710 "compare_and_write": false, 00:22:20.710 "abort": true, 00:22:20.710 "seek_hole": false, 00:22:20.710 "seek_data": false, 00:22:20.710 "copy": true, 00:22:20.710 "nvme_iov_md": false 00:22:20.710 }, 00:22:20.710 "memory_domains": [ 00:22:20.710 { 00:22:20.710 "dma_device_id": "system", 00:22:20.710 "dma_device_type": 1 00:22:20.710 }, 00:22:20.710 { 00:22:20.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.710 "dma_device_type": 2 00:22:20.710 } 00:22:20.710 ], 00:22:20.710 "driver_specific": {} 00:22:20.710 }' 00:22:20.710 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.710 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.710 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:20.710 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.969 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.969 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:20.969 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.969 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.969 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null 
== null ]] 00:22:20.969 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.969 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.969 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:20.969 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:20.969 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:20.969 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:21.228 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:21.228 "name": "BaseBdev4", 00:22:21.228 "aliases": [ 00:22:21.228 "c0a6e9ea-f10c-4c9c-80e1-662982974419" 00:22:21.228 ], 00:22:21.228 "product_name": "Malloc disk", 00:22:21.228 "block_size": 512, 00:22:21.228 "num_blocks": 65536, 00:22:21.228 "uuid": "c0a6e9ea-f10c-4c9c-80e1-662982974419", 00:22:21.228 "assigned_rate_limits": { 00:22:21.228 "rw_ios_per_sec": 0, 00:22:21.228 "rw_mbytes_per_sec": 0, 00:22:21.228 "r_mbytes_per_sec": 0, 00:22:21.228 "w_mbytes_per_sec": 0 00:22:21.228 }, 00:22:21.228 "claimed": true, 00:22:21.228 "claim_type": "exclusive_write", 00:22:21.228 "zoned": false, 00:22:21.228 "supported_io_types": { 00:22:21.228 "read": true, 00:22:21.228 "write": true, 00:22:21.228 "unmap": true, 00:22:21.228 "flush": true, 00:22:21.228 "reset": true, 00:22:21.228 "nvme_admin": false, 00:22:21.228 "nvme_io": false, 00:22:21.228 "nvme_io_md": false, 00:22:21.228 "write_zeroes": true, 00:22:21.228 "zcopy": true, 00:22:21.228 "get_zone_info": false, 00:22:21.228 "zone_management": false, 00:22:21.228 "zone_append": false, 00:22:21.228 "compare": false, 00:22:21.228 "compare_and_write": false, 00:22:21.228 "abort": true, 00:22:21.228 "seek_hole": false, 00:22:21.228 "seek_data": false, 00:22:21.228 "copy": true, 00:22:21.228 "nvme_iov_md": false 00:22:21.228 }, 00:22:21.228 "memory_domains": [ 00:22:21.228 { 00:22:21.228 "dma_device_id": "system", 00:22:21.228 "dma_device_type": 1 00:22:21.228 }, 00:22:21.228 { 00:22:21.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.228 "dma_device_type": 2 00:22:21.228 } 00:22:21.228 ], 00:22:21.228 "driver_specific": {} 00:22:21.228 }' 00:22:21.228 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.228 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.228 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:21.228 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.228 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.228 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:21.228 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.228 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.228 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:21.228 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.228 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:22:21.228 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:21.228 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:21.487 [2024-07-23 15:16:16.811743] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:21.487 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:21.487 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.488 15:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:21.746 15:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:21.746 "name": "Existed_Raid", 00:22:21.746 "uuid": "4dcdd0c3-2a6c-4316-8ef7-966e008b8f66", 00:22:21.746 "strip_size_kb": 0, 00:22:21.746 "state": "online", 00:22:21.746 "raid_level": "raid1", 00:22:21.746 "superblock": false, 00:22:21.746 "num_base_bdevs": 4, 00:22:21.746 "num_base_bdevs_discovered": 3, 00:22:21.746 "num_base_bdevs_operational": 3, 00:22:21.746 "base_bdevs_list": [ 00:22:21.746 { 00:22:21.746 "name": null, 00:22:21.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.746 "is_configured": false, 00:22:21.746 "data_offset": 0, 00:22:21.746 "data_size": 65536 00:22:21.746 }, 00:22:21.746 { 00:22:21.746 "name": "BaseBdev2", 00:22:21.746 "uuid": "3dfeebab-cd3f-46f9-896e-59d5d136620b", 00:22:21.746 "is_configured": true, 00:22:21.746 "data_offset": 0, 00:22:21.746 "data_size": 65536 00:22:21.746 }, 00:22:21.746 { 00:22:21.746 "name": "BaseBdev3", 00:22:21.746 "uuid": "56b89fc4-5d06-4bab-9629-26554217cd28", 00:22:21.746 "is_configured": true, 00:22:21.746 "data_offset": 0, 00:22:21.746 "data_size": 65536 00:22:21.747 
}, 00:22:21.747 { 00:22:21.747 "name": "BaseBdev4", 00:22:21.747 "uuid": "c0a6e9ea-f10c-4c9c-80e1-662982974419", 00:22:21.747 "is_configured": true, 00:22:21.747 "data_offset": 0, 00:22:21.747 "data_size": 65536 00:22:21.747 } 00:22:21.747 ] 00:22:21.747 }' 00:22:21.747 15:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:21.747 15:16:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.312 15:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:22.312 15:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:22.312 15:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.312 15:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:22.312 15:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:22.312 15:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:22.312 15:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:22.597 [2024-07-23 15:16:17.888435] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:22.597 15:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:22.597 15:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:22.597 15:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.597 15:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:22.855 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:22.855 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:22.855 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:22.855 [2024-07-23 15:16:18.269011] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:23.113 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:23.113 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:23.113 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.113 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:23.371 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:23.372 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:23.372 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:23.372 [2024-07-23 15:16:18.709492] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 
00:22:23.372 [2024-07-23 15:16:18.709602] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:23.372 [2024-07-23 15:16:18.722215] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:23.372 [2024-07-23 15:16:18.722271] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:23.372 [2024-07-23 15:16:18.722286] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:22:23.372 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:23.372 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:23.372 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.372 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:23.629 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:23.629 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:23.629 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:23.629 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:23.629 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:23.629 15:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:23.887 BaseBdev2 00:22:23.887 15:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:23.887 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:23.887 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:23.887 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:23.887 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:23.887 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:23.887 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:24.145 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:24.145 [ 00:22:24.145 { 00:22:24.145 "name": "BaseBdev2", 00:22:24.145 "aliases": [ 00:22:24.145 "fe693153-0225-4081-a4e3-3e6892a8ce6e" 00:22:24.145 ], 00:22:24.146 "product_name": "Malloc disk", 00:22:24.146 "block_size": 512, 00:22:24.146 "num_blocks": 65536, 00:22:24.146 "uuid": "fe693153-0225-4081-a4e3-3e6892a8ce6e", 00:22:24.146 "assigned_rate_limits": { 00:22:24.146 "rw_ios_per_sec": 0, 00:22:24.146 "rw_mbytes_per_sec": 0, 00:22:24.146 "r_mbytes_per_sec": 0, 00:22:24.146 "w_mbytes_per_sec": 0 00:22:24.146 }, 00:22:24.146 "claimed": false, 00:22:24.146 "zoned": false, 00:22:24.146 "supported_io_types": { 00:22:24.146 "read": true, 00:22:24.146 "write": true, 00:22:24.146 
"unmap": true, 00:22:24.146 "flush": true, 00:22:24.146 "reset": true, 00:22:24.146 "nvme_admin": false, 00:22:24.146 "nvme_io": false, 00:22:24.146 "nvme_io_md": false, 00:22:24.146 "write_zeroes": true, 00:22:24.146 "zcopy": true, 00:22:24.146 "get_zone_info": false, 00:22:24.146 "zone_management": false, 00:22:24.146 "zone_append": false, 00:22:24.146 "compare": false, 00:22:24.146 "compare_and_write": false, 00:22:24.146 "abort": true, 00:22:24.146 "seek_hole": false, 00:22:24.146 "seek_data": false, 00:22:24.146 "copy": true, 00:22:24.146 "nvme_iov_md": false 00:22:24.146 }, 00:22:24.146 "memory_domains": [ 00:22:24.146 { 00:22:24.146 "dma_device_id": "system", 00:22:24.146 "dma_device_type": 1 00:22:24.146 }, 00:22:24.146 { 00:22:24.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.146 "dma_device_type": 2 00:22:24.146 } 00:22:24.146 ], 00:22:24.146 "driver_specific": {} 00:22:24.146 } 00:22:24.146 ] 00:22:24.146 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:24.146 15:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:24.146 15:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:24.146 15:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:24.403 BaseBdev3 00:22:24.403 15:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:24.403 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:24.403 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:24.403 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:24.403 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:24.403 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:24.403 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:24.661 15:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:24.918 [ 00:22:24.918 { 00:22:24.918 "name": "BaseBdev3", 00:22:24.918 "aliases": [ 00:22:24.918 "613fe324-4f10-4bbc-be2a-d26c0d6d49bf" 00:22:24.918 ], 00:22:24.919 "product_name": "Malloc disk", 00:22:24.919 "block_size": 512, 00:22:24.919 "num_blocks": 65536, 00:22:24.919 "uuid": "613fe324-4f10-4bbc-be2a-d26c0d6d49bf", 00:22:24.919 "assigned_rate_limits": { 00:22:24.919 "rw_ios_per_sec": 0, 00:22:24.919 "rw_mbytes_per_sec": 0, 00:22:24.919 "r_mbytes_per_sec": 0, 00:22:24.919 "w_mbytes_per_sec": 0 00:22:24.919 }, 00:22:24.919 "claimed": false, 00:22:24.919 "zoned": false, 00:22:24.919 "supported_io_types": { 00:22:24.919 "read": true, 00:22:24.919 "write": true, 00:22:24.919 "unmap": true, 00:22:24.919 "flush": true, 00:22:24.919 "reset": true, 00:22:24.919 "nvme_admin": false, 00:22:24.919 "nvme_io": false, 00:22:24.919 "nvme_io_md": false, 00:22:24.919 "write_zeroes": true, 00:22:24.919 "zcopy": true, 00:22:24.919 "get_zone_info": false, 00:22:24.919 "zone_management": false, 00:22:24.919 "zone_append": false, 
00:22:24.919 "compare": false, 00:22:24.919 "compare_and_write": false, 00:22:24.919 "abort": true, 00:22:24.919 "seek_hole": false, 00:22:24.919 "seek_data": false, 00:22:24.919 "copy": true, 00:22:24.919 "nvme_iov_md": false 00:22:24.919 }, 00:22:24.919 "memory_domains": [ 00:22:24.919 { 00:22:24.919 "dma_device_id": "system", 00:22:24.919 "dma_device_type": 1 00:22:24.919 }, 00:22:24.919 { 00:22:24.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.919 "dma_device_type": 2 00:22:24.919 } 00:22:24.919 ], 00:22:24.919 "driver_specific": {} 00:22:24.919 } 00:22:24.919 ] 00:22:24.919 15:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:24.919 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:24.919 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:24.919 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:24.919 BaseBdev4 00:22:24.919 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:24.919 15:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:24.919 15:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:24.919 15:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:24.919 15:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:24.919 15:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:24.919 15:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:25.176 15:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:25.434 [ 00:22:25.434 { 00:22:25.434 "name": "BaseBdev4", 00:22:25.434 "aliases": [ 00:22:25.434 "968c0e20-1c07-485a-a1cb-6a6cd483bd29" 00:22:25.434 ], 00:22:25.434 "product_name": "Malloc disk", 00:22:25.434 "block_size": 512, 00:22:25.434 "num_blocks": 65536, 00:22:25.434 "uuid": "968c0e20-1c07-485a-a1cb-6a6cd483bd29", 00:22:25.434 "assigned_rate_limits": { 00:22:25.434 "rw_ios_per_sec": 0, 00:22:25.434 "rw_mbytes_per_sec": 0, 00:22:25.434 "r_mbytes_per_sec": 0, 00:22:25.434 "w_mbytes_per_sec": 0 00:22:25.434 }, 00:22:25.434 "claimed": false, 00:22:25.434 "zoned": false, 00:22:25.434 "supported_io_types": { 00:22:25.434 "read": true, 00:22:25.434 "write": true, 00:22:25.434 "unmap": true, 00:22:25.434 "flush": true, 00:22:25.434 "reset": true, 00:22:25.434 "nvme_admin": false, 00:22:25.434 "nvme_io": false, 00:22:25.434 "nvme_io_md": false, 00:22:25.434 "write_zeroes": true, 00:22:25.434 "zcopy": true, 00:22:25.434 "get_zone_info": false, 00:22:25.434 "zone_management": false, 00:22:25.434 "zone_append": false, 00:22:25.434 "compare": false, 00:22:25.434 "compare_and_write": false, 00:22:25.434 "abort": true, 00:22:25.434 "seek_hole": false, 00:22:25.434 "seek_data": false, 00:22:25.434 "copy": true, 00:22:25.434 "nvme_iov_md": false 00:22:25.434 }, 00:22:25.434 "memory_domains": [ 00:22:25.434 { 00:22:25.434 "dma_device_id": "system", 00:22:25.434 
"dma_device_type": 1 00:22:25.434 }, 00:22:25.434 { 00:22:25.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.434 "dma_device_type": 2 00:22:25.434 } 00:22:25.434 ], 00:22:25.434 "driver_specific": {} 00:22:25.434 } 00:22:25.434 ] 00:22:25.434 15:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:25.434 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:25.434 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:25.434 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:25.434 [2024-07-23 15:16:20.854619] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:25.434 [2024-07-23 15:16:20.854685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:25.434 [2024-07-23 15:16:20.854711] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:25.434 [2024-07-23 15:16:20.856892] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:25.434 [2024-07-23 15:16:20.856950] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:25.691 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:25.691 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:25.691 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:25.691 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:25.691 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:25.691 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:25.691 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.691 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:25.691 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.691 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.691 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.691 15:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.691 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:25.691 "name": "Existed_Raid", 00:22:25.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.691 "strip_size_kb": 0, 00:22:25.691 "state": "configuring", 00:22:25.691 "raid_level": "raid1", 00:22:25.691 "superblock": false, 00:22:25.691 "num_base_bdevs": 4, 00:22:25.691 "num_base_bdevs_discovered": 3, 00:22:25.691 "num_base_bdevs_operational": 4, 00:22:25.691 "base_bdevs_list": [ 00:22:25.691 { 00:22:25.691 "name": "BaseBdev1", 00:22:25.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.691 "is_configured": false, 
00:22:25.691 "data_offset": 0, 00:22:25.691 "data_size": 0 00:22:25.691 }, 00:22:25.691 { 00:22:25.691 "name": "BaseBdev2", 00:22:25.691 "uuid": "fe693153-0225-4081-a4e3-3e6892a8ce6e", 00:22:25.691 "is_configured": true, 00:22:25.691 "data_offset": 0, 00:22:25.691 "data_size": 65536 00:22:25.691 }, 00:22:25.691 { 00:22:25.691 "name": "BaseBdev3", 00:22:25.691 "uuid": "613fe324-4f10-4bbc-be2a-d26c0d6d49bf", 00:22:25.691 "is_configured": true, 00:22:25.691 "data_offset": 0, 00:22:25.691 "data_size": 65536 00:22:25.691 }, 00:22:25.691 { 00:22:25.691 "name": "BaseBdev4", 00:22:25.691 "uuid": "968c0e20-1c07-485a-a1cb-6a6cd483bd29", 00:22:25.691 "is_configured": true, 00:22:25.691 "data_offset": 0, 00:22:25.691 "data_size": 65536 00:22:25.691 } 00:22:25.691 ] 00:22:25.691 }' 00:22:25.691 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.691 15:16:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.257 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:26.257 [2024-07-23 15:16:21.594767] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:26.257 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:26.257 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:26.257 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:26.257 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:26.257 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:26.257 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:26.257 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.257 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:26.257 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.257 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:26.257 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.257 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.515 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:26.515 "name": "Existed_Raid", 00:22:26.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.515 "strip_size_kb": 0, 00:22:26.515 "state": "configuring", 00:22:26.515 "raid_level": "raid1", 00:22:26.515 "superblock": false, 00:22:26.515 "num_base_bdevs": 4, 00:22:26.515 "num_base_bdevs_discovered": 2, 00:22:26.515 "num_base_bdevs_operational": 4, 00:22:26.515 "base_bdevs_list": [ 00:22:26.515 { 00:22:26.515 "name": "BaseBdev1", 00:22:26.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.515 "is_configured": false, 00:22:26.515 "data_offset": 0, 00:22:26.515 "data_size": 0 00:22:26.515 }, 00:22:26.515 { 00:22:26.515 "name": null, 00:22:26.515 "uuid": 
"fe693153-0225-4081-a4e3-3e6892a8ce6e", 00:22:26.515 "is_configured": false, 00:22:26.515 "data_offset": 0, 00:22:26.515 "data_size": 65536 00:22:26.515 }, 00:22:26.515 { 00:22:26.515 "name": "BaseBdev3", 00:22:26.515 "uuid": "613fe324-4f10-4bbc-be2a-d26c0d6d49bf", 00:22:26.515 "is_configured": true, 00:22:26.515 "data_offset": 0, 00:22:26.515 "data_size": 65536 00:22:26.515 }, 00:22:26.515 { 00:22:26.515 "name": "BaseBdev4", 00:22:26.515 "uuid": "968c0e20-1c07-485a-a1cb-6a6cd483bd29", 00:22:26.515 "is_configured": true, 00:22:26.515 "data_offset": 0, 00:22:26.515 "data_size": 65536 00:22:26.515 } 00:22:26.515 ] 00:22:26.515 }' 00:22:26.515 15:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:26.515 15:16:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.774 15:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:26.774 15:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.033 15:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:27.033 15:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:27.291 [2024-07-23 15:16:22.590220] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:27.291 BaseBdev1 00:22:27.291 15:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:27.291 15:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:27.291 15:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:27.291 15:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:27.291 15:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:27.291 15:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:27.291 15:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:27.549 15:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:27.807 [ 00:22:27.807 { 00:22:27.807 "name": "BaseBdev1", 00:22:27.807 "aliases": [ 00:22:27.807 "2cf457bf-e004-421b-9872-728a60fb458b" 00:22:27.807 ], 00:22:27.807 "product_name": "Malloc disk", 00:22:27.807 "block_size": 512, 00:22:27.807 "num_blocks": 65536, 00:22:27.807 "uuid": "2cf457bf-e004-421b-9872-728a60fb458b", 00:22:27.807 "assigned_rate_limits": { 00:22:27.807 "rw_ios_per_sec": 0, 00:22:27.807 "rw_mbytes_per_sec": 0, 00:22:27.807 "r_mbytes_per_sec": 0, 00:22:27.807 "w_mbytes_per_sec": 0 00:22:27.807 }, 00:22:27.807 "claimed": true, 00:22:27.807 "claim_type": "exclusive_write", 00:22:27.807 "zoned": false, 00:22:27.807 "supported_io_types": { 00:22:27.807 "read": true, 00:22:27.807 "write": true, 00:22:27.807 "unmap": true, 00:22:27.807 "flush": true, 00:22:27.807 "reset": true, 00:22:27.807 "nvme_admin": false, 00:22:27.807 "nvme_io": false, 00:22:27.807 
"nvme_io_md": false, 00:22:27.807 "write_zeroes": true, 00:22:27.807 "zcopy": true, 00:22:27.807 "get_zone_info": false, 00:22:27.807 "zone_management": false, 00:22:27.807 "zone_append": false, 00:22:27.807 "compare": false, 00:22:27.807 "compare_and_write": false, 00:22:27.807 "abort": true, 00:22:27.807 "seek_hole": false, 00:22:27.807 "seek_data": false, 00:22:27.807 "copy": true, 00:22:27.807 "nvme_iov_md": false 00:22:27.807 }, 00:22:27.807 "memory_domains": [ 00:22:27.807 { 00:22:27.807 "dma_device_id": "system", 00:22:27.807 "dma_device_type": 1 00:22:27.807 }, 00:22:27.807 { 00:22:27.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.807 "dma_device_type": 2 00:22:27.807 } 00:22:27.807 ], 00:22:27.807 "driver_specific": {} 00:22:27.807 } 00:22:27.807 ] 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:27.807 "name": "Existed_Raid", 00:22:27.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.807 "strip_size_kb": 0, 00:22:27.807 "state": "configuring", 00:22:27.807 "raid_level": "raid1", 00:22:27.807 "superblock": false, 00:22:27.807 "num_base_bdevs": 4, 00:22:27.807 "num_base_bdevs_discovered": 3, 00:22:27.807 "num_base_bdevs_operational": 4, 00:22:27.807 "base_bdevs_list": [ 00:22:27.807 { 00:22:27.807 "name": "BaseBdev1", 00:22:27.807 "uuid": "2cf457bf-e004-421b-9872-728a60fb458b", 00:22:27.807 "is_configured": true, 00:22:27.807 "data_offset": 0, 00:22:27.807 "data_size": 65536 00:22:27.807 }, 00:22:27.807 { 00:22:27.807 "name": null, 00:22:27.807 "uuid": "fe693153-0225-4081-a4e3-3e6892a8ce6e", 00:22:27.807 "is_configured": false, 00:22:27.807 "data_offset": 0, 00:22:27.807 "data_size": 65536 00:22:27.807 }, 00:22:27.807 { 00:22:27.807 "name": "BaseBdev3", 00:22:27.807 "uuid": "613fe324-4f10-4bbc-be2a-d26c0d6d49bf", 00:22:27.807 "is_configured": true, 00:22:27.807 "data_offset": 0, 00:22:27.807 "data_size": 65536 00:22:27.807 }, 00:22:27.807 { 00:22:27.807 
"name": "BaseBdev4", 00:22:27.807 "uuid": "968c0e20-1c07-485a-a1cb-6a6cd483bd29", 00:22:27.807 "is_configured": true, 00:22:27.807 "data_offset": 0, 00:22:27.807 "data_size": 65536 00:22:27.807 } 00:22:27.807 ] 00:22:27.807 }' 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:27.807 15:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.375 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.375 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:28.375 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:28.375 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:28.633 [2024-07-23 15:16:23.902619] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:28.633 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:28.633 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:28.633 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:28.633 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:28.633 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:28.633 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:28.633 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.633 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.633 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.633 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.633 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.633 15:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.890 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.890 "name": "Existed_Raid", 00:22:28.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.890 "strip_size_kb": 0, 00:22:28.890 "state": "configuring", 00:22:28.890 "raid_level": "raid1", 00:22:28.890 "superblock": false, 00:22:28.890 "num_base_bdevs": 4, 00:22:28.890 "num_base_bdevs_discovered": 2, 00:22:28.890 "num_base_bdevs_operational": 4, 00:22:28.890 "base_bdevs_list": [ 00:22:28.890 { 00:22:28.890 "name": "BaseBdev1", 00:22:28.890 "uuid": "2cf457bf-e004-421b-9872-728a60fb458b", 00:22:28.890 "is_configured": true, 00:22:28.890 "data_offset": 0, 00:22:28.890 "data_size": 65536 00:22:28.890 }, 00:22:28.890 { 00:22:28.890 "name": null, 00:22:28.890 "uuid": "fe693153-0225-4081-a4e3-3e6892a8ce6e", 00:22:28.890 "is_configured": false, 00:22:28.890 "data_offset": 0, 00:22:28.890 "data_size": 65536 
00:22:28.890 }, 00:22:28.890 { 00:22:28.890 "name": null, 00:22:28.890 "uuid": "613fe324-4f10-4bbc-be2a-d26c0d6d49bf", 00:22:28.890 "is_configured": false, 00:22:28.890 "data_offset": 0, 00:22:28.890 "data_size": 65536 00:22:28.890 }, 00:22:28.890 { 00:22:28.890 "name": "BaseBdev4", 00:22:28.890 "uuid": "968c0e20-1c07-485a-a1cb-6a6cd483bd29", 00:22:28.890 "is_configured": true, 00:22:28.890 "data_offset": 0, 00:22:28.890 "data_size": 65536 00:22:28.890 } 00:22:28.890 ] 00:22:28.890 }' 00:22:28.890 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.890 15:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.147 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:29.147 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.433 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:29.433 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:29.433 [2024-07-23 15:16:24.786815] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:29.433 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:29.433 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:29.433 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:29.433 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:29.433 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:29.433 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:29.433 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:29.433 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:29.433 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:29.433 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:29.433 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.433 15:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.692 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:29.692 "name": "Existed_Raid", 00:22:29.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.692 "strip_size_kb": 0, 00:22:29.692 "state": "configuring", 00:22:29.692 "raid_level": "raid1", 00:22:29.692 "superblock": false, 00:22:29.692 "num_base_bdevs": 4, 00:22:29.692 "num_base_bdevs_discovered": 3, 00:22:29.692 "num_base_bdevs_operational": 4, 00:22:29.692 "base_bdevs_list": [ 00:22:29.692 { 00:22:29.692 "name": "BaseBdev1", 00:22:29.692 "uuid": "2cf457bf-e004-421b-9872-728a60fb458b", 00:22:29.692 
"is_configured": true, 00:22:29.692 "data_offset": 0, 00:22:29.692 "data_size": 65536 00:22:29.692 }, 00:22:29.692 { 00:22:29.692 "name": null, 00:22:29.692 "uuid": "fe693153-0225-4081-a4e3-3e6892a8ce6e", 00:22:29.692 "is_configured": false, 00:22:29.692 "data_offset": 0, 00:22:29.692 "data_size": 65536 00:22:29.692 }, 00:22:29.692 { 00:22:29.692 "name": "BaseBdev3", 00:22:29.692 "uuid": "613fe324-4f10-4bbc-be2a-d26c0d6d49bf", 00:22:29.692 "is_configured": true, 00:22:29.692 "data_offset": 0, 00:22:29.692 "data_size": 65536 00:22:29.692 }, 00:22:29.692 { 00:22:29.692 "name": "BaseBdev4", 00:22:29.692 "uuid": "968c0e20-1c07-485a-a1cb-6a6cd483bd29", 00:22:29.692 "is_configured": true, 00:22:29.692 "data_offset": 0, 00:22:29.692 "data_size": 65536 00:22:29.692 } 00:22:29.692 ] 00:22:29.692 }' 00:22:29.692 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:29.692 15:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.953 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.953 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:30.211 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:30.211 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:30.211 [2024-07-23 15:16:25.635102] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:30.470 "name": "Existed_Raid", 00:22:30.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.470 "strip_size_kb": 0, 00:22:30.470 "state": "configuring", 00:22:30.470 "raid_level": "raid1", 00:22:30.470 "superblock": false, 00:22:30.470 
"num_base_bdevs": 4, 00:22:30.470 "num_base_bdevs_discovered": 2, 00:22:30.470 "num_base_bdevs_operational": 4, 00:22:30.470 "base_bdevs_list": [ 00:22:30.470 { 00:22:30.470 "name": null, 00:22:30.470 "uuid": "2cf457bf-e004-421b-9872-728a60fb458b", 00:22:30.470 "is_configured": false, 00:22:30.470 "data_offset": 0, 00:22:30.470 "data_size": 65536 00:22:30.470 }, 00:22:30.470 { 00:22:30.470 "name": null, 00:22:30.470 "uuid": "fe693153-0225-4081-a4e3-3e6892a8ce6e", 00:22:30.470 "is_configured": false, 00:22:30.470 "data_offset": 0, 00:22:30.470 "data_size": 65536 00:22:30.470 }, 00:22:30.470 { 00:22:30.470 "name": "BaseBdev3", 00:22:30.470 "uuid": "613fe324-4f10-4bbc-be2a-d26c0d6d49bf", 00:22:30.470 "is_configured": true, 00:22:30.470 "data_offset": 0, 00:22:30.470 "data_size": 65536 00:22:30.470 }, 00:22:30.470 { 00:22:30.470 "name": "BaseBdev4", 00:22:30.470 "uuid": "968c0e20-1c07-485a-a1cb-6a6cd483bd29", 00:22:30.470 "is_configured": true, 00:22:30.470 "data_offset": 0, 00:22:30.470 "data_size": 65536 00:22:30.470 } 00:22:30.470 ] 00:22:30.470 }' 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.470 15:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.037 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.037 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:31.037 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:31.037 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:31.295 [2024-07-23 15:16:26.567501] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:31.295 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:31.295 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:31.295 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:31.295 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:31.295 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:31.295 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:31.295 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.295 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.295 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.295 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.295 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.295 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.552 15:16:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:31.552 "name": "Existed_Raid", 00:22:31.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.552 "strip_size_kb": 0, 00:22:31.552 "state": "configuring", 00:22:31.552 "raid_level": "raid1", 00:22:31.552 "superblock": false, 00:22:31.552 "num_base_bdevs": 4, 00:22:31.552 "num_base_bdevs_discovered": 3, 00:22:31.552 "num_base_bdevs_operational": 4, 00:22:31.552 "base_bdevs_list": [ 00:22:31.552 { 00:22:31.552 "name": null, 00:22:31.552 "uuid": "2cf457bf-e004-421b-9872-728a60fb458b", 00:22:31.552 "is_configured": false, 00:22:31.552 "data_offset": 0, 00:22:31.552 "data_size": 65536 00:22:31.552 }, 00:22:31.552 { 00:22:31.552 "name": "BaseBdev2", 00:22:31.552 "uuid": "fe693153-0225-4081-a4e3-3e6892a8ce6e", 00:22:31.552 "is_configured": true, 00:22:31.552 "data_offset": 0, 00:22:31.552 "data_size": 65536 00:22:31.552 }, 00:22:31.553 { 00:22:31.553 "name": "BaseBdev3", 00:22:31.553 "uuid": "613fe324-4f10-4bbc-be2a-d26c0d6d49bf", 00:22:31.553 "is_configured": true, 00:22:31.553 "data_offset": 0, 00:22:31.553 "data_size": 65536 00:22:31.553 }, 00:22:31.553 { 00:22:31.553 "name": "BaseBdev4", 00:22:31.553 "uuid": "968c0e20-1c07-485a-a1cb-6a6cd483bd29", 00:22:31.553 "is_configured": true, 00:22:31.553 "data_offset": 0, 00:22:31.553 "data_size": 65536 00:22:31.553 } 00:22:31.553 ] 00:22:31.553 }' 00:22:31.553 15:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:31.553 15:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.810 15:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.810 15:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:32.067 15:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:32.067 15:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:32.067 15:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.324 15:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2cf457bf-e004-421b-9872-728a60fb458b 00:22:32.580 [2024-07-23 15:16:27.895024] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:32.580 [2024-07-23 15:16:27.895078] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:22:32.580 [2024-07-23 15:16:27.895090] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:32.580 [2024-07-23 15:16:27.895168] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002600 00:22:32.580 [2024-07-23 15:16:27.895457] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:22:32.580 [2024-07-23 15:16:27.895470] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008180 00:22:32.580 [2024-07-23 15:16:27.895646] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.580 NewBaseBdev 00:22:32.580 15:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:22:32.580 15:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:32.580 15:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:32.580 15:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:32.580 15:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:32.580 15:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:32.580 15:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:32.837 15:16:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:32.837 [ 00:22:32.837 { 00:22:32.837 "name": "NewBaseBdev", 00:22:32.837 "aliases": [ 00:22:32.837 "2cf457bf-e004-421b-9872-728a60fb458b" 00:22:32.837 ], 00:22:32.837 "product_name": "Malloc disk", 00:22:32.837 "block_size": 512, 00:22:32.837 "num_blocks": 65536, 00:22:32.837 "uuid": "2cf457bf-e004-421b-9872-728a60fb458b", 00:22:32.837 "assigned_rate_limits": { 00:22:32.837 "rw_ios_per_sec": 0, 00:22:32.837 "rw_mbytes_per_sec": 0, 00:22:32.837 "r_mbytes_per_sec": 0, 00:22:32.837 "w_mbytes_per_sec": 0 00:22:32.837 }, 00:22:32.837 "claimed": true, 00:22:32.837 "claim_type": "exclusive_write", 00:22:32.837 "zoned": false, 00:22:32.837 "supported_io_types": { 00:22:32.837 "read": true, 00:22:32.837 "write": true, 00:22:32.837 "unmap": true, 00:22:32.837 "flush": true, 00:22:32.837 "reset": true, 00:22:32.837 "nvme_admin": false, 00:22:32.837 "nvme_io": false, 00:22:32.837 "nvme_io_md": false, 00:22:32.837 "write_zeroes": true, 00:22:32.837 "zcopy": true, 00:22:32.837 "get_zone_info": false, 00:22:32.837 "zone_management": false, 00:22:32.837 "zone_append": false, 00:22:32.837 "compare": false, 00:22:32.837 "compare_and_write": false, 00:22:32.837 "abort": true, 00:22:32.837 "seek_hole": false, 00:22:32.837 "seek_data": false, 00:22:32.837 "copy": true, 00:22:32.837 "nvme_iov_md": false 00:22:32.837 }, 00:22:32.837 "memory_domains": [ 00:22:32.837 { 00:22:32.837 "dma_device_id": "system", 00:22:32.837 "dma_device_type": 1 00:22:32.837 }, 00:22:32.837 { 00:22:32.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.837 "dma_device_type": 2 00:22:32.837 } 00:22:32.837 ], 00:22:32.837 "driver_specific": {} 00:22:32.837 } 00:22:32.837 ] 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.094 "name": "Existed_Raid", 00:22:33.094 "uuid": "74d44f0e-41d6-47b9-9bfc-c7922a29dcdc", 00:22:33.094 "strip_size_kb": 0, 00:22:33.094 "state": "online", 00:22:33.094 "raid_level": "raid1", 00:22:33.094 "superblock": false, 00:22:33.094 "num_base_bdevs": 4, 00:22:33.094 "num_base_bdevs_discovered": 4, 00:22:33.094 "num_base_bdevs_operational": 4, 00:22:33.094 "base_bdevs_list": [ 00:22:33.094 { 00:22:33.094 "name": "NewBaseBdev", 00:22:33.094 "uuid": "2cf457bf-e004-421b-9872-728a60fb458b", 00:22:33.094 "is_configured": true, 00:22:33.094 "data_offset": 0, 00:22:33.094 "data_size": 65536 00:22:33.094 }, 00:22:33.094 { 00:22:33.094 "name": "BaseBdev2", 00:22:33.094 "uuid": "fe693153-0225-4081-a4e3-3e6892a8ce6e", 00:22:33.094 "is_configured": true, 00:22:33.094 "data_offset": 0, 00:22:33.094 "data_size": 65536 00:22:33.094 }, 00:22:33.094 { 00:22:33.094 "name": "BaseBdev3", 00:22:33.094 "uuid": "613fe324-4f10-4bbc-be2a-d26c0d6d49bf", 00:22:33.094 "is_configured": true, 00:22:33.094 "data_offset": 0, 00:22:33.094 "data_size": 65536 00:22:33.094 }, 00:22:33.094 { 00:22:33.094 "name": "BaseBdev4", 00:22:33.094 "uuid": "968c0e20-1c07-485a-a1cb-6a6cd483bd29", 00:22:33.094 "is_configured": true, 00:22:33.094 "data_offset": 0, 00:22:33.094 "data_size": 65536 00:22:33.094 } 00:22:33.094 ] 00:22:33.094 }' 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.094 15:16:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.660 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:33.660 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:33.660 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:33.660 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:33.660 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:33.660 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:33.660 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:33.660 15:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:33.660 [2024-07-23 15:16:28.983703] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:33.660 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:33.660 "name": "Existed_Raid", 00:22:33.660 "aliases": [ 00:22:33.660 
"74d44f0e-41d6-47b9-9bfc-c7922a29dcdc" 00:22:33.660 ], 00:22:33.660 "product_name": "Raid Volume", 00:22:33.660 "block_size": 512, 00:22:33.660 "num_blocks": 65536, 00:22:33.660 "uuid": "74d44f0e-41d6-47b9-9bfc-c7922a29dcdc", 00:22:33.660 "assigned_rate_limits": { 00:22:33.660 "rw_ios_per_sec": 0, 00:22:33.660 "rw_mbytes_per_sec": 0, 00:22:33.660 "r_mbytes_per_sec": 0, 00:22:33.660 "w_mbytes_per_sec": 0 00:22:33.660 }, 00:22:33.660 "claimed": false, 00:22:33.660 "zoned": false, 00:22:33.660 "supported_io_types": { 00:22:33.660 "read": true, 00:22:33.660 "write": true, 00:22:33.660 "unmap": false, 00:22:33.660 "flush": false, 00:22:33.660 "reset": true, 00:22:33.660 "nvme_admin": false, 00:22:33.660 "nvme_io": false, 00:22:33.660 "nvme_io_md": false, 00:22:33.660 "write_zeroes": true, 00:22:33.660 "zcopy": false, 00:22:33.660 "get_zone_info": false, 00:22:33.660 "zone_management": false, 00:22:33.660 "zone_append": false, 00:22:33.660 "compare": false, 00:22:33.660 "compare_and_write": false, 00:22:33.660 "abort": false, 00:22:33.660 "seek_hole": false, 00:22:33.660 "seek_data": false, 00:22:33.660 "copy": false, 00:22:33.660 "nvme_iov_md": false 00:22:33.660 }, 00:22:33.660 "memory_domains": [ 00:22:33.660 { 00:22:33.660 "dma_device_id": "system", 00:22:33.660 "dma_device_type": 1 00:22:33.660 }, 00:22:33.660 { 00:22:33.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.660 "dma_device_type": 2 00:22:33.660 }, 00:22:33.660 { 00:22:33.660 "dma_device_id": "system", 00:22:33.660 "dma_device_type": 1 00:22:33.660 }, 00:22:33.660 { 00:22:33.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.660 "dma_device_type": 2 00:22:33.660 }, 00:22:33.660 { 00:22:33.660 "dma_device_id": "system", 00:22:33.660 "dma_device_type": 1 00:22:33.660 }, 00:22:33.660 { 00:22:33.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.660 "dma_device_type": 2 00:22:33.660 }, 00:22:33.660 { 00:22:33.660 "dma_device_id": "system", 00:22:33.660 "dma_device_type": 1 00:22:33.660 }, 00:22:33.660 { 00:22:33.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.660 "dma_device_type": 2 00:22:33.660 } 00:22:33.660 ], 00:22:33.660 "driver_specific": { 00:22:33.660 "raid": { 00:22:33.660 "uuid": "74d44f0e-41d6-47b9-9bfc-c7922a29dcdc", 00:22:33.660 "strip_size_kb": 0, 00:22:33.660 "state": "online", 00:22:33.660 "raid_level": "raid1", 00:22:33.660 "superblock": false, 00:22:33.660 "num_base_bdevs": 4, 00:22:33.660 "num_base_bdevs_discovered": 4, 00:22:33.660 "num_base_bdevs_operational": 4, 00:22:33.660 "base_bdevs_list": [ 00:22:33.660 { 00:22:33.660 "name": "NewBaseBdev", 00:22:33.660 "uuid": "2cf457bf-e004-421b-9872-728a60fb458b", 00:22:33.660 "is_configured": true, 00:22:33.660 "data_offset": 0, 00:22:33.660 "data_size": 65536 00:22:33.660 }, 00:22:33.660 { 00:22:33.660 "name": "BaseBdev2", 00:22:33.660 "uuid": "fe693153-0225-4081-a4e3-3e6892a8ce6e", 00:22:33.660 "is_configured": true, 00:22:33.660 "data_offset": 0, 00:22:33.660 "data_size": 65536 00:22:33.660 }, 00:22:33.660 { 00:22:33.661 "name": "BaseBdev3", 00:22:33.661 "uuid": "613fe324-4f10-4bbc-be2a-d26c0d6d49bf", 00:22:33.661 "is_configured": true, 00:22:33.661 "data_offset": 0, 00:22:33.661 "data_size": 65536 00:22:33.661 }, 00:22:33.661 { 00:22:33.661 "name": "BaseBdev4", 00:22:33.661 "uuid": "968c0e20-1c07-485a-a1cb-6a6cd483bd29", 00:22:33.661 "is_configured": true, 00:22:33.661 "data_offset": 0, 00:22:33.661 "data_size": 65536 00:22:33.661 } 00:22:33.661 ] 00:22:33.661 } 00:22:33.661 } 00:22:33.661 }' 00:22:33.661 15:16:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:33.661 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:33.661 BaseBdev2 00:22:33.661 BaseBdev3 00:22:33.661 BaseBdev4' 00:22:33.661 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:33.661 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:33.661 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:33.919 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:33.919 "name": "NewBaseBdev", 00:22:33.919 "aliases": [ 00:22:33.919 "2cf457bf-e004-421b-9872-728a60fb458b" 00:22:33.919 ], 00:22:33.919 "product_name": "Malloc disk", 00:22:33.919 "block_size": 512, 00:22:33.919 "num_blocks": 65536, 00:22:33.919 "uuid": "2cf457bf-e004-421b-9872-728a60fb458b", 00:22:33.919 "assigned_rate_limits": { 00:22:33.919 "rw_ios_per_sec": 0, 00:22:33.919 "rw_mbytes_per_sec": 0, 00:22:33.919 "r_mbytes_per_sec": 0, 00:22:33.919 "w_mbytes_per_sec": 0 00:22:33.919 }, 00:22:33.919 "claimed": true, 00:22:33.919 "claim_type": "exclusive_write", 00:22:33.919 "zoned": false, 00:22:33.919 "supported_io_types": { 00:22:33.919 "read": true, 00:22:33.919 "write": true, 00:22:33.919 "unmap": true, 00:22:33.919 "flush": true, 00:22:33.919 "reset": true, 00:22:33.919 "nvme_admin": false, 00:22:33.919 "nvme_io": false, 00:22:33.919 "nvme_io_md": false, 00:22:33.919 "write_zeroes": true, 00:22:33.919 "zcopy": true, 00:22:33.919 "get_zone_info": false, 00:22:33.919 "zone_management": false, 00:22:33.919 "zone_append": false, 00:22:33.919 "compare": false, 00:22:33.919 "compare_and_write": false, 00:22:33.919 "abort": true, 00:22:33.919 "seek_hole": false, 00:22:33.919 "seek_data": false, 00:22:33.919 "copy": true, 00:22:33.919 "nvme_iov_md": false 00:22:33.919 }, 00:22:33.919 "memory_domains": [ 00:22:33.919 { 00:22:33.919 "dma_device_id": "system", 00:22:33.919 "dma_device_type": 1 00:22:33.919 }, 00:22:33.919 { 00:22:33.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.919 "dma_device_type": 2 00:22:33.919 } 00:22:33.919 ], 00:22:33.919 "driver_specific": {} 00:22:33.919 }' 00:22:33.919 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:33.919 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:33.919 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:33.920 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:33.920 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:33.920 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:33.920 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:33.920 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.178 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:34.178 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.178 15:16:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.178 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:34.178 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:34.178 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:34.178 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:34.436 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:34.436 "name": "BaseBdev2", 00:22:34.436 "aliases": [ 00:22:34.436 "fe693153-0225-4081-a4e3-3e6892a8ce6e" 00:22:34.436 ], 00:22:34.436 "product_name": "Malloc disk", 00:22:34.436 "block_size": 512, 00:22:34.436 "num_blocks": 65536, 00:22:34.436 "uuid": "fe693153-0225-4081-a4e3-3e6892a8ce6e", 00:22:34.436 "assigned_rate_limits": { 00:22:34.436 "rw_ios_per_sec": 0, 00:22:34.436 "rw_mbytes_per_sec": 0, 00:22:34.436 "r_mbytes_per_sec": 0, 00:22:34.436 "w_mbytes_per_sec": 0 00:22:34.436 }, 00:22:34.436 "claimed": true, 00:22:34.436 "claim_type": "exclusive_write", 00:22:34.436 "zoned": false, 00:22:34.436 "supported_io_types": { 00:22:34.436 "read": true, 00:22:34.436 "write": true, 00:22:34.436 "unmap": true, 00:22:34.436 "flush": true, 00:22:34.436 "reset": true, 00:22:34.436 "nvme_admin": false, 00:22:34.436 "nvme_io": false, 00:22:34.436 "nvme_io_md": false, 00:22:34.436 "write_zeroes": true, 00:22:34.436 "zcopy": true, 00:22:34.436 "get_zone_info": false, 00:22:34.436 "zone_management": false, 00:22:34.436 "zone_append": false, 00:22:34.436 "compare": false, 00:22:34.436 "compare_and_write": false, 00:22:34.436 "abort": true, 00:22:34.436 "seek_hole": false, 00:22:34.436 "seek_data": false, 00:22:34.436 "copy": true, 00:22:34.436 "nvme_iov_md": false 00:22:34.436 }, 00:22:34.436 "memory_domains": [ 00:22:34.436 { 00:22:34.436 "dma_device_id": "system", 00:22:34.436 "dma_device_type": 1 00:22:34.436 }, 00:22:34.437 { 00:22:34.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.437 "dma_device_type": 2 00:22:34.437 } 00:22:34.437 ], 00:22:34.437 "driver_specific": {} 00:22:34.437 }' 00:22:34.437 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.437 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.437 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:34.437 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.437 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.437 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:34.437 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.437 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.437 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:34.437 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.437 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.437 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:34.437 15:16:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:34.437 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:34.437 15:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:34.695 "name": "BaseBdev3", 00:22:34.695 "aliases": [ 00:22:34.695 "613fe324-4f10-4bbc-be2a-d26c0d6d49bf" 00:22:34.695 ], 00:22:34.695 "product_name": "Malloc disk", 00:22:34.695 "block_size": 512, 00:22:34.695 "num_blocks": 65536, 00:22:34.695 "uuid": "613fe324-4f10-4bbc-be2a-d26c0d6d49bf", 00:22:34.695 "assigned_rate_limits": { 00:22:34.695 "rw_ios_per_sec": 0, 00:22:34.695 "rw_mbytes_per_sec": 0, 00:22:34.695 "r_mbytes_per_sec": 0, 00:22:34.695 "w_mbytes_per_sec": 0 00:22:34.695 }, 00:22:34.695 "claimed": true, 00:22:34.695 "claim_type": "exclusive_write", 00:22:34.695 "zoned": false, 00:22:34.695 "supported_io_types": { 00:22:34.695 "read": true, 00:22:34.695 "write": true, 00:22:34.695 "unmap": true, 00:22:34.695 "flush": true, 00:22:34.695 "reset": true, 00:22:34.695 "nvme_admin": false, 00:22:34.695 "nvme_io": false, 00:22:34.695 "nvme_io_md": false, 00:22:34.695 "write_zeroes": true, 00:22:34.695 "zcopy": true, 00:22:34.695 "get_zone_info": false, 00:22:34.695 "zone_management": false, 00:22:34.695 "zone_append": false, 00:22:34.695 "compare": false, 00:22:34.695 "compare_and_write": false, 00:22:34.695 "abort": true, 00:22:34.695 "seek_hole": false, 00:22:34.695 "seek_data": false, 00:22:34.695 "copy": true, 00:22:34.695 "nvme_iov_md": false 00:22:34.695 }, 00:22:34.695 "memory_domains": [ 00:22:34.695 { 00:22:34.695 "dma_device_id": "system", 00:22:34.695 "dma_device_type": 1 00:22:34.695 }, 00:22:34.695 { 00:22:34.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.695 "dma_device_type": 2 00:22:34.695 } 00:22:34.695 ], 00:22:34.695 "driver_specific": {} 00:22:34.695 }' 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:34.695 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:34.953 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:34.953 "name": "BaseBdev4", 00:22:34.953 "aliases": [ 00:22:34.953 "968c0e20-1c07-485a-a1cb-6a6cd483bd29" 00:22:34.953 ], 00:22:34.953 "product_name": "Malloc disk", 00:22:34.953 "block_size": 512, 00:22:34.953 "num_blocks": 65536, 00:22:34.953 "uuid": "968c0e20-1c07-485a-a1cb-6a6cd483bd29", 00:22:34.953 "assigned_rate_limits": { 00:22:34.953 "rw_ios_per_sec": 0, 00:22:34.953 "rw_mbytes_per_sec": 0, 00:22:34.953 "r_mbytes_per_sec": 0, 00:22:34.953 "w_mbytes_per_sec": 0 00:22:34.953 }, 00:22:34.953 "claimed": true, 00:22:34.953 "claim_type": "exclusive_write", 00:22:34.953 "zoned": false, 00:22:34.953 "supported_io_types": { 00:22:34.953 "read": true, 00:22:34.953 "write": true, 00:22:34.953 "unmap": true, 00:22:34.953 "flush": true, 00:22:34.953 "reset": true, 00:22:34.953 "nvme_admin": false, 00:22:34.953 "nvme_io": false, 00:22:34.953 "nvme_io_md": false, 00:22:34.953 "write_zeroes": true, 00:22:34.953 "zcopy": true, 00:22:34.953 "get_zone_info": false, 00:22:34.953 "zone_management": false, 00:22:34.953 "zone_append": false, 00:22:34.953 "compare": false, 00:22:34.953 "compare_and_write": false, 00:22:34.953 "abort": true, 00:22:34.953 "seek_hole": false, 00:22:34.953 "seek_data": false, 00:22:34.953 "copy": true, 00:22:34.953 "nvme_iov_md": false 00:22:34.953 }, 00:22:34.953 "memory_domains": [ 00:22:34.953 { 00:22:34.953 "dma_device_id": "system", 00:22:34.953 "dma_device_type": 1 00:22:34.953 }, 00:22:34.953 { 00:22:34.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.953 "dma_device_type": 2 00:22:34.953 } 00:22:34.953 ], 00:22:34.953 "driver_specific": {} 00:22:34.953 }' 00:22:34.953 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.953 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.953 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:34.953 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.953 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.953 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:34.953 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.953 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.953 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:34.953 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.212 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.212 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.212 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:35.212 [2024-07-23 15:16:30.619728] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:35.212 [2024-07-23 15:16:30.619767] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:22:35.212 [2024-07-23 15:16:30.619879] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:35.212 [2024-07-23 15:16:30.620152] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:35.212 [2024-07-23 15:16:30.620170] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name Existed_Raid, state offline 00:22:35.470 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 104348 00:22:35.470 15:16:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 104348 ']' 00:22:35.470 15:16:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 104348 00:22:35.470 15:16:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:22:35.470 15:16:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:35.470 15:16:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104348 00:22:35.470 killing process with pid 104348 00:22:35.470 15:16:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:35.470 15:16:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:35.470 15:16:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104348' 00:22:35.470 15:16:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 104348 00:22:35.470 15:16:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 104348 00:22:35.470 [2024-07-23 15:16:30.687933] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:35.470 [2024-07-23 15:16:30.734767] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:35.728 15:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:22:35.728 00:22:35.728 real 0m23.684s 00:22:35.728 user 0m41.397s 00:22:35.728 sys 0m5.161s 00:22:35.728 15:16:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:35.728 15:16:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.728 ************************************ 00:22:35.728 END TEST raid_state_function_test 00:22:35.728 ************************************ 00:22:35.728 15:16:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:35.728 15:16:31 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:22:35.728 15:16:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:35.728 15:16:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:35.728 15:16:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:35.728 ************************************ 00:22:35.728 START TEST raid_state_function_test_sb 00:22:35.728 ************************************ 00:22:35.728 15:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:22:35.728 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:35.728 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:35.728 15:16:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:35.729 Process raid pid: 105303 00:22:35.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
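(Editor's annotation, not part of the captured output.) The raid_state_function_test_sb run being parameterized above reuses the verify_raid_bdev_state helper exercised throughout the first test: each assertion compares an expected state against the JSON that bdev_raid_get_bdevs returns over the run's dedicated RPC socket. A minimal sketch of that check, built only from the rpc.py and jq invocations that appear in this log (the socket and repo paths are those of this run; the field selection is an illustrative summary, not the helper's exact code):

  # Query Existed_Raid the way verify_raid_bdev_state does and show the
  # fields the test asserts on: state, raid level and base-bdev counts.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  raid_bdev_info=$("$rpc_py" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  echo "$raid_bdev_info" | jq '{state, raid_level, num_base_bdevs_discovered, num_base_bdevs_operational}'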
00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=105303 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 105303' 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 105303 /var/tmp/spdk-raid.sock 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 105303 ']' 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.729 15:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.729 [2024-07-23 15:16:31.123901] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
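(Editor's annotation, not part of the captured output.) The superblock variant launched here differs from the previous run mainly in passing -s to bdev_raid_create (superblock_create_arg=-s above); consistent with that, the base bdevs later in this log report data_offset 2048 and data_size 63488 rather than 0 and 65536, since the superblock reserves space at the start of each base bdev. Below is a hedged sketch of the setup, reusing the bdev_svc command line and RPC calls shown in the log; the socket-polling loop is a simplified stand-in for the test's waitforlisten helper, and the ordering is condensed relative to the test's own step-by-step flow:

  # Start the standalone bdev service on the test's RPC socket, wait for the
  # socket to appear, then create malloc base bdevs and a RAID1 volume with
  # on-disk superblocks (-s).
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  until [ -S "$sock" ]; do sleep 0.1; done   # simplified stand-in for waitforlisten
  for i in 1 2 3 4; do
      "$rpc_py" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev$i"
  done
  "$rpc_py" -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid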
00:22:35.729 [2024-07-23 15:16:31.124087] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.987 [2024-07-23 15:16:31.276647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.987 [2024-07-23 15:16:31.323104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.987 [2024-07-23 15:16:31.367400] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:36.560 15:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.560 15:16:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:22:36.560 15:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:36.827 [2024-07-23 15:16:32.177040] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:36.827 [2024-07-23 15:16:32.177099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:36.827 [2024-07-23 15:16:32.177111] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:36.827 [2024-07-23 15:16:32.177124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:36.827 [2024-07-23 15:16:32.177136] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:36.827 [2024-07-23 15:16:32.177149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:36.827 [2024-07-23 15:16:32.177156] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:36.827 [2024-07-23 15:16:32.177173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:36.827 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:36.827 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:36.827 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:36.827 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:36.827 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:36.827 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:36.827 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.827 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.827 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.827 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.827 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.827 15:16:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.085 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:37.085 "name": "Existed_Raid", 00:22:37.085 "uuid": "c8fa84cb-eb1d-4b03-b366-6dff2babeb45", 00:22:37.085 "strip_size_kb": 0, 00:22:37.085 "state": "configuring", 00:22:37.085 "raid_level": "raid1", 00:22:37.085 "superblock": true, 00:22:37.085 "num_base_bdevs": 4, 00:22:37.085 "num_base_bdevs_discovered": 0, 00:22:37.085 "num_base_bdevs_operational": 4, 00:22:37.085 "base_bdevs_list": [ 00:22:37.085 { 00:22:37.085 "name": "BaseBdev1", 00:22:37.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.085 "is_configured": false, 00:22:37.085 "data_offset": 0, 00:22:37.085 "data_size": 0 00:22:37.085 }, 00:22:37.085 { 00:22:37.085 "name": "BaseBdev2", 00:22:37.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.085 "is_configured": false, 00:22:37.085 "data_offset": 0, 00:22:37.085 "data_size": 0 00:22:37.085 }, 00:22:37.085 { 00:22:37.085 "name": "BaseBdev3", 00:22:37.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.085 "is_configured": false, 00:22:37.085 "data_offset": 0, 00:22:37.085 "data_size": 0 00:22:37.085 }, 00:22:37.085 { 00:22:37.085 "name": "BaseBdev4", 00:22:37.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.085 "is_configured": false, 00:22:37.085 "data_offset": 0, 00:22:37.085 "data_size": 0 00:22:37.085 } 00:22:37.085 ] 00:22:37.085 }' 00:22:37.085 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:37.085 15:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.343 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:37.601 [2024-07-23 15:16:32.853060] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:37.601 [2024-07-23 15:16:32.853117] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:22:37.601 15:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:37.859 [2024-07-23 15:16:33.113152] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:37.859 [2024-07-23 15:16:33.113221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:37.859 [2024-07-23 15:16:33.113234] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:37.859 [2024-07-23 15:16:33.113247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:37.859 [2024-07-23 15:16:33.113255] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:37.859 [2024-07-23 15:16:33.113267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:37.859 [2024-07-23 15:16:33.113275] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:37.859 [2024-07-23 15:16:33.113287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:37.859 15:16:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:38.116 [2024-07-23 15:16:33.378682] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:38.116 BaseBdev1 00:22:38.116 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:38.116 15:16:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:38.116 15:16:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:38.116 15:16:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:38.116 15:16:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:38.116 15:16:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:38.116 15:16:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:38.374 15:16:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:38.374 [ 00:22:38.374 { 00:22:38.374 "name": "BaseBdev1", 00:22:38.374 "aliases": [ 00:22:38.374 "8d7c24ca-bbb3-45bb-bbac-0d153474d4a8" 00:22:38.374 ], 00:22:38.374 "product_name": "Malloc disk", 00:22:38.374 "block_size": 512, 00:22:38.374 "num_blocks": 65536, 00:22:38.374 "uuid": "8d7c24ca-bbb3-45bb-bbac-0d153474d4a8", 00:22:38.374 "assigned_rate_limits": { 00:22:38.374 "rw_ios_per_sec": 0, 00:22:38.374 "rw_mbytes_per_sec": 0, 00:22:38.374 "r_mbytes_per_sec": 0, 00:22:38.374 "w_mbytes_per_sec": 0 00:22:38.374 }, 00:22:38.374 "claimed": true, 00:22:38.374 "claim_type": "exclusive_write", 00:22:38.374 "zoned": false, 00:22:38.374 "supported_io_types": { 00:22:38.374 "read": true, 00:22:38.374 "write": true, 00:22:38.374 "unmap": true, 00:22:38.374 "flush": true, 00:22:38.374 "reset": true, 00:22:38.374 "nvme_admin": false, 00:22:38.375 "nvme_io": false, 00:22:38.375 "nvme_io_md": false, 00:22:38.375 "write_zeroes": true, 00:22:38.375 "zcopy": true, 00:22:38.375 "get_zone_info": false, 00:22:38.375 "zone_management": false, 00:22:38.375 "zone_append": false, 00:22:38.375 "compare": false, 00:22:38.375 "compare_and_write": false, 00:22:38.375 "abort": true, 00:22:38.375 "seek_hole": false, 00:22:38.375 "seek_data": false, 00:22:38.375 "copy": true, 00:22:38.375 "nvme_iov_md": false 00:22:38.375 }, 00:22:38.375 "memory_domains": [ 00:22:38.375 { 00:22:38.375 "dma_device_id": "system", 00:22:38.375 "dma_device_type": 1 00:22:38.375 }, 00:22:38.375 { 00:22:38.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.375 "dma_device_type": 2 00:22:38.375 } 00:22:38.375 ], 00:22:38.375 "driver_specific": {} 00:22:38.375 } 00:22:38.375 ] 00:22:38.375 15:16:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:38.375 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:38.375 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:38.375 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:22:38.375 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:38.375 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:38.375 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:38.375 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:38.375 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:38.375 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:38.375 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:38.375 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.375 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.633 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:38.633 "name": "Existed_Raid", 00:22:38.633 "uuid": "de96140e-0d5f-42bc-b1ac-843ef04e5897", 00:22:38.633 "strip_size_kb": 0, 00:22:38.633 "state": "configuring", 00:22:38.633 "raid_level": "raid1", 00:22:38.633 "superblock": true, 00:22:38.633 "num_base_bdevs": 4, 00:22:38.633 "num_base_bdevs_discovered": 1, 00:22:38.633 "num_base_bdevs_operational": 4, 00:22:38.634 "base_bdevs_list": [ 00:22:38.634 { 00:22:38.634 "name": "BaseBdev1", 00:22:38.634 "uuid": "8d7c24ca-bbb3-45bb-bbac-0d153474d4a8", 00:22:38.634 "is_configured": true, 00:22:38.634 "data_offset": 2048, 00:22:38.634 "data_size": 63488 00:22:38.634 }, 00:22:38.634 { 00:22:38.634 "name": "BaseBdev2", 00:22:38.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.634 "is_configured": false, 00:22:38.634 "data_offset": 0, 00:22:38.634 "data_size": 0 00:22:38.634 }, 00:22:38.634 { 00:22:38.634 "name": "BaseBdev3", 00:22:38.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.634 "is_configured": false, 00:22:38.634 "data_offset": 0, 00:22:38.634 "data_size": 0 00:22:38.634 }, 00:22:38.634 { 00:22:38.634 "name": "BaseBdev4", 00:22:38.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.634 "is_configured": false, 00:22:38.634 "data_offset": 0, 00:22:38.634 "data_size": 0 00:22:38.634 } 00:22:38.634 ] 00:22:38.634 }' 00:22:38.634 15:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:38.634 15:16:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.892 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:39.150 [2024-07-23 15:16:34.399001] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:39.150 [2024-07-23 15:16:34.399075] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:22:39.150 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:39.408 [2024-07-23 15:16:34.583113] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:39.408 [2024-07-23 15:16:34.585369] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:39.408 [2024-07-23 15:16:34.585429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:39.408 [2024-07-23 15:16:34.585439] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:39.408 [2024-07-23 15:16:34.585456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:39.408 [2024-07-23 15:16:34.585464] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:39.408 [2024-07-23 15:16:34.585477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:39.408 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:39.408 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:39.408 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:39.408 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:39.408 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:39.408 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:39.408 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:39.408 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:39.408 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:39.408 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:39.408 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:39.408 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:39.408 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.408 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.665 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:39.665 "name": "Existed_Raid", 00:22:39.665 "uuid": "c1897347-8bc6-466c-8593-591abdf5af87", 00:22:39.665 "strip_size_kb": 0, 00:22:39.665 "state": "configuring", 00:22:39.665 "raid_level": "raid1", 00:22:39.665 "superblock": true, 00:22:39.665 "num_base_bdevs": 4, 00:22:39.665 "num_base_bdevs_discovered": 1, 00:22:39.665 "num_base_bdevs_operational": 4, 00:22:39.665 "base_bdevs_list": [ 00:22:39.665 { 00:22:39.665 "name": "BaseBdev1", 00:22:39.665 "uuid": "8d7c24ca-bbb3-45bb-bbac-0d153474d4a8", 00:22:39.665 "is_configured": true, 00:22:39.665 "data_offset": 2048, 00:22:39.665 "data_size": 63488 00:22:39.665 }, 00:22:39.665 { 00:22:39.665 "name": "BaseBdev2", 00:22:39.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.665 "is_configured": false, 00:22:39.665 "data_offset": 0, 00:22:39.665 "data_size": 0 00:22:39.665 }, 
00:22:39.665 { 00:22:39.665 "name": "BaseBdev3", 00:22:39.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.665 "is_configured": false, 00:22:39.665 "data_offset": 0, 00:22:39.665 "data_size": 0 00:22:39.665 }, 00:22:39.665 { 00:22:39.665 "name": "BaseBdev4", 00:22:39.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.665 "is_configured": false, 00:22:39.665 "data_offset": 0, 00:22:39.665 "data_size": 0 00:22:39.665 } 00:22:39.665 ] 00:22:39.666 }' 00:22:39.666 15:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:39.666 15:16:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.924 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:39.924 [2024-07-23 15:16:35.297459] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:39.924 BaseBdev2 00:22:39.924 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:39.924 15:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:39.924 15:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:39.924 15:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:39.924 15:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:39.924 15:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:39.924 15:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:40.181 15:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:40.439 [ 00:22:40.439 { 00:22:40.439 "name": "BaseBdev2", 00:22:40.439 "aliases": [ 00:22:40.439 "36eca03a-991e-48ae-bfb0-c10fe0a3becc" 00:22:40.439 ], 00:22:40.439 "product_name": "Malloc disk", 00:22:40.439 "block_size": 512, 00:22:40.439 "num_blocks": 65536, 00:22:40.439 "uuid": "36eca03a-991e-48ae-bfb0-c10fe0a3becc", 00:22:40.439 "assigned_rate_limits": { 00:22:40.439 "rw_ios_per_sec": 0, 00:22:40.439 "rw_mbytes_per_sec": 0, 00:22:40.439 "r_mbytes_per_sec": 0, 00:22:40.439 "w_mbytes_per_sec": 0 00:22:40.439 }, 00:22:40.439 "claimed": true, 00:22:40.439 "claim_type": "exclusive_write", 00:22:40.439 "zoned": false, 00:22:40.439 "supported_io_types": { 00:22:40.439 "read": true, 00:22:40.439 "write": true, 00:22:40.439 "unmap": true, 00:22:40.439 "flush": true, 00:22:40.439 "reset": true, 00:22:40.439 "nvme_admin": false, 00:22:40.439 "nvme_io": false, 00:22:40.439 "nvme_io_md": false, 00:22:40.439 "write_zeroes": true, 00:22:40.439 "zcopy": true, 00:22:40.439 "get_zone_info": false, 00:22:40.439 "zone_management": false, 00:22:40.439 "zone_append": false, 00:22:40.439 "compare": false, 00:22:40.439 "compare_and_write": false, 00:22:40.439 "abort": true, 00:22:40.439 "seek_hole": false, 00:22:40.439 "seek_data": false, 00:22:40.439 "copy": true, 00:22:40.439 "nvme_iov_md": false 00:22:40.439 }, 00:22:40.439 "memory_domains": [ 00:22:40.439 { 00:22:40.439 "dma_device_id": "system", 00:22:40.439 
"dma_device_type": 1 00:22:40.439 }, 00:22:40.439 { 00:22:40.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.439 "dma_device_type": 2 00:22:40.439 } 00:22:40.439 ], 00:22:40.439 "driver_specific": {} 00:22:40.439 } 00:22:40.439 ] 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.439 15:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:40.439 "name": "Existed_Raid", 00:22:40.439 "uuid": "c1897347-8bc6-466c-8593-591abdf5af87", 00:22:40.439 "strip_size_kb": 0, 00:22:40.439 "state": "configuring", 00:22:40.439 "raid_level": "raid1", 00:22:40.439 "superblock": true, 00:22:40.439 "num_base_bdevs": 4, 00:22:40.439 "num_base_bdevs_discovered": 2, 00:22:40.439 "num_base_bdevs_operational": 4, 00:22:40.439 "base_bdevs_list": [ 00:22:40.439 { 00:22:40.439 "name": "BaseBdev1", 00:22:40.439 "uuid": "8d7c24ca-bbb3-45bb-bbac-0d153474d4a8", 00:22:40.439 "is_configured": true, 00:22:40.439 "data_offset": 2048, 00:22:40.439 "data_size": 63488 00:22:40.439 }, 00:22:40.439 { 00:22:40.440 "name": "BaseBdev2", 00:22:40.440 "uuid": "36eca03a-991e-48ae-bfb0-c10fe0a3becc", 00:22:40.440 "is_configured": true, 00:22:40.440 "data_offset": 2048, 00:22:40.440 "data_size": 63488 00:22:40.440 }, 00:22:40.440 { 00:22:40.440 "name": "BaseBdev3", 00:22:40.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.440 "is_configured": false, 00:22:40.440 "data_offset": 0, 00:22:40.440 "data_size": 0 00:22:40.440 }, 00:22:40.440 { 00:22:40.440 "name": "BaseBdev4", 00:22:40.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.440 "is_configured": false, 00:22:40.440 "data_offset": 0, 00:22:40.440 "data_size": 0 00:22:40.440 } 00:22:40.440 ] 00:22:40.440 }' 00:22:40.440 15:16:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.440 15:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.698 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:40.956 [2024-07-23 15:16:36.293018] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:40.956 BaseBdev3 00:22:40.956 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:40.956 15:16:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:40.956 15:16:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:40.956 15:16:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:40.956 15:16:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:40.956 15:16:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:40.956 15:16:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:41.214 15:16:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:41.472 [ 00:22:41.472 { 00:22:41.472 "name": "BaseBdev3", 00:22:41.472 "aliases": [ 00:22:41.472 "02fe5d1c-9857-42e2-9803-15f452fe5dde" 00:22:41.472 ], 00:22:41.472 "product_name": "Malloc disk", 00:22:41.472 "block_size": 512, 00:22:41.472 "num_blocks": 65536, 00:22:41.472 "uuid": "02fe5d1c-9857-42e2-9803-15f452fe5dde", 00:22:41.472 "assigned_rate_limits": { 00:22:41.472 "rw_ios_per_sec": 0, 00:22:41.472 "rw_mbytes_per_sec": 0, 00:22:41.472 "r_mbytes_per_sec": 0, 00:22:41.472 "w_mbytes_per_sec": 0 00:22:41.472 }, 00:22:41.472 "claimed": true, 00:22:41.472 "claim_type": "exclusive_write", 00:22:41.472 "zoned": false, 00:22:41.472 "supported_io_types": { 00:22:41.472 "read": true, 00:22:41.472 "write": true, 00:22:41.472 "unmap": true, 00:22:41.472 "flush": true, 00:22:41.472 "reset": true, 00:22:41.472 "nvme_admin": false, 00:22:41.472 "nvme_io": false, 00:22:41.472 "nvme_io_md": false, 00:22:41.472 "write_zeroes": true, 00:22:41.472 "zcopy": true, 00:22:41.472 "get_zone_info": false, 00:22:41.472 "zone_management": false, 00:22:41.472 "zone_append": false, 00:22:41.472 "compare": false, 00:22:41.472 "compare_and_write": false, 00:22:41.472 "abort": true, 00:22:41.472 "seek_hole": false, 00:22:41.472 "seek_data": false, 00:22:41.472 "copy": true, 00:22:41.472 "nvme_iov_md": false 00:22:41.472 }, 00:22:41.472 "memory_domains": [ 00:22:41.472 { 00:22:41.472 "dma_device_id": "system", 00:22:41.472 "dma_device_type": 1 00:22:41.472 }, 00:22:41.472 { 00:22:41.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.472 "dma_device_type": 2 00:22:41.472 } 00:22:41.472 ], 00:22:41.472 "driver_specific": {} 00:22:41.472 } 00:22:41.472 ] 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.472 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:41.731 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:41.731 "name": "Existed_Raid", 00:22:41.731 "uuid": "c1897347-8bc6-466c-8593-591abdf5af87", 00:22:41.731 "strip_size_kb": 0, 00:22:41.731 "state": "configuring", 00:22:41.731 "raid_level": "raid1", 00:22:41.731 "superblock": true, 00:22:41.731 "num_base_bdevs": 4, 00:22:41.731 "num_base_bdevs_discovered": 3, 00:22:41.731 "num_base_bdevs_operational": 4, 00:22:41.731 "base_bdevs_list": [ 00:22:41.731 { 00:22:41.731 "name": "BaseBdev1", 00:22:41.731 "uuid": "8d7c24ca-bbb3-45bb-bbac-0d153474d4a8", 00:22:41.731 "is_configured": true, 00:22:41.731 "data_offset": 2048, 00:22:41.731 "data_size": 63488 00:22:41.731 }, 00:22:41.731 { 00:22:41.731 "name": "BaseBdev2", 00:22:41.731 "uuid": "36eca03a-991e-48ae-bfb0-c10fe0a3becc", 00:22:41.731 "is_configured": true, 00:22:41.731 "data_offset": 2048, 00:22:41.731 "data_size": 63488 00:22:41.731 }, 00:22:41.731 { 00:22:41.731 "name": "BaseBdev3", 00:22:41.731 "uuid": "02fe5d1c-9857-42e2-9803-15f452fe5dde", 00:22:41.731 "is_configured": true, 00:22:41.731 "data_offset": 2048, 00:22:41.731 "data_size": 63488 00:22:41.731 }, 00:22:41.731 { 00:22:41.731 "name": "BaseBdev4", 00:22:41.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.731 "is_configured": false, 00:22:41.731 "data_offset": 0, 00:22:41.731 "data_size": 0 00:22:41.731 } 00:22:41.731 ] 00:22:41.731 }' 00:22:41.731 15:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:41.731 15:16:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.989 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:42.247 [2024-07-23 15:16:37.520668] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:42.247 
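For reference, a minimal shell sketch of the RPC sequence this trace exercises; the socket path, script path, sizes, flags and bdev names are taken from the log itself, and the ordering is simplified rather than the exact bdev_raid.sh harness flow:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Create each 32 MB / 512 B-block malloc base bdev (65536 blocks, matching the dumps above)
    # and wait until it has been examined and is visible.
    for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        $rpc -s $sock bdev_malloc_create 32 512 -b $name
        $rpc -s $sock bdev_wait_for_examine
        $rpc -s $sock bdev_get_bdevs -b $name -t 2000
    done

    # Assemble the raid1 volume with a superblock (-s), as in bdev_raid.sh@305.
    $rpc -s $sock bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # Query the raid bdev state the same way verify_raid_bdev_state does:
    # it reports "configuring" until all base bdevs are discovered, then "online".
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'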
[2024-07-23 15:16:37.520906] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:22:42.247 [2024-07-23 15:16:37.520929] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:42.247 [2024-07-23 15:16:37.521048] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:22:42.247 [2024-07-23 15:16:37.521416] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:22:42.247 [2024-07-23 15:16:37.521451] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:22:42.247 [2024-07-23 15:16:37.521578] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.247 BaseBdev4 00:22:42.247 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:42.247 15:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:42.247 15:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:42.247 15:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:42.247 15:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:42.247 15:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:42.247 15:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:42.505 [ 00:22:42.505 { 00:22:42.505 "name": "BaseBdev4", 00:22:42.505 "aliases": [ 00:22:42.505 "1010648f-44c4-4d57-87f8-705b16a42f94" 00:22:42.505 ], 00:22:42.505 "product_name": "Malloc disk", 00:22:42.505 "block_size": 512, 00:22:42.505 "num_blocks": 65536, 00:22:42.505 "uuid": "1010648f-44c4-4d57-87f8-705b16a42f94", 00:22:42.505 "assigned_rate_limits": { 00:22:42.505 "rw_ios_per_sec": 0, 00:22:42.505 "rw_mbytes_per_sec": 0, 00:22:42.505 "r_mbytes_per_sec": 0, 00:22:42.505 "w_mbytes_per_sec": 0 00:22:42.505 }, 00:22:42.505 "claimed": true, 00:22:42.505 "claim_type": "exclusive_write", 00:22:42.505 "zoned": false, 00:22:42.505 "supported_io_types": { 00:22:42.505 "read": true, 00:22:42.505 "write": true, 00:22:42.505 "unmap": true, 00:22:42.505 "flush": true, 00:22:42.505 "reset": true, 00:22:42.505 "nvme_admin": false, 00:22:42.505 "nvme_io": false, 00:22:42.505 "nvme_io_md": false, 00:22:42.505 "write_zeroes": true, 00:22:42.505 "zcopy": true, 00:22:42.505 "get_zone_info": false, 00:22:42.505 "zone_management": false, 00:22:42.505 "zone_append": false, 00:22:42.505 "compare": false, 00:22:42.505 "compare_and_write": false, 00:22:42.505 "abort": true, 00:22:42.505 "seek_hole": false, 00:22:42.505 "seek_data": false, 00:22:42.505 "copy": true, 00:22:42.505 "nvme_iov_md": false 00:22:42.505 }, 00:22:42.505 "memory_domains": [ 00:22:42.505 { 00:22:42.505 "dma_device_id": "system", 00:22:42.505 "dma_device_type": 1 00:22:42.505 }, 00:22:42.505 { 00:22:42.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.505 "dma_device_type": 2 00:22:42.505 } 00:22:42.505 ], 00:22:42.505 "driver_specific": {} 00:22:42.505 } 00:22:42.505 ] 00:22:42.505 15:16:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.505 15:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.763 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:42.763 "name": "Existed_Raid", 00:22:42.763 "uuid": "c1897347-8bc6-466c-8593-591abdf5af87", 00:22:42.763 "strip_size_kb": 0, 00:22:42.763 "state": "online", 00:22:42.763 "raid_level": "raid1", 00:22:42.763 "superblock": true, 00:22:42.763 "num_base_bdevs": 4, 00:22:42.763 "num_base_bdevs_discovered": 4, 00:22:42.763 "num_base_bdevs_operational": 4, 00:22:42.763 "base_bdevs_list": [ 00:22:42.763 { 00:22:42.763 "name": "BaseBdev1", 00:22:42.763 "uuid": "8d7c24ca-bbb3-45bb-bbac-0d153474d4a8", 00:22:42.763 "is_configured": true, 00:22:42.763 "data_offset": 2048, 00:22:42.763 "data_size": 63488 00:22:42.763 }, 00:22:42.763 { 00:22:42.763 "name": "BaseBdev2", 00:22:42.763 "uuid": "36eca03a-991e-48ae-bfb0-c10fe0a3becc", 00:22:42.763 "is_configured": true, 00:22:42.763 "data_offset": 2048, 00:22:42.763 "data_size": 63488 00:22:42.763 }, 00:22:42.763 { 00:22:42.763 "name": "BaseBdev3", 00:22:42.763 "uuid": "02fe5d1c-9857-42e2-9803-15f452fe5dde", 00:22:42.763 "is_configured": true, 00:22:42.763 "data_offset": 2048, 00:22:42.763 "data_size": 63488 00:22:42.763 }, 00:22:42.763 { 00:22:42.763 "name": "BaseBdev4", 00:22:42.763 "uuid": "1010648f-44c4-4d57-87f8-705b16a42f94", 00:22:42.763 "is_configured": true, 00:22:42.763 "data_offset": 2048, 00:22:42.763 "data_size": 63488 00:22:42.763 } 00:22:42.763 ] 00:22:42.763 }' 00:22:42.763 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:42.763 15:16:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.021 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
verify_raid_bdev_properties Existed_Raid 00:22:43.021 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:43.021 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:43.021 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:43.021 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:43.021 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:43.021 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:43.021 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:43.279 [2024-07-23 15:16:38.569389] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:43.279 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:43.279 "name": "Existed_Raid", 00:22:43.279 "aliases": [ 00:22:43.279 "c1897347-8bc6-466c-8593-591abdf5af87" 00:22:43.279 ], 00:22:43.279 "product_name": "Raid Volume", 00:22:43.279 "block_size": 512, 00:22:43.279 "num_blocks": 63488, 00:22:43.279 "uuid": "c1897347-8bc6-466c-8593-591abdf5af87", 00:22:43.279 "assigned_rate_limits": { 00:22:43.279 "rw_ios_per_sec": 0, 00:22:43.279 "rw_mbytes_per_sec": 0, 00:22:43.279 "r_mbytes_per_sec": 0, 00:22:43.279 "w_mbytes_per_sec": 0 00:22:43.279 }, 00:22:43.279 "claimed": false, 00:22:43.279 "zoned": false, 00:22:43.279 "supported_io_types": { 00:22:43.279 "read": true, 00:22:43.279 "write": true, 00:22:43.279 "unmap": false, 00:22:43.279 "flush": false, 00:22:43.279 "reset": true, 00:22:43.279 "nvme_admin": false, 00:22:43.279 "nvme_io": false, 00:22:43.279 "nvme_io_md": false, 00:22:43.279 "write_zeroes": true, 00:22:43.279 "zcopy": false, 00:22:43.279 "get_zone_info": false, 00:22:43.279 "zone_management": false, 00:22:43.279 "zone_append": false, 00:22:43.279 "compare": false, 00:22:43.279 "compare_and_write": false, 00:22:43.279 "abort": false, 00:22:43.279 "seek_hole": false, 00:22:43.279 "seek_data": false, 00:22:43.279 "copy": false, 00:22:43.279 "nvme_iov_md": false 00:22:43.279 }, 00:22:43.279 "memory_domains": [ 00:22:43.279 { 00:22:43.279 "dma_device_id": "system", 00:22:43.279 "dma_device_type": 1 00:22:43.279 }, 00:22:43.279 { 00:22:43.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.280 "dma_device_type": 2 00:22:43.280 }, 00:22:43.280 { 00:22:43.280 "dma_device_id": "system", 00:22:43.280 "dma_device_type": 1 00:22:43.280 }, 00:22:43.280 { 00:22:43.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.280 "dma_device_type": 2 00:22:43.280 }, 00:22:43.280 { 00:22:43.280 "dma_device_id": "system", 00:22:43.280 "dma_device_type": 1 00:22:43.280 }, 00:22:43.280 { 00:22:43.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.280 "dma_device_type": 2 00:22:43.280 }, 00:22:43.280 { 00:22:43.280 "dma_device_id": "system", 00:22:43.280 "dma_device_type": 1 00:22:43.280 }, 00:22:43.280 { 00:22:43.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.280 "dma_device_type": 2 00:22:43.280 } 00:22:43.280 ], 00:22:43.280 "driver_specific": { 00:22:43.280 "raid": { 00:22:43.280 "uuid": "c1897347-8bc6-466c-8593-591abdf5af87", 00:22:43.280 "strip_size_kb": 0, 00:22:43.280 "state": "online", 00:22:43.280 "raid_level": "raid1", 
00:22:43.280 "superblock": true, 00:22:43.280 "num_base_bdevs": 4, 00:22:43.280 "num_base_bdevs_discovered": 4, 00:22:43.280 "num_base_bdevs_operational": 4, 00:22:43.280 "base_bdevs_list": [ 00:22:43.280 { 00:22:43.280 "name": "BaseBdev1", 00:22:43.280 "uuid": "8d7c24ca-bbb3-45bb-bbac-0d153474d4a8", 00:22:43.280 "is_configured": true, 00:22:43.280 "data_offset": 2048, 00:22:43.280 "data_size": 63488 00:22:43.280 }, 00:22:43.280 { 00:22:43.280 "name": "BaseBdev2", 00:22:43.280 "uuid": "36eca03a-991e-48ae-bfb0-c10fe0a3becc", 00:22:43.280 "is_configured": true, 00:22:43.280 "data_offset": 2048, 00:22:43.280 "data_size": 63488 00:22:43.280 }, 00:22:43.280 { 00:22:43.280 "name": "BaseBdev3", 00:22:43.280 "uuid": "02fe5d1c-9857-42e2-9803-15f452fe5dde", 00:22:43.280 "is_configured": true, 00:22:43.280 "data_offset": 2048, 00:22:43.280 "data_size": 63488 00:22:43.280 }, 00:22:43.280 { 00:22:43.280 "name": "BaseBdev4", 00:22:43.280 "uuid": "1010648f-44c4-4d57-87f8-705b16a42f94", 00:22:43.280 "is_configured": true, 00:22:43.280 "data_offset": 2048, 00:22:43.280 "data_size": 63488 00:22:43.280 } 00:22:43.280 ] 00:22:43.280 } 00:22:43.280 } 00:22:43.280 }' 00:22:43.280 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:43.280 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:43.280 BaseBdev2 00:22:43.280 BaseBdev3 00:22:43.280 BaseBdev4' 00:22:43.280 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:43.280 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:43.280 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:43.538 "name": "BaseBdev1", 00:22:43.538 "aliases": [ 00:22:43.538 "8d7c24ca-bbb3-45bb-bbac-0d153474d4a8" 00:22:43.538 ], 00:22:43.538 "product_name": "Malloc disk", 00:22:43.538 "block_size": 512, 00:22:43.538 "num_blocks": 65536, 00:22:43.538 "uuid": "8d7c24ca-bbb3-45bb-bbac-0d153474d4a8", 00:22:43.538 "assigned_rate_limits": { 00:22:43.538 "rw_ios_per_sec": 0, 00:22:43.538 "rw_mbytes_per_sec": 0, 00:22:43.538 "r_mbytes_per_sec": 0, 00:22:43.538 "w_mbytes_per_sec": 0 00:22:43.538 }, 00:22:43.538 "claimed": true, 00:22:43.538 "claim_type": "exclusive_write", 00:22:43.538 "zoned": false, 00:22:43.538 "supported_io_types": { 00:22:43.538 "read": true, 00:22:43.538 "write": true, 00:22:43.538 "unmap": true, 00:22:43.538 "flush": true, 00:22:43.538 "reset": true, 00:22:43.538 "nvme_admin": false, 00:22:43.538 "nvme_io": false, 00:22:43.538 "nvme_io_md": false, 00:22:43.538 "write_zeroes": true, 00:22:43.538 "zcopy": true, 00:22:43.538 "get_zone_info": false, 00:22:43.538 "zone_management": false, 00:22:43.538 "zone_append": false, 00:22:43.538 "compare": false, 00:22:43.538 "compare_and_write": false, 00:22:43.538 "abort": true, 00:22:43.538 "seek_hole": false, 00:22:43.538 "seek_data": false, 00:22:43.538 "copy": true, 00:22:43.538 "nvme_iov_md": false 00:22:43.538 }, 00:22:43.538 "memory_domains": [ 00:22:43.538 { 00:22:43.538 "dma_device_id": "system", 00:22:43.538 "dma_device_type": 1 00:22:43.538 }, 00:22:43.538 { 00:22:43.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:22:43.538 "dma_device_type": 2 00:22:43.538 } 00:22:43.538 ], 00:22:43.538 "driver_specific": {} 00:22:43.538 }' 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:43.538 15:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:43.796 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:43.796 "name": "BaseBdev2", 00:22:43.796 "aliases": [ 00:22:43.796 "36eca03a-991e-48ae-bfb0-c10fe0a3becc" 00:22:43.796 ], 00:22:43.796 "product_name": "Malloc disk", 00:22:43.796 "block_size": 512, 00:22:43.796 "num_blocks": 65536, 00:22:43.796 "uuid": "36eca03a-991e-48ae-bfb0-c10fe0a3becc", 00:22:43.796 "assigned_rate_limits": { 00:22:43.796 "rw_ios_per_sec": 0, 00:22:43.796 "rw_mbytes_per_sec": 0, 00:22:43.796 "r_mbytes_per_sec": 0, 00:22:43.796 "w_mbytes_per_sec": 0 00:22:43.797 }, 00:22:43.797 "claimed": true, 00:22:43.797 "claim_type": "exclusive_write", 00:22:43.797 "zoned": false, 00:22:43.797 "supported_io_types": { 00:22:43.797 "read": true, 00:22:43.797 "write": true, 00:22:43.797 "unmap": true, 00:22:43.797 "flush": true, 00:22:43.797 "reset": true, 00:22:43.797 "nvme_admin": false, 00:22:43.797 "nvme_io": false, 00:22:43.797 "nvme_io_md": false, 00:22:43.797 "write_zeroes": true, 00:22:43.797 "zcopy": true, 00:22:43.797 "get_zone_info": false, 00:22:43.797 "zone_management": false, 00:22:43.797 "zone_append": false, 00:22:43.797 "compare": false, 00:22:43.797 "compare_and_write": false, 00:22:43.797 "abort": true, 00:22:43.797 "seek_hole": false, 00:22:43.797 "seek_data": false, 00:22:43.797 "copy": true, 00:22:43.797 "nvme_iov_md": false 00:22:43.797 }, 00:22:43.797 "memory_domains": [ 00:22:43.797 { 00:22:43.797 "dma_device_id": "system", 00:22:43.797 "dma_device_type": 1 00:22:43.797 }, 00:22:43.797 { 00:22:43.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.797 "dma_device_type": 2 00:22:43.797 } 00:22:43.797 ], 00:22:43.797 "driver_specific": {} 00:22:43.797 }' 00:22:43.797 15:16:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:43.797 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:44.054 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:44.054 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:44.054 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:44.054 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:44.054 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:44.054 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:44.055 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:44.055 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:44.055 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:44.055 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:44.055 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:44.055 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:44.055 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:44.313 "name": "BaseBdev3", 00:22:44.313 "aliases": [ 00:22:44.313 "02fe5d1c-9857-42e2-9803-15f452fe5dde" 00:22:44.313 ], 00:22:44.313 "product_name": "Malloc disk", 00:22:44.313 "block_size": 512, 00:22:44.313 "num_blocks": 65536, 00:22:44.313 "uuid": "02fe5d1c-9857-42e2-9803-15f452fe5dde", 00:22:44.313 "assigned_rate_limits": { 00:22:44.313 "rw_ios_per_sec": 0, 00:22:44.313 "rw_mbytes_per_sec": 0, 00:22:44.313 "r_mbytes_per_sec": 0, 00:22:44.313 "w_mbytes_per_sec": 0 00:22:44.313 }, 00:22:44.313 "claimed": true, 00:22:44.313 "claim_type": "exclusive_write", 00:22:44.313 "zoned": false, 00:22:44.313 "supported_io_types": { 00:22:44.313 "read": true, 00:22:44.313 "write": true, 00:22:44.313 "unmap": true, 00:22:44.313 "flush": true, 00:22:44.313 "reset": true, 00:22:44.313 "nvme_admin": false, 00:22:44.313 "nvme_io": false, 00:22:44.313 "nvme_io_md": false, 00:22:44.313 "write_zeroes": true, 00:22:44.313 "zcopy": true, 00:22:44.313 "get_zone_info": false, 00:22:44.313 "zone_management": false, 00:22:44.313 "zone_append": false, 00:22:44.313 "compare": false, 00:22:44.313 "compare_and_write": false, 00:22:44.313 "abort": true, 00:22:44.313 "seek_hole": false, 00:22:44.313 "seek_data": false, 00:22:44.313 "copy": true, 00:22:44.313 "nvme_iov_md": false 00:22:44.313 }, 00:22:44.313 "memory_domains": [ 00:22:44.313 { 00:22:44.313 "dma_device_id": "system", 00:22:44.313 "dma_device_type": 1 00:22:44.313 }, 00:22:44.313 { 00:22:44.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.313 "dma_device_type": 2 00:22:44.313 } 00:22:44.313 ], 00:22:44.313 "driver_specific": {} 00:22:44.313 }' 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:44.313 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:44.601 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:44.601 "name": "BaseBdev4", 00:22:44.601 "aliases": [ 00:22:44.601 "1010648f-44c4-4d57-87f8-705b16a42f94" 00:22:44.601 ], 00:22:44.601 "product_name": "Malloc disk", 00:22:44.601 "block_size": 512, 00:22:44.601 "num_blocks": 65536, 00:22:44.601 "uuid": "1010648f-44c4-4d57-87f8-705b16a42f94", 00:22:44.601 "assigned_rate_limits": { 00:22:44.601 "rw_ios_per_sec": 0, 00:22:44.601 "rw_mbytes_per_sec": 0, 00:22:44.601 "r_mbytes_per_sec": 0, 00:22:44.601 "w_mbytes_per_sec": 0 00:22:44.601 }, 00:22:44.601 "claimed": true, 00:22:44.601 "claim_type": "exclusive_write", 00:22:44.601 "zoned": false, 00:22:44.601 "supported_io_types": { 00:22:44.601 "read": true, 00:22:44.601 "write": true, 00:22:44.601 "unmap": true, 00:22:44.601 "flush": true, 00:22:44.601 "reset": true, 00:22:44.601 "nvme_admin": false, 00:22:44.601 "nvme_io": false, 00:22:44.601 "nvme_io_md": false, 00:22:44.601 "write_zeroes": true, 00:22:44.601 "zcopy": true, 00:22:44.601 "get_zone_info": false, 00:22:44.601 "zone_management": false, 00:22:44.601 "zone_append": false, 00:22:44.601 "compare": false, 00:22:44.601 "compare_and_write": false, 00:22:44.601 "abort": true, 00:22:44.601 "seek_hole": false, 00:22:44.601 "seek_data": false, 00:22:44.601 "copy": true, 00:22:44.601 "nvme_iov_md": false 00:22:44.601 }, 00:22:44.601 "memory_domains": [ 00:22:44.601 { 00:22:44.601 "dma_device_id": "system", 00:22:44.601 "dma_device_type": 1 00:22:44.601 }, 00:22:44.601 { 00:22:44.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.601 "dma_device_type": 2 00:22:44.601 } 00:22:44.601 ], 00:22:44.601 "driver_specific": {} 00:22:44.601 }' 00:22:44.601 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:44.601 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:44.601 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:22:44.601 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:44.601 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:44.601 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:44.601 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:44.601 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:44.601 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:44.601 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:44.601 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:44.601 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:44.601 15:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:44.859 [2024-07-23 15:16:40.101508] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.859 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.117 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.117 "name": "Existed_Raid", 00:22:45.117 "uuid": "c1897347-8bc6-466c-8593-591abdf5af87", 00:22:45.117 "strip_size_kb": 0, 00:22:45.117 "state": "online", 
00:22:45.117 "raid_level": "raid1", 00:22:45.117 "superblock": true, 00:22:45.117 "num_base_bdevs": 4, 00:22:45.117 "num_base_bdevs_discovered": 3, 00:22:45.117 "num_base_bdevs_operational": 3, 00:22:45.117 "base_bdevs_list": [ 00:22:45.117 { 00:22:45.117 "name": null, 00:22:45.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.117 "is_configured": false, 00:22:45.117 "data_offset": 2048, 00:22:45.117 "data_size": 63488 00:22:45.117 }, 00:22:45.117 { 00:22:45.117 "name": "BaseBdev2", 00:22:45.117 "uuid": "36eca03a-991e-48ae-bfb0-c10fe0a3becc", 00:22:45.117 "is_configured": true, 00:22:45.117 "data_offset": 2048, 00:22:45.117 "data_size": 63488 00:22:45.117 }, 00:22:45.117 { 00:22:45.117 "name": "BaseBdev3", 00:22:45.117 "uuid": "02fe5d1c-9857-42e2-9803-15f452fe5dde", 00:22:45.117 "is_configured": true, 00:22:45.117 "data_offset": 2048, 00:22:45.117 "data_size": 63488 00:22:45.117 }, 00:22:45.117 { 00:22:45.117 "name": "BaseBdev4", 00:22:45.117 "uuid": "1010648f-44c4-4d57-87f8-705b16a42f94", 00:22:45.117 "is_configured": true, 00:22:45.117 "data_offset": 2048, 00:22:45.117 "data_size": 63488 00:22:45.117 } 00:22:45.117 ] 00:22:45.117 }' 00:22:45.117 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.117 15:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.374 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:45.374 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:45.374 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.374 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:45.632 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:45.632 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:45.632 15:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:45.891 [2024-07-23 15:16:41.082320] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:45.891 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:45.891 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:45.891 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.891 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:45.891 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:45.891 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:45.891 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:46.148 [2024-07-23 15:16:41.458934] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:46.148 15:16:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:46.148 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:46.148 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:46.148 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.406 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:46.406 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:46.406 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:46.663 [2024-07-23 15:16:41.931609] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:46.663 [2024-07-23 15:16:41.931725] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:46.663 [2024-07-23 15:16:41.944408] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:46.663 [2024-07-23 15:16:41.944464] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:46.663 [2024-07-23 15:16:41.944480] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:22:46.663 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:46.663 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:46.663 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.663 15:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:46.922 15:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:46.922 15:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:46.922 15:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:46.922 15:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:46.922 15:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:46.922 15:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:47.180 BaseBdev2 00:22:47.180 15:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:47.181 15:16:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:47.181 15:16:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:47.181 15:16:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:47.181 15:16:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:47.181 15:16:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:47.181 15:16:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:47.439 15:16:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:47.439 [ 00:22:47.439 { 00:22:47.439 "name": "BaseBdev2", 00:22:47.439 "aliases": [ 00:22:47.439 "a49e11d9-a6be-4f3f-ab1c-76038f1b0cd3" 00:22:47.439 ], 00:22:47.439 "product_name": "Malloc disk", 00:22:47.439 "block_size": 512, 00:22:47.439 "num_blocks": 65536, 00:22:47.439 "uuid": "a49e11d9-a6be-4f3f-ab1c-76038f1b0cd3", 00:22:47.439 "assigned_rate_limits": { 00:22:47.439 "rw_ios_per_sec": 0, 00:22:47.439 "rw_mbytes_per_sec": 0, 00:22:47.439 "r_mbytes_per_sec": 0, 00:22:47.439 "w_mbytes_per_sec": 0 00:22:47.439 }, 00:22:47.439 "claimed": false, 00:22:47.439 "zoned": false, 00:22:47.439 "supported_io_types": { 00:22:47.439 "read": true, 00:22:47.439 "write": true, 00:22:47.439 "unmap": true, 00:22:47.439 "flush": true, 00:22:47.439 "reset": true, 00:22:47.439 "nvme_admin": false, 00:22:47.439 "nvme_io": false, 00:22:47.439 "nvme_io_md": false, 00:22:47.439 "write_zeroes": true, 00:22:47.439 "zcopy": true, 00:22:47.439 "get_zone_info": false, 00:22:47.439 "zone_management": false, 00:22:47.439 "zone_append": false, 00:22:47.439 "compare": false, 00:22:47.439 "compare_and_write": false, 00:22:47.439 "abort": true, 00:22:47.439 "seek_hole": false, 00:22:47.439 "seek_data": false, 00:22:47.439 "copy": true, 00:22:47.439 "nvme_iov_md": false 00:22:47.439 }, 00:22:47.439 "memory_domains": [ 00:22:47.439 { 00:22:47.439 "dma_device_id": "system", 00:22:47.439 "dma_device_type": 1 00:22:47.439 }, 00:22:47.439 { 00:22:47.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.439 "dma_device_type": 2 00:22:47.439 } 00:22:47.439 ], 00:22:47.439 "driver_specific": {} 00:22:47.439 } 00:22:47.439 ] 00:22:47.439 15:16:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:47.439 15:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:47.439 15:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:47.439 15:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:47.697 BaseBdev3 00:22:47.697 15:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:47.697 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:47.697 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:47.697 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:47.697 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:47.697 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:47.697 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:47.955 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:47.955 [ 00:22:47.955 { 00:22:47.955 "name": "BaseBdev3", 00:22:47.955 "aliases": [ 00:22:47.955 "fe536b5b-df7e-492b-b350-53a99af0e076" 00:22:47.955 ], 00:22:47.955 "product_name": "Malloc disk", 00:22:47.955 "block_size": 512, 00:22:47.955 "num_blocks": 65536, 00:22:47.955 "uuid": "fe536b5b-df7e-492b-b350-53a99af0e076", 00:22:47.955 "assigned_rate_limits": { 00:22:47.955 "rw_ios_per_sec": 0, 00:22:47.955 "rw_mbytes_per_sec": 0, 00:22:47.955 "r_mbytes_per_sec": 0, 00:22:47.955 "w_mbytes_per_sec": 0 00:22:47.955 }, 00:22:47.955 "claimed": false, 00:22:47.955 "zoned": false, 00:22:47.955 "supported_io_types": { 00:22:47.955 "read": true, 00:22:47.955 "write": true, 00:22:47.955 "unmap": true, 00:22:47.955 "flush": true, 00:22:47.955 "reset": true, 00:22:47.955 "nvme_admin": false, 00:22:47.955 "nvme_io": false, 00:22:47.955 "nvme_io_md": false, 00:22:47.955 "write_zeroes": true, 00:22:47.955 "zcopy": true, 00:22:47.955 "get_zone_info": false, 00:22:47.955 "zone_management": false, 00:22:47.955 "zone_append": false, 00:22:47.955 "compare": false, 00:22:47.955 "compare_and_write": false, 00:22:47.955 "abort": true, 00:22:47.955 "seek_hole": false, 00:22:47.955 "seek_data": false, 00:22:47.955 "copy": true, 00:22:47.955 "nvme_iov_md": false 00:22:47.955 }, 00:22:47.955 "memory_domains": [ 00:22:47.955 { 00:22:47.955 "dma_device_id": "system", 00:22:47.955 "dma_device_type": 1 00:22:47.955 }, 00:22:47.955 { 00:22:47.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.955 "dma_device_type": 2 00:22:47.955 } 00:22:47.955 ], 00:22:47.955 "driver_specific": {} 00:22:47.955 } 00:22:47.955 ] 00:22:48.212 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:48.212 15:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:48.212 15:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:48.212 15:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:48.212 BaseBdev4 00:22:48.212 15:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:48.212 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:48.212 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:48.212 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:48.212 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:48.212 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:48.212 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:48.470 15:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:48.729 [ 00:22:48.729 { 00:22:48.729 "name": "BaseBdev4", 00:22:48.729 "aliases": [ 00:22:48.729 "6ee9799b-fad3-4a79-b7c2-7bd7fedc9f20" 00:22:48.729 ], 00:22:48.729 "product_name": "Malloc disk", 00:22:48.729 "block_size": 512, 00:22:48.729 
"num_blocks": 65536, 00:22:48.729 "uuid": "6ee9799b-fad3-4a79-b7c2-7bd7fedc9f20", 00:22:48.729 "assigned_rate_limits": { 00:22:48.729 "rw_ios_per_sec": 0, 00:22:48.729 "rw_mbytes_per_sec": 0, 00:22:48.729 "r_mbytes_per_sec": 0, 00:22:48.729 "w_mbytes_per_sec": 0 00:22:48.729 }, 00:22:48.729 "claimed": false, 00:22:48.729 "zoned": false, 00:22:48.729 "supported_io_types": { 00:22:48.729 "read": true, 00:22:48.729 "write": true, 00:22:48.729 "unmap": true, 00:22:48.729 "flush": true, 00:22:48.729 "reset": true, 00:22:48.729 "nvme_admin": false, 00:22:48.729 "nvme_io": false, 00:22:48.729 "nvme_io_md": false, 00:22:48.729 "write_zeroes": true, 00:22:48.729 "zcopy": true, 00:22:48.729 "get_zone_info": false, 00:22:48.729 "zone_management": false, 00:22:48.729 "zone_append": false, 00:22:48.729 "compare": false, 00:22:48.729 "compare_and_write": false, 00:22:48.729 "abort": true, 00:22:48.729 "seek_hole": false, 00:22:48.729 "seek_data": false, 00:22:48.729 "copy": true, 00:22:48.729 "nvme_iov_md": false 00:22:48.729 }, 00:22:48.729 "memory_domains": [ 00:22:48.729 { 00:22:48.729 "dma_device_id": "system", 00:22:48.729 "dma_device_type": 1 00:22:48.729 }, 00:22:48.729 { 00:22:48.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.729 "dma_device_type": 2 00:22:48.729 } 00:22:48.729 ], 00:22:48.729 "driver_specific": {} 00:22:48.729 } 00:22:48.729 ] 00:22:48.729 15:16:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:48.729 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:48.729 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:48.729 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:48.987 [2024-07-23 15:16:44.201003] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:48.987 [2024-07-23 15:16:44.201067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:48.987 [2024-07-23 15:16:44.201094] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:48.987 [2024-07-23 15:16:44.203201] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:48.987 [2024-07-23 15:16:44.203256] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:48.987 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:48.987 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:48.987 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:48.987 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:48.987 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:48.987 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:48.987 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:48.987 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:22:48.988 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:48.988 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:48.988 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.988 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.988 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:48.988 "name": "Existed_Raid", 00:22:48.988 "uuid": "9527876b-fcf5-4044-9efe-26611cda2cd1", 00:22:48.988 "strip_size_kb": 0, 00:22:48.988 "state": "configuring", 00:22:48.988 "raid_level": "raid1", 00:22:48.988 "superblock": true, 00:22:48.988 "num_base_bdevs": 4, 00:22:48.988 "num_base_bdevs_discovered": 3, 00:22:48.988 "num_base_bdevs_operational": 4, 00:22:48.988 "base_bdevs_list": [ 00:22:48.988 { 00:22:48.988 "name": "BaseBdev1", 00:22:48.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.988 "is_configured": false, 00:22:48.988 "data_offset": 0, 00:22:48.988 "data_size": 0 00:22:48.988 }, 00:22:48.988 { 00:22:48.988 "name": "BaseBdev2", 00:22:48.988 "uuid": "a49e11d9-a6be-4f3f-ab1c-76038f1b0cd3", 00:22:48.988 "is_configured": true, 00:22:48.988 "data_offset": 2048, 00:22:48.988 "data_size": 63488 00:22:48.988 }, 00:22:48.988 { 00:22:48.988 "name": "BaseBdev3", 00:22:48.988 "uuid": "fe536b5b-df7e-492b-b350-53a99af0e076", 00:22:48.988 "is_configured": true, 00:22:48.988 "data_offset": 2048, 00:22:48.988 "data_size": 63488 00:22:48.988 }, 00:22:48.988 { 00:22:48.988 "name": "BaseBdev4", 00:22:48.988 "uuid": "6ee9799b-fad3-4a79-b7c2-7bd7fedc9f20", 00:22:48.988 "is_configured": true, 00:22:48.988 "data_offset": 2048, 00:22:48.988 "data_size": 63488 00:22:48.988 } 00:22:48.988 ] 00:22:48.988 }' 00:22:48.988 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:48.988 15:16:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.553 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:49.553 [2024-07-23 15:16:44.977151] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:49.810 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:49.810 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:49.810 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:49.810 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:49.810 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:49.810 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:49.810 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.810 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.810 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:22:49.810 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.810 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.810 15:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.810 15:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:49.810 "name": "Existed_Raid", 00:22:49.810 "uuid": "9527876b-fcf5-4044-9efe-26611cda2cd1", 00:22:49.810 "strip_size_kb": 0, 00:22:49.810 "state": "configuring", 00:22:49.810 "raid_level": "raid1", 00:22:49.810 "superblock": true, 00:22:49.810 "num_base_bdevs": 4, 00:22:49.810 "num_base_bdevs_discovered": 2, 00:22:49.810 "num_base_bdevs_operational": 4, 00:22:49.810 "base_bdevs_list": [ 00:22:49.810 { 00:22:49.810 "name": "BaseBdev1", 00:22:49.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.810 "is_configured": false, 00:22:49.810 "data_offset": 0, 00:22:49.810 "data_size": 0 00:22:49.810 }, 00:22:49.810 { 00:22:49.810 "name": null, 00:22:49.810 "uuid": "a49e11d9-a6be-4f3f-ab1c-76038f1b0cd3", 00:22:49.810 "is_configured": false, 00:22:49.810 "data_offset": 2048, 00:22:49.810 "data_size": 63488 00:22:49.810 }, 00:22:49.810 { 00:22:49.810 "name": "BaseBdev3", 00:22:49.810 "uuid": "fe536b5b-df7e-492b-b350-53a99af0e076", 00:22:49.810 "is_configured": true, 00:22:49.810 "data_offset": 2048, 00:22:49.810 "data_size": 63488 00:22:49.810 }, 00:22:49.810 { 00:22:49.810 "name": "BaseBdev4", 00:22:49.810 "uuid": "6ee9799b-fad3-4a79-b7c2-7bd7fedc9f20", 00:22:49.810 "is_configured": true, 00:22:49.810 "data_offset": 2048, 00:22:49.810 "data_size": 63488 00:22:49.810 } 00:22:49.810 ] 00:22:49.810 }' 00:22:49.810 15:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:49.810 15:16:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.068 15:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:50.068 15:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.325 15:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:50.325 15:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:50.583 [2024-07-23 15:16:45.860511] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:50.583 BaseBdev1 00:22:50.583 15:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:50.583 15:16:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:50.583 15:16:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:50.583 15:16:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:50.583 15:16:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:50.583 15:16:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:22:50.583 15:16:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:50.841 15:16:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:51.099 [ 00:22:51.099 { 00:22:51.099 "name": "BaseBdev1", 00:22:51.099 "aliases": [ 00:22:51.099 "de951a4e-5606-460f-a3c6-aa9696296b57" 00:22:51.099 ], 00:22:51.099 "product_name": "Malloc disk", 00:22:51.099 "block_size": 512, 00:22:51.099 "num_blocks": 65536, 00:22:51.099 "uuid": "de951a4e-5606-460f-a3c6-aa9696296b57", 00:22:51.099 "assigned_rate_limits": { 00:22:51.099 "rw_ios_per_sec": 0, 00:22:51.099 "rw_mbytes_per_sec": 0, 00:22:51.099 "r_mbytes_per_sec": 0, 00:22:51.099 "w_mbytes_per_sec": 0 00:22:51.099 }, 00:22:51.099 "claimed": true, 00:22:51.099 "claim_type": "exclusive_write", 00:22:51.099 "zoned": false, 00:22:51.099 "supported_io_types": { 00:22:51.099 "read": true, 00:22:51.099 "write": true, 00:22:51.099 "unmap": true, 00:22:51.099 "flush": true, 00:22:51.099 "reset": true, 00:22:51.099 "nvme_admin": false, 00:22:51.099 "nvme_io": false, 00:22:51.099 "nvme_io_md": false, 00:22:51.099 "write_zeroes": true, 00:22:51.099 "zcopy": true, 00:22:51.099 "get_zone_info": false, 00:22:51.099 "zone_management": false, 00:22:51.099 "zone_append": false, 00:22:51.099 "compare": false, 00:22:51.099 "compare_and_write": false, 00:22:51.099 "abort": true, 00:22:51.099 "seek_hole": false, 00:22:51.099 "seek_data": false, 00:22:51.099 "copy": true, 00:22:51.099 "nvme_iov_md": false 00:22:51.099 }, 00:22:51.099 "memory_domains": [ 00:22:51.099 { 00:22:51.099 "dma_device_id": "system", 00:22:51.099 "dma_device_type": 1 00:22:51.099 }, 00:22:51.099 { 00:22:51.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.099 "dma_device_type": 2 00:22:51.099 } 00:22:51.099 ], 00:22:51.099 "driver_specific": {} 00:22:51.099 } 00:22:51.099 ] 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
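BaseBdev1 above is backed by a plain malloc bdev, and waitforbdev simply polls bdev_get_bdevs until the name resolves. A sketch of that creation step, under the same socket and rpc.py assumptions as before:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # 32 MiB malloc bdev with a 512-byte block size, i.e. the 65536 blocks reported
  # for BaseBdev1 in the dump above.
  $rpc_py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
  # Let the bdev layer finish examining newly registered bdevs, then confirm the
  # bdev is visible; waitforbdev performs this lookup with a 2000 ms timeout.
  $rpc_py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
  $rpc_py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000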
00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.099 "name": "Existed_Raid", 00:22:51.099 "uuid": "9527876b-fcf5-4044-9efe-26611cda2cd1", 00:22:51.099 "strip_size_kb": 0, 00:22:51.099 "state": "configuring", 00:22:51.099 "raid_level": "raid1", 00:22:51.099 "superblock": true, 00:22:51.099 "num_base_bdevs": 4, 00:22:51.099 "num_base_bdevs_discovered": 3, 00:22:51.099 "num_base_bdevs_operational": 4, 00:22:51.099 "base_bdevs_list": [ 00:22:51.099 { 00:22:51.099 "name": "BaseBdev1", 00:22:51.099 "uuid": "de951a4e-5606-460f-a3c6-aa9696296b57", 00:22:51.099 "is_configured": true, 00:22:51.099 "data_offset": 2048, 00:22:51.099 "data_size": 63488 00:22:51.099 }, 00:22:51.099 { 00:22:51.099 "name": null, 00:22:51.099 "uuid": "a49e11d9-a6be-4f3f-ab1c-76038f1b0cd3", 00:22:51.099 "is_configured": false, 00:22:51.099 "data_offset": 2048, 00:22:51.099 "data_size": 63488 00:22:51.099 }, 00:22:51.099 { 00:22:51.099 "name": "BaseBdev3", 00:22:51.099 "uuid": "fe536b5b-df7e-492b-b350-53a99af0e076", 00:22:51.099 "is_configured": true, 00:22:51.099 "data_offset": 2048, 00:22:51.099 "data_size": 63488 00:22:51.099 }, 00:22:51.099 { 00:22:51.099 "name": "BaseBdev4", 00:22:51.099 "uuid": "6ee9799b-fad3-4a79-b7c2-7bd7fedc9f20", 00:22:51.099 "is_configured": true, 00:22:51.099 "data_offset": 2048, 00:22:51.099 "data_size": 63488 00:22:51.099 } 00:22:51.099 ] 00:22:51.099 }' 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.099 15:16:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.357 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.357 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:51.615 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:51.615 15:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:51.872 [2024-07-23 15:16:47.088903] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:51.872 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:51.872 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:51.872 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:51.872 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:51.872 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:51.872 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:51.872 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.872 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.872 15:16:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.872 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.872 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.872 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.872 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.872 "name": "Existed_Raid", 00:22:51.872 "uuid": "9527876b-fcf5-4044-9efe-26611cda2cd1", 00:22:51.872 "strip_size_kb": 0, 00:22:51.872 "state": "configuring", 00:22:51.872 "raid_level": "raid1", 00:22:51.872 "superblock": true, 00:22:51.872 "num_base_bdevs": 4, 00:22:51.872 "num_base_bdevs_discovered": 2, 00:22:51.872 "num_base_bdevs_operational": 4, 00:22:51.872 "base_bdevs_list": [ 00:22:51.872 { 00:22:51.873 "name": "BaseBdev1", 00:22:51.873 "uuid": "de951a4e-5606-460f-a3c6-aa9696296b57", 00:22:51.873 "is_configured": true, 00:22:51.873 "data_offset": 2048, 00:22:51.873 "data_size": 63488 00:22:51.873 }, 00:22:51.873 { 00:22:51.873 "name": null, 00:22:51.873 "uuid": "a49e11d9-a6be-4f3f-ab1c-76038f1b0cd3", 00:22:51.873 "is_configured": false, 00:22:51.873 "data_offset": 2048, 00:22:51.873 "data_size": 63488 00:22:51.873 }, 00:22:51.873 { 00:22:51.873 "name": null, 00:22:51.873 "uuid": "fe536b5b-df7e-492b-b350-53a99af0e076", 00:22:51.873 "is_configured": false, 00:22:51.873 "data_offset": 2048, 00:22:51.873 "data_size": 63488 00:22:51.873 }, 00:22:51.873 { 00:22:51.873 "name": "BaseBdev4", 00:22:51.873 "uuid": "6ee9799b-fad3-4a79-b7c2-7bd7fedc9f20", 00:22:51.873 "is_configured": true, 00:22:51.873 "data_offset": 2048, 00:22:51.873 "data_size": 63488 00:22:51.873 } 00:22:51.873 ] 00:22:51.873 }' 00:22:51.873 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.873 15:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.138 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.138 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:52.411 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:52.411 15:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:52.669 [2024-07-23 15:16:48.061161] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:52.669 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:52.669 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:52.669 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:52.669 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:52.669 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:52.669 15:16:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:52.669 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.669 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.669 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.669 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.669 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.669 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.927 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.927 "name": "Existed_Raid", 00:22:52.927 "uuid": "9527876b-fcf5-4044-9efe-26611cda2cd1", 00:22:52.927 "strip_size_kb": 0, 00:22:52.927 "state": "configuring", 00:22:52.927 "raid_level": "raid1", 00:22:52.927 "superblock": true, 00:22:52.927 "num_base_bdevs": 4, 00:22:52.927 "num_base_bdevs_discovered": 3, 00:22:52.927 "num_base_bdevs_operational": 4, 00:22:52.927 "base_bdevs_list": [ 00:22:52.927 { 00:22:52.927 "name": "BaseBdev1", 00:22:52.927 "uuid": "de951a4e-5606-460f-a3c6-aa9696296b57", 00:22:52.927 "is_configured": true, 00:22:52.927 "data_offset": 2048, 00:22:52.927 "data_size": 63488 00:22:52.927 }, 00:22:52.927 { 00:22:52.927 "name": null, 00:22:52.927 "uuid": "a49e11d9-a6be-4f3f-ab1c-76038f1b0cd3", 00:22:52.927 "is_configured": false, 00:22:52.927 "data_offset": 2048, 00:22:52.927 "data_size": 63488 00:22:52.927 }, 00:22:52.927 { 00:22:52.927 "name": "BaseBdev3", 00:22:52.927 "uuid": "fe536b5b-df7e-492b-b350-53a99af0e076", 00:22:52.927 "is_configured": true, 00:22:52.927 "data_offset": 2048, 00:22:52.927 "data_size": 63488 00:22:52.927 }, 00:22:52.927 { 00:22:52.927 "name": "BaseBdev4", 00:22:52.927 "uuid": "6ee9799b-fad3-4a79-b7c2-7bd7fedc9f20", 00:22:52.927 "is_configured": true, 00:22:52.927 "data_offset": 2048, 00:22:52.927 "data_size": 63488 00:22:52.927 } 00:22:52.927 ] 00:22:52.927 }' 00:22:52.927 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.927 15:16:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.185 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.185 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:53.444 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:53.444 15:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:53.702 [2024-07-23 15:16:49.013460] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:53.702 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:53.702 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:53.702 15:16:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:53.702 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:53.702 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:53.702 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:53.702 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:53.702 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:53.702 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:53.702 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:53.702 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.702 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.960 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:53.960 "name": "Existed_Raid", 00:22:53.960 "uuid": "9527876b-fcf5-4044-9efe-26611cda2cd1", 00:22:53.960 "strip_size_kb": 0, 00:22:53.960 "state": "configuring", 00:22:53.960 "raid_level": "raid1", 00:22:53.960 "superblock": true, 00:22:53.960 "num_base_bdevs": 4, 00:22:53.960 "num_base_bdevs_discovered": 2, 00:22:53.960 "num_base_bdevs_operational": 4, 00:22:53.960 "base_bdevs_list": [ 00:22:53.960 { 00:22:53.960 "name": null, 00:22:53.960 "uuid": "de951a4e-5606-460f-a3c6-aa9696296b57", 00:22:53.960 "is_configured": false, 00:22:53.960 "data_offset": 2048, 00:22:53.960 "data_size": 63488 00:22:53.960 }, 00:22:53.960 { 00:22:53.960 "name": null, 00:22:53.960 "uuid": "a49e11d9-a6be-4f3f-ab1c-76038f1b0cd3", 00:22:53.960 "is_configured": false, 00:22:53.960 "data_offset": 2048, 00:22:53.960 "data_size": 63488 00:22:53.960 }, 00:22:53.960 { 00:22:53.960 "name": "BaseBdev3", 00:22:53.960 "uuid": "fe536b5b-df7e-492b-b350-53a99af0e076", 00:22:53.960 "is_configured": true, 00:22:53.960 "data_offset": 2048, 00:22:53.960 "data_size": 63488 00:22:53.960 }, 00:22:53.960 { 00:22:53.960 "name": "BaseBdev4", 00:22:53.960 "uuid": "6ee9799b-fad3-4a79-b7c2-7bd7fedc9f20", 00:22:53.960 "is_configured": true, 00:22:53.960 "data_offset": 2048, 00:22:53.960 "data_size": 63488 00:22:53.960 } 00:22:53.960 ] 00:22:53.960 }' 00:22:53.960 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:53.960 15:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.219 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.219 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:54.477 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:54.477 15:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:54.735 [2024-07-23 15:16:50.114216] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:54.735 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:54.735 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:54.735 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:54.735 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:54.735 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:54.735 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:54.736 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:54.736 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:54.736 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:54.736 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:54.736 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.736 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:54.994 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:54.994 "name": "Existed_Raid", 00:22:54.994 "uuid": "9527876b-fcf5-4044-9efe-26611cda2cd1", 00:22:54.994 "strip_size_kb": 0, 00:22:54.994 "state": "configuring", 00:22:54.994 "raid_level": "raid1", 00:22:54.994 "superblock": true, 00:22:54.994 "num_base_bdevs": 4, 00:22:54.994 "num_base_bdevs_discovered": 3, 00:22:54.994 "num_base_bdevs_operational": 4, 00:22:54.994 "base_bdevs_list": [ 00:22:54.994 { 00:22:54.994 "name": null, 00:22:54.994 "uuid": "de951a4e-5606-460f-a3c6-aa9696296b57", 00:22:54.994 "is_configured": false, 00:22:54.994 "data_offset": 2048, 00:22:54.994 "data_size": 63488 00:22:54.994 }, 00:22:54.994 { 00:22:54.994 "name": "BaseBdev2", 00:22:54.994 "uuid": "a49e11d9-a6be-4f3f-ab1c-76038f1b0cd3", 00:22:54.994 "is_configured": true, 00:22:54.994 "data_offset": 2048, 00:22:54.994 "data_size": 63488 00:22:54.994 }, 00:22:54.994 { 00:22:54.994 "name": "BaseBdev3", 00:22:54.994 "uuid": "fe536b5b-df7e-492b-b350-53a99af0e076", 00:22:54.994 "is_configured": true, 00:22:54.994 "data_offset": 2048, 00:22:54.994 "data_size": 63488 00:22:54.994 }, 00:22:54.994 { 00:22:54.994 "name": "BaseBdev4", 00:22:54.994 "uuid": "6ee9799b-fad3-4a79-b7c2-7bd7fedc9f20", 00:22:54.994 "is_configured": true, 00:22:54.994 "data_offset": 2048, 00:22:54.994 "data_size": 63488 00:22:54.994 } 00:22:54.994 ] 00:22:54.994 }' 00:22:54.994 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:54.994 15:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.251 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.251 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:22:55.509 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:55.509 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.509 15:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:55.768 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u de951a4e-5606-460f-a3c6-aa9696296b57 00:22:55.768 [2024-07-23 15:16:51.173649] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:55.768 [2024-07-23 15:16:51.173854] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:22:55.768 [2024-07-23 15:16:51.173873] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:55.768 [2024-07-23 15:16:51.173948] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002600 00:22:55.768 [2024-07-23 15:16:51.174249] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:22:55.768 [2024-07-23 15:16:51.174261] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008180 00:22:55.768 [2024-07-23 15:16:51.174357] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.768 NewBaseBdev 00:22:55.768 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:55.768 15:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:55.768 15:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:55.768 15:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:55.768 15:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:55.768 15:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:55.768 15:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:56.027 15:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:56.285 [ 00:22:56.285 { 00:22:56.285 "name": "NewBaseBdev", 00:22:56.285 "aliases": [ 00:22:56.285 "de951a4e-5606-460f-a3c6-aa9696296b57" 00:22:56.285 ], 00:22:56.285 "product_name": "Malloc disk", 00:22:56.285 "block_size": 512, 00:22:56.285 "num_blocks": 65536, 00:22:56.286 "uuid": "de951a4e-5606-460f-a3c6-aa9696296b57", 00:22:56.286 "assigned_rate_limits": { 00:22:56.286 "rw_ios_per_sec": 0, 00:22:56.286 "rw_mbytes_per_sec": 0, 00:22:56.286 "r_mbytes_per_sec": 0, 00:22:56.286 "w_mbytes_per_sec": 0 00:22:56.286 }, 00:22:56.286 "claimed": true, 00:22:56.286 "claim_type": "exclusive_write", 00:22:56.286 "zoned": false, 00:22:56.286 "supported_io_types": { 00:22:56.286 "read": true, 00:22:56.286 "write": true, 00:22:56.286 "unmap": true, 00:22:56.286 "flush": true, 00:22:56.286 "reset": true, 00:22:56.286 
"nvme_admin": false, 00:22:56.286 "nvme_io": false, 00:22:56.286 "nvme_io_md": false, 00:22:56.286 "write_zeroes": true, 00:22:56.286 "zcopy": true, 00:22:56.286 "get_zone_info": false, 00:22:56.286 "zone_management": false, 00:22:56.286 "zone_append": false, 00:22:56.286 "compare": false, 00:22:56.286 "compare_and_write": false, 00:22:56.286 "abort": true, 00:22:56.286 "seek_hole": false, 00:22:56.286 "seek_data": false, 00:22:56.286 "copy": true, 00:22:56.286 "nvme_iov_md": false 00:22:56.286 }, 00:22:56.286 "memory_domains": [ 00:22:56.286 { 00:22:56.286 "dma_device_id": "system", 00:22:56.286 "dma_device_type": 1 00:22:56.286 }, 00:22:56.286 { 00:22:56.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.286 "dma_device_type": 2 00:22:56.286 } 00:22:56.286 ], 00:22:56.286 "driver_specific": {} 00:22:56.286 } 00:22:56.286 ] 00:22:56.286 15:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:56.286 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:56.286 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:56.286 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:56.286 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:56.286 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:56.286 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:56.286 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:56.286 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:56.286 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:56.286 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:56.286 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:56.286 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.544 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:56.544 "name": "Existed_Raid", 00:22:56.544 "uuid": "9527876b-fcf5-4044-9efe-26611cda2cd1", 00:22:56.544 "strip_size_kb": 0, 00:22:56.544 "state": "online", 00:22:56.544 "raid_level": "raid1", 00:22:56.544 "superblock": true, 00:22:56.544 "num_base_bdevs": 4, 00:22:56.544 "num_base_bdevs_discovered": 4, 00:22:56.544 "num_base_bdevs_operational": 4, 00:22:56.544 "base_bdevs_list": [ 00:22:56.544 { 00:22:56.544 "name": "NewBaseBdev", 00:22:56.544 "uuid": "de951a4e-5606-460f-a3c6-aa9696296b57", 00:22:56.544 "is_configured": true, 00:22:56.544 "data_offset": 2048, 00:22:56.544 "data_size": 63488 00:22:56.544 }, 00:22:56.544 { 00:22:56.544 "name": "BaseBdev2", 00:22:56.544 "uuid": "a49e11d9-a6be-4f3f-ab1c-76038f1b0cd3", 00:22:56.544 "is_configured": true, 00:22:56.544 "data_offset": 2048, 00:22:56.544 "data_size": 63488 00:22:56.544 }, 00:22:56.544 { 00:22:56.544 "name": "BaseBdev3", 00:22:56.544 "uuid": "fe536b5b-df7e-492b-b350-53a99af0e076", 00:22:56.544 "is_configured": true, 
00:22:56.544 "data_offset": 2048, 00:22:56.544 "data_size": 63488 00:22:56.544 }, 00:22:56.544 { 00:22:56.544 "name": "BaseBdev4", 00:22:56.544 "uuid": "6ee9799b-fad3-4a79-b7c2-7bd7fedc9f20", 00:22:56.544 "is_configured": true, 00:22:56.544 "data_offset": 2048, 00:22:56.544 "data_size": 63488 00:22:56.544 } 00:22:56.544 ] 00:22:56.544 }' 00:22:56.544 15:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:56.544 15:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.803 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:56.803 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:56.803 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:56.803 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:56.803 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:56.803 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:56.803 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:56.803 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:57.062 [2024-07-23 15:16:52.370354] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:57.062 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:57.062 "name": "Existed_Raid", 00:22:57.062 "aliases": [ 00:22:57.062 "9527876b-fcf5-4044-9efe-26611cda2cd1" 00:22:57.062 ], 00:22:57.062 "product_name": "Raid Volume", 00:22:57.062 "block_size": 512, 00:22:57.062 "num_blocks": 63488, 00:22:57.062 "uuid": "9527876b-fcf5-4044-9efe-26611cda2cd1", 00:22:57.062 "assigned_rate_limits": { 00:22:57.062 "rw_ios_per_sec": 0, 00:22:57.062 "rw_mbytes_per_sec": 0, 00:22:57.062 "r_mbytes_per_sec": 0, 00:22:57.062 "w_mbytes_per_sec": 0 00:22:57.062 }, 00:22:57.062 "claimed": false, 00:22:57.062 "zoned": false, 00:22:57.062 "supported_io_types": { 00:22:57.062 "read": true, 00:22:57.062 "write": true, 00:22:57.062 "unmap": false, 00:22:57.062 "flush": false, 00:22:57.062 "reset": true, 00:22:57.062 "nvme_admin": false, 00:22:57.062 "nvme_io": false, 00:22:57.062 "nvme_io_md": false, 00:22:57.062 "write_zeroes": true, 00:22:57.062 "zcopy": false, 00:22:57.062 "get_zone_info": false, 00:22:57.062 "zone_management": false, 00:22:57.062 "zone_append": false, 00:22:57.062 "compare": false, 00:22:57.062 "compare_and_write": false, 00:22:57.062 "abort": false, 00:22:57.062 "seek_hole": false, 00:22:57.062 "seek_data": false, 00:22:57.062 "copy": false, 00:22:57.062 "nvme_iov_md": false 00:22:57.062 }, 00:22:57.062 "memory_domains": [ 00:22:57.062 { 00:22:57.062 "dma_device_id": "system", 00:22:57.062 "dma_device_type": 1 00:22:57.062 }, 00:22:57.062 { 00:22:57.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.062 "dma_device_type": 2 00:22:57.062 }, 00:22:57.062 { 00:22:57.062 "dma_device_id": "system", 00:22:57.062 "dma_device_type": 1 00:22:57.062 }, 00:22:57.062 { 00:22:57.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.062 "dma_device_type": 2 00:22:57.062 }, 00:22:57.062 { 00:22:57.062 "dma_device_id": "system", 
00:22:57.062 "dma_device_type": 1 00:22:57.062 }, 00:22:57.062 { 00:22:57.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.062 "dma_device_type": 2 00:22:57.062 }, 00:22:57.062 { 00:22:57.062 "dma_device_id": "system", 00:22:57.062 "dma_device_type": 1 00:22:57.062 }, 00:22:57.062 { 00:22:57.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.062 "dma_device_type": 2 00:22:57.062 } 00:22:57.062 ], 00:22:57.062 "driver_specific": { 00:22:57.062 "raid": { 00:22:57.062 "uuid": "9527876b-fcf5-4044-9efe-26611cda2cd1", 00:22:57.062 "strip_size_kb": 0, 00:22:57.062 "state": "online", 00:22:57.062 "raid_level": "raid1", 00:22:57.062 "superblock": true, 00:22:57.062 "num_base_bdevs": 4, 00:22:57.062 "num_base_bdevs_discovered": 4, 00:22:57.062 "num_base_bdevs_operational": 4, 00:22:57.062 "base_bdevs_list": [ 00:22:57.062 { 00:22:57.062 "name": "NewBaseBdev", 00:22:57.062 "uuid": "de951a4e-5606-460f-a3c6-aa9696296b57", 00:22:57.062 "is_configured": true, 00:22:57.062 "data_offset": 2048, 00:22:57.062 "data_size": 63488 00:22:57.062 }, 00:22:57.062 { 00:22:57.062 "name": "BaseBdev2", 00:22:57.062 "uuid": "a49e11d9-a6be-4f3f-ab1c-76038f1b0cd3", 00:22:57.062 "is_configured": true, 00:22:57.062 "data_offset": 2048, 00:22:57.062 "data_size": 63488 00:22:57.062 }, 00:22:57.062 { 00:22:57.062 "name": "BaseBdev3", 00:22:57.062 "uuid": "fe536b5b-df7e-492b-b350-53a99af0e076", 00:22:57.062 "is_configured": true, 00:22:57.062 "data_offset": 2048, 00:22:57.062 "data_size": 63488 00:22:57.062 }, 00:22:57.062 { 00:22:57.062 "name": "BaseBdev4", 00:22:57.062 "uuid": "6ee9799b-fad3-4a79-b7c2-7bd7fedc9f20", 00:22:57.062 "is_configured": true, 00:22:57.062 "data_offset": 2048, 00:22:57.062 "data_size": 63488 00:22:57.062 } 00:22:57.062 ] 00:22:57.062 } 00:22:57.062 } 00:22:57.062 }' 00:22:57.062 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:57.062 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:57.062 BaseBdev2 00:22:57.062 BaseBdev3 00:22:57.062 BaseBdev4' 00:22:57.062 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:57.062 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:57.062 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:57.321 "name": "NewBaseBdev", 00:22:57.321 "aliases": [ 00:22:57.321 "de951a4e-5606-460f-a3c6-aa9696296b57" 00:22:57.321 ], 00:22:57.321 "product_name": "Malloc disk", 00:22:57.321 "block_size": 512, 00:22:57.321 "num_blocks": 65536, 00:22:57.321 "uuid": "de951a4e-5606-460f-a3c6-aa9696296b57", 00:22:57.321 "assigned_rate_limits": { 00:22:57.321 "rw_ios_per_sec": 0, 00:22:57.321 "rw_mbytes_per_sec": 0, 00:22:57.321 "r_mbytes_per_sec": 0, 00:22:57.321 "w_mbytes_per_sec": 0 00:22:57.321 }, 00:22:57.321 "claimed": true, 00:22:57.321 "claim_type": "exclusive_write", 00:22:57.321 "zoned": false, 00:22:57.321 "supported_io_types": { 00:22:57.321 "read": true, 00:22:57.321 "write": true, 00:22:57.321 "unmap": true, 00:22:57.321 "flush": true, 00:22:57.321 "reset": true, 00:22:57.321 "nvme_admin": false, 00:22:57.321 "nvme_io": false, 00:22:57.321 "nvme_io_md": false, 
00:22:57.321 "write_zeroes": true, 00:22:57.321 "zcopy": true, 00:22:57.321 "get_zone_info": false, 00:22:57.321 "zone_management": false, 00:22:57.321 "zone_append": false, 00:22:57.321 "compare": false, 00:22:57.321 "compare_and_write": false, 00:22:57.321 "abort": true, 00:22:57.321 "seek_hole": false, 00:22:57.321 "seek_data": false, 00:22:57.321 "copy": true, 00:22:57.321 "nvme_iov_md": false 00:22:57.321 }, 00:22:57.321 "memory_domains": [ 00:22:57.321 { 00:22:57.321 "dma_device_id": "system", 00:22:57.321 "dma_device_type": 1 00:22:57.321 }, 00:22:57.321 { 00:22:57.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.321 "dma_device_type": 2 00:22:57.321 } 00:22:57.321 ], 00:22:57.321 "driver_specific": {} 00:22:57.321 }' 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:57.321 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:57.579 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:57.579 "name": "BaseBdev2", 00:22:57.579 "aliases": [ 00:22:57.579 "a49e11d9-a6be-4f3f-ab1c-76038f1b0cd3" 00:22:57.579 ], 00:22:57.579 "product_name": "Malloc disk", 00:22:57.579 "block_size": 512, 00:22:57.579 "num_blocks": 65536, 00:22:57.579 "uuid": "a49e11d9-a6be-4f3f-ab1c-76038f1b0cd3", 00:22:57.579 "assigned_rate_limits": { 00:22:57.579 "rw_ios_per_sec": 0, 00:22:57.579 "rw_mbytes_per_sec": 0, 00:22:57.579 "r_mbytes_per_sec": 0, 00:22:57.579 "w_mbytes_per_sec": 0 00:22:57.579 }, 00:22:57.579 "claimed": true, 00:22:57.579 "claim_type": "exclusive_write", 00:22:57.579 "zoned": false, 00:22:57.580 "supported_io_types": { 00:22:57.580 "read": true, 00:22:57.580 "write": true, 00:22:57.580 "unmap": true, 00:22:57.580 "flush": true, 00:22:57.580 "reset": true, 00:22:57.580 "nvme_admin": false, 00:22:57.580 "nvme_io": false, 00:22:57.580 "nvme_io_md": false, 00:22:57.580 "write_zeroes": true, 00:22:57.580 "zcopy": true, 00:22:57.580 "get_zone_info": false, 00:22:57.580 "zone_management": false, 00:22:57.580 
"zone_append": false, 00:22:57.580 "compare": false, 00:22:57.580 "compare_and_write": false, 00:22:57.580 "abort": true, 00:22:57.580 "seek_hole": false, 00:22:57.580 "seek_data": false, 00:22:57.580 "copy": true, 00:22:57.580 "nvme_iov_md": false 00:22:57.580 }, 00:22:57.580 "memory_domains": [ 00:22:57.580 { 00:22:57.580 "dma_device_id": "system", 00:22:57.580 "dma_device_type": 1 00:22:57.580 }, 00:22:57.580 { 00:22:57.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.580 "dma_device_type": 2 00:22:57.580 } 00:22:57.580 ], 00:22:57.580 "driver_specific": {} 00:22:57.580 }' 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:57.580 15:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:57.838 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:57.838 "name": "BaseBdev3", 00:22:57.838 "aliases": [ 00:22:57.838 "fe536b5b-df7e-492b-b350-53a99af0e076" 00:22:57.838 ], 00:22:57.838 "product_name": "Malloc disk", 00:22:57.838 "block_size": 512, 00:22:57.838 "num_blocks": 65536, 00:22:57.838 "uuid": "fe536b5b-df7e-492b-b350-53a99af0e076", 00:22:57.838 "assigned_rate_limits": { 00:22:57.838 "rw_ios_per_sec": 0, 00:22:57.838 "rw_mbytes_per_sec": 0, 00:22:57.838 "r_mbytes_per_sec": 0, 00:22:57.838 "w_mbytes_per_sec": 0 00:22:57.838 }, 00:22:57.838 "claimed": true, 00:22:57.838 "claim_type": "exclusive_write", 00:22:57.838 "zoned": false, 00:22:57.838 "supported_io_types": { 00:22:57.838 "read": true, 00:22:57.838 "write": true, 00:22:57.838 "unmap": true, 00:22:57.838 "flush": true, 00:22:57.838 "reset": true, 00:22:57.838 "nvme_admin": false, 00:22:57.838 "nvme_io": false, 00:22:57.838 "nvme_io_md": false, 00:22:57.838 "write_zeroes": true, 00:22:57.838 "zcopy": true, 00:22:57.838 "get_zone_info": false, 00:22:57.838 "zone_management": false, 00:22:57.838 "zone_append": false, 00:22:57.838 "compare": false, 00:22:57.838 "compare_and_write": false, 00:22:57.838 "abort": true, 00:22:57.838 "seek_hole": 
false, 00:22:57.838 "seek_data": false, 00:22:57.838 "copy": true, 00:22:57.838 "nvme_iov_md": false 00:22:57.838 }, 00:22:57.838 "memory_domains": [ 00:22:57.838 { 00:22:57.838 "dma_device_id": "system", 00:22:57.838 "dma_device_type": 1 00:22:57.838 }, 00:22:57.838 { 00:22:57.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.838 "dma_device_type": 2 00:22:57.838 } 00:22:57.838 ], 00:22:57.838 "driver_specific": {} 00:22:57.838 }' 00:22:57.838 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.838 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.838 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:57.838 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.098 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.098 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:58.098 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.098 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.098 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:58.098 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.098 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.098 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:58.098 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:58.098 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:58.098 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:58.356 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:58.356 "name": "BaseBdev4", 00:22:58.356 "aliases": [ 00:22:58.356 "6ee9799b-fad3-4a79-b7c2-7bd7fedc9f20" 00:22:58.356 ], 00:22:58.356 "product_name": "Malloc disk", 00:22:58.356 "block_size": 512, 00:22:58.356 "num_blocks": 65536, 00:22:58.356 "uuid": "6ee9799b-fad3-4a79-b7c2-7bd7fedc9f20", 00:22:58.356 "assigned_rate_limits": { 00:22:58.356 "rw_ios_per_sec": 0, 00:22:58.356 "rw_mbytes_per_sec": 0, 00:22:58.356 "r_mbytes_per_sec": 0, 00:22:58.356 "w_mbytes_per_sec": 0 00:22:58.356 }, 00:22:58.356 "claimed": true, 00:22:58.356 "claim_type": "exclusive_write", 00:22:58.356 "zoned": false, 00:22:58.356 "supported_io_types": { 00:22:58.356 "read": true, 00:22:58.356 "write": true, 00:22:58.356 "unmap": true, 00:22:58.356 "flush": true, 00:22:58.356 "reset": true, 00:22:58.356 "nvme_admin": false, 00:22:58.356 "nvme_io": false, 00:22:58.356 "nvme_io_md": false, 00:22:58.356 "write_zeroes": true, 00:22:58.356 "zcopy": true, 00:22:58.356 "get_zone_info": false, 00:22:58.356 "zone_management": false, 00:22:58.356 "zone_append": false, 00:22:58.356 "compare": false, 00:22:58.356 "compare_and_write": false, 00:22:58.356 "abort": true, 00:22:58.356 "seek_hole": false, 00:22:58.356 "seek_data": false, 00:22:58.356 "copy": true, 00:22:58.356 "nvme_iov_md": false 00:22:58.356 }, 00:22:58.356 "memory_domains": [ 
00:22:58.357 { 00:22:58.357 "dma_device_id": "system", 00:22:58.357 "dma_device_type": 1 00:22:58.357 }, 00:22:58.357 { 00:22:58.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.357 "dma_device_type": 2 00:22:58.357 } 00:22:58.357 ], 00:22:58.357 "driver_specific": {} 00:22:58.357 }' 00:22:58.357 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.357 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.357 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:58.357 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.357 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.357 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:58.357 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.357 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.357 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:58.357 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.357 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.357 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:58.357 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:58.614 [2024-07-23 15:16:53.946382] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:58.614 [2024-07-23 15:16:53.946426] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:58.614 [2024-07-23 15:16:53.946506] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:58.614 [2024-07-23 15:16:53.946766] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:58.614 [2024-07-23 15:16:53.946782] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name Existed_Raid, state offline 00:22:58.614 15:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 105303 00:22:58.614 15:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 105303 ']' 00:22:58.614 15:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 105303 00:22:58.614 15:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:22:58.615 15:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:58.615 15:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105303 00:22:58.615 15:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:58.615 15:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:58.615 killing process with pid 105303 00:22:58.615 15:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105303' 
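Deleting the array by name is the last RPC step before the test application is shut down; the DEBUG lines above show the raid bdev moving from online to offline and freeing its base bdevs before the io_device is unregistered. A one-line sketch under the same assumptions:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_delete Existed_Raid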
00:22:58.615 15:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 105303 00:22:58.615 [2024-07-23 15:16:54.006896] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:58.615 15:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 105303 00:22:58.872 [2024-07-23 15:16:54.054172] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:58.872 15:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:58.872 00:22:58.872 real 0m23.250s 00:22:58.872 user 0m40.415s 00:22:58.872 sys 0m5.238s 00:22:58.872 15:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:58.872 15:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.872 ************************************ 00:22:58.872 END TEST raid_state_function_test_sb 00:22:58.872 ************************************ 00:22:59.130 15:16:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:59.130 15:16:54 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:22:59.130 15:16:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:59.130 15:16:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.130 15:16:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:59.130 ************************************ 00:22:59.130 START TEST raid_superblock_test 00:22:59.130 ************************************ 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=106253 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 106253 /var/tmp/spdk-raid.sock 00:22:59.130 15:16:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 106253 ']' 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.130 15:16:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.130 [2024-07-23 15:16:54.436714] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:22:59.130 [2024-07-23 15:16:54.437696] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106253 ] 00:22:59.388 [2024-07-23 15:16:54.590992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.388 [2024-07-23 15:16:54.636695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.388 [2024-07-23 15:16:54.681477] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:59.954 15:16:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.954 15:16:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:22:59.954 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:22:59.954 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:59.954 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:22:59.954 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:22:59.954 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:59.954 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:59.954 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:59.954 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:59.954 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:00.243 malloc1 00:23:00.243 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:00.503 [2024-07-23 15:16:55.780855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:00.503 [2024-07-23 15:16:55.780946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
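For raid_superblock_test each base bdev is built as a passthru ("pt") bdev layered on a malloc bdev and given an explicit UUID, as the malloc1/pt1 pair here shows. A sketch of one such iteration, same assumptions as before:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Backing malloc bdev (32 MiB, 512-byte blocks), then a passthru bdev on top of
  # it with a fixed, explicit UUID for the base bdev.
  $rpc_py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  $rpc_py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
      -u 00000000-0000-0000-0000-000000000001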
00:23:00.503 [2024-07-23 15:16:55.780979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:23:00.503 [2024-07-23 15:16:55.780996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.503 [2024-07-23 15:16:55.783537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.503 [2024-07-23 15:16:55.783593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:00.503 pt1 00:23:00.503 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:00.503 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:00.503 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:23:00.503 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:23:00.503 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:00.503 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:00.503 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:00.503 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:00.503 15:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:00.761 malloc2 00:23:00.761 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:01.018 [2024-07-23 15:16:56.194462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:01.018 [2024-07-23 15:16:56.194542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.018 [2024-07-23 15:16:56.194566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:23:01.018 [2024-07-23 15:16:56.194584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.019 [2024-07-23 15:16:56.197071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.019 [2024-07-23 15:16:56.197114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:01.019 pt2 00:23:01.019 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:01.019 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:01.019 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:23:01.019 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:23:01.019 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:01.019 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:01.019 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:01.019 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:01.019 15:16:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:01.019 malloc3 00:23:01.019 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:01.277 [2024-07-23 15:16:56.562869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:01.277 [2024-07-23 15:16:56.562945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.277 [2024-07-23 15:16:56.562969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:23:01.277 [2024-07-23 15:16:56.562986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.277 [2024-07-23 15:16:56.565381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.277 [2024-07-23 15:16:56.565428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:01.277 pt3 00:23:01.277 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:01.277 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:01.277 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:23:01.277 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:23:01.277 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:01.277 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:01.277 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:01.277 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:01.277 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:23:01.535 malloc4 00:23:01.535 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:01.792 [2024-07-23 15:16:56.976323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:01.792 [2024-07-23 15:16:56.976407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.792 [2024-07-23 15:16:56.976433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:23:01.792 [2024-07-23 15:16:56.976448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.792 [2024-07-23 15:16:56.979146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.792 [2024-07-23 15:16:56.979201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:01.792 pt4 00:23:01.792 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:01.792 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:01.792 15:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:23:01.792 [2024-07-23 15:16:57.156430] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:01.792 [2024-07-23 15:16:57.158684] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:01.792 [2024-07-23 15:16:57.158758] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:01.792 [2024-07-23 15:16:57.158827] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:01.792 [2024-07-23 15:16:57.159028] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008480 00:23:01.792 [2024-07-23 15:16:57.159044] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:01.792 [2024-07-23 15:16:57.159171] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:23:01.792 [2024-07-23 15:16:57.159540] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008480 00:23:01.792 [2024-07-23 15:16:57.159567] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008480 00:23:01.792 [2024-07-23 15:16:57.159699] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.792 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:01.792 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:01.792 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:01.792 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:01.792 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:01.792 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:01.792 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:01.792 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:01.792 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:01.792 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:01.792 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.792 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.050 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.050 "name": "raid_bdev1", 00:23:02.050 "uuid": "abaadd28-d644-4e09-883f-6fabd7d595bd", 00:23:02.050 "strip_size_kb": 0, 00:23:02.050 "state": "online", 00:23:02.050 "raid_level": "raid1", 00:23:02.050 "superblock": true, 00:23:02.050 "num_base_bdevs": 4, 00:23:02.050 "num_base_bdevs_discovered": 4, 00:23:02.050 "num_base_bdevs_operational": 4, 00:23:02.050 "base_bdevs_list": [ 00:23:02.050 { 00:23:02.050 "name": "pt1", 00:23:02.050 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:02.050 "is_configured": true, 00:23:02.050 "data_offset": 2048, 00:23:02.050 "data_size": 63488 00:23:02.050 }, 00:23:02.050 { 00:23:02.050 "name": "pt2", 00:23:02.050 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:23:02.050 "is_configured": true, 00:23:02.050 "data_offset": 2048, 00:23:02.050 "data_size": 63488 00:23:02.050 }, 00:23:02.050 { 00:23:02.050 "name": "pt3", 00:23:02.050 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:02.050 "is_configured": true, 00:23:02.050 "data_offset": 2048, 00:23:02.050 "data_size": 63488 00:23:02.050 }, 00:23:02.050 { 00:23:02.050 "name": "pt4", 00:23:02.050 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:02.050 "is_configured": true, 00:23:02.050 "data_offset": 2048, 00:23:02.050 "data_size": 63488 00:23:02.050 } 00:23:02.050 ] 00:23:02.050 }' 00:23:02.050 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.050 15:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.308 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:23:02.308 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:02.308 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:02.308 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:02.308 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:02.308 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:02.308 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:02.308 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:02.566 [2024-07-23 15:16:57.928789] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:02.566 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:02.566 "name": "raid_bdev1", 00:23:02.566 "aliases": [ 00:23:02.566 "abaadd28-d644-4e09-883f-6fabd7d595bd" 00:23:02.566 ], 00:23:02.566 "product_name": "Raid Volume", 00:23:02.566 "block_size": 512, 00:23:02.566 "num_blocks": 63488, 00:23:02.566 "uuid": "abaadd28-d644-4e09-883f-6fabd7d595bd", 00:23:02.566 "assigned_rate_limits": { 00:23:02.566 "rw_ios_per_sec": 0, 00:23:02.566 "rw_mbytes_per_sec": 0, 00:23:02.566 "r_mbytes_per_sec": 0, 00:23:02.566 "w_mbytes_per_sec": 0 00:23:02.566 }, 00:23:02.566 "claimed": false, 00:23:02.566 "zoned": false, 00:23:02.566 "supported_io_types": { 00:23:02.566 "read": true, 00:23:02.566 "write": true, 00:23:02.566 "unmap": false, 00:23:02.566 "flush": false, 00:23:02.566 "reset": true, 00:23:02.566 "nvme_admin": false, 00:23:02.566 "nvme_io": false, 00:23:02.566 "nvme_io_md": false, 00:23:02.566 "write_zeroes": true, 00:23:02.566 "zcopy": false, 00:23:02.566 "get_zone_info": false, 00:23:02.566 "zone_management": false, 00:23:02.566 "zone_append": false, 00:23:02.566 "compare": false, 00:23:02.566 "compare_and_write": false, 00:23:02.566 "abort": false, 00:23:02.566 "seek_hole": false, 00:23:02.566 "seek_data": false, 00:23:02.566 "copy": false, 00:23:02.566 "nvme_iov_md": false 00:23:02.566 }, 00:23:02.566 "memory_domains": [ 00:23:02.566 { 00:23:02.566 "dma_device_id": "system", 00:23:02.566 "dma_device_type": 1 00:23:02.566 }, 00:23:02.566 { 00:23:02.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.566 "dma_device_type": 2 00:23:02.566 }, 00:23:02.566 { 00:23:02.566 "dma_device_id": "system", 00:23:02.566 
"dma_device_type": 1 00:23:02.566 }, 00:23:02.566 { 00:23:02.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.566 "dma_device_type": 2 00:23:02.566 }, 00:23:02.566 { 00:23:02.566 "dma_device_id": "system", 00:23:02.566 "dma_device_type": 1 00:23:02.566 }, 00:23:02.566 { 00:23:02.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.566 "dma_device_type": 2 00:23:02.566 }, 00:23:02.566 { 00:23:02.566 "dma_device_id": "system", 00:23:02.566 "dma_device_type": 1 00:23:02.566 }, 00:23:02.566 { 00:23:02.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.566 "dma_device_type": 2 00:23:02.566 } 00:23:02.566 ], 00:23:02.566 "driver_specific": { 00:23:02.566 "raid": { 00:23:02.566 "uuid": "abaadd28-d644-4e09-883f-6fabd7d595bd", 00:23:02.566 "strip_size_kb": 0, 00:23:02.566 "state": "online", 00:23:02.566 "raid_level": "raid1", 00:23:02.566 "superblock": true, 00:23:02.566 "num_base_bdevs": 4, 00:23:02.566 "num_base_bdevs_discovered": 4, 00:23:02.566 "num_base_bdevs_operational": 4, 00:23:02.566 "base_bdevs_list": [ 00:23:02.566 { 00:23:02.566 "name": "pt1", 00:23:02.566 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:02.566 "is_configured": true, 00:23:02.566 "data_offset": 2048, 00:23:02.566 "data_size": 63488 00:23:02.566 }, 00:23:02.566 { 00:23:02.566 "name": "pt2", 00:23:02.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:02.566 "is_configured": true, 00:23:02.566 "data_offset": 2048, 00:23:02.566 "data_size": 63488 00:23:02.566 }, 00:23:02.566 { 00:23:02.566 "name": "pt3", 00:23:02.566 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:02.566 "is_configured": true, 00:23:02.566 "data_offset": 2048, 00:23:02.566 "data_size": 63488 00:23:02.566 }, 00:23:02.566 { 00:23:02.566 "name": "pt4", 00:23:02.566 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:02.566 "is_configured": true, 00:23:02.566 "data_offset": 2048, 00:23:02.566 "data_size": 63488 00:23:02.566 } 00:23:02.566 ] 00:23:02.566 } 00:23:02.566 } 00:23:02.566 }' 00:23:02.566 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:02.566 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:02.566 pt2 00:23:02.566 pt3 00:23:02.566 pt4' 00:23:02.566 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:02.566 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:02.566 15:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:02.825 "name": "pt1", 00:23:02.825 "aliases": [ 00:23:02.825 "00000000-0000-0000-0000-000000000001" 00:23:02.825 ], 00:23:02.825 "product_name": "passthru", 00:23:02.825 "block_size": 512, 00:23:02.825 "num_blocks": 65536, 00:23:02.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:02.825 "assigned_rate_limits": { 00:23:02.825 "rw_ios_per_sec": 0, 00:23:02.825 "rw_mbytes_per_sec": 0, 00:23:02.825 "r_mbytes_per_sec": 0, 00:23:02.825 "w_mbytes_per_sec": 0 00:23:02.825 }, 00:23:02.825 "claimed": true, 00:23:02.825 "claim_type": "exclusive_write", 00:23:02.825 "zoned": false, 00:23:02.825 "supported_io_types": { 00:23:02.825 "read": true, 00:23:02.825 "write": true, 00:23:02.825 "unmap": true, 00:23:02.825 "flush": true, 00:23:02.825 "reset": true, 
00:23:02.825 "nvme_admin": false, 00:23:02.825 "nvme_io": false, 00:23:02.825 "nvme_io_md": false, 00:23:02.825 "write_zeroes": true, 00:23:02.825 "zcopy": true, 00:23:02.825 "get_zone_info": false, 00:23:02.825 "zone_management": false, 00:23:02.825 "zone_append": false, 00:23:02.825 "compare": false, 00:23:02.825 "compare_and_write": false, 00:23:02.825 "abort": true, 00:23:02.825 "seek_hole": false, 00:23:02.825 "seek_data": false, 00:23:02.825 "copy": true, 00:23:02.825 "nvme_iov_md": false 00:23:02.825 }, 00:23:02.825 "memory_domains": [ 00:23:02.825 { 00:23:02.825 "dma_device_id": "system", 00:23:02.825 "dma_device_type": 1 00:23:02.825 }, 00:23:02.825 { 00:23:02.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.825 "dma_device_type": 2 00:23:02.825 } 00:23:02.825 ], 00:23:02.825 "driver_specific": { 00:23:02.825 "passthru": { 00:23:02.825 "name": "pt1", 00:23:02.825 "base_bdev_name": "malloc1" 00:23:02.825 } 00:23:02.825 } 00:23:02.825 }' 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:02.825 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:03.391 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:03.391 "name": "pt2", 00:23:03.391 "aliases": [ 00:23:03.391 "00000000-0000-0000-0000-000000000002" 00:23:03.391 ], 00:23:03.391 "product_name": "passthru", 00:23:03.391 "block_size": 512, 00:23:03.391 "num_blocks": 65536, 00:23:03.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:03.391 "assigned_rate_limits": { 00:23:03.391 "rw_ios_per_sec": 0, 00:23:03.391 "rw_mbytes_per_sec": 0, 00:23:03.391 "r_mbytes_per_sec": 0, 00:23:03.391 "w_mbytes_per_sec": 0 00:23:03.391 }, 00:23:03.391 "claimed": true, 00:23:03.391 "claim_type": "exclusive_write", 00:23:03.391 "zoned": false, 00:23:03.391 "supported_io_types": { 00:23:03.391 "read": true, 00:23:03.391 "write": true, 00:23:03.391 "unmap": true, 00:23:03.391 "flush": true, 00:23:03.391 "reset": true, 00:23:03.391 "nvme_admin": false, 00:23:03.391 "nvme_io": false, 00:23:03.391 "nvme_io_md": false, 00:23:03.391 "write_zeroes": true, 00:23:03.391 
"zcopy": true, 00:23:03.391 "get_zone_info": false, 00:23:03.391 "zone_management": false, 00:23:03.391 "zone_append": false, 00:23:03.391 "compare": false, 00:23:03.391 "compare_and_write": false, 00:23:03.391 "abort": true, 00:23:03.392 "seek_hole": false, 00:23:03.392 "seek_data": false, 00:23:03.392 "copy": true, 00:23:03.392 "nvme_iov_md": false 00:23:03.392 }, 00:23:03.392 "memory_domains": [ 00:23:03.392 { 00:23:03.392 "dma_device_id": "system", 00:23:03.392 "dma_device_type": 1 00:23:03.392 }, 00:23:03.392 { 00:23:03.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.392 "dma_device_type": 2 00:23:03.392 } 00:23:03.392 ], 00:23:03.392 "driver_specific": { 00:23:03.392 "passthru": { 00:23:03.392 "name": "pt2", 00:23:03.392 "base_bdev_name": "malloc2" 00:23:03.392 } 00:23:03.392 } 00:23:03.392 }' 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:03.392 "name": "pt3", 00:23:03.392 "aliases": [ 00:23:03.392 "00000000-0000-0000-0000-000000000003" 00:23:03.392 ], 00:23:03.392 "product_name": "passthru", 00:23:03.392 "block_size": 512, 00:23:03.392 "num_blocks": 65536, 00:23:03.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:03.392 "assigned_rate_limits": { 00:23:03.392 "rw_ios_per_sec": 0, 00:23:03.392 "rw_mbytes_per_sec": 0, 00:23:03.392 "r_mbytes_per_sec": 0, 00:23:03.392 "w_mbytes_per_sec": 0 00:23:03.392 }, 00:23:03.392 "claimed": true, 00:23:03.392 "claim_type": "exclusive_write", 00:23:03.392 "zoned": false, 00:23:03.392 "supported_io_types": { 00:23:03.392 "read": true, 00:23:03.392 "write": true, 00:23:03.392 "unmap": true, 00:23:03.392 "flush": true, 00:23:03.392 "reset": true, 00:23:03.392 "nvme_admin": false, 00:23:03.392 "nvme_io": false, 00:23:03.392 "nvme_io_md": false, 00:23:03.392 "write_zeroes": true, 00:23:03.392 "zcopy": true, 00:23:03.392 "get_zone_info": false, 00:23:03.392 "zone_management": false, 00:23:03.392 "zone_append": false, 00:23:03.392 "compare": 
false, 00:23:03.392 "compare_and_write": false, 00:23:03.392 "abort": true, 00:23:03.392 "seek_hole": false, 00:23:03.392 "seek_data": false, 00:23:03.392 "copy": true, 00:23:03.392 "nvme_iov_md": false 00:23:03.392 }, 00:23:03.392 "memory_domains": [ 00:23:03.392 { 00:23:03.392 "dma_device_id": "system", 00:23:03.392 "dma_device_type": 1 00:23:03.392 }, 00:23:03.392 { 00:23:03.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.392 "dma_device_type": 2 00:23:03.392 } 00:23:03.392 ], 00:23:03.392 "driver_specific": { 00:23:03.392 "passthru": { 00:23:03.392 "name": "pt3", 00:23:03.392 "base_bdev_name": "malloc3" 00:23:03.392 } 00:23:03.392 } 00:23:03.392 }' 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:03.392 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.650 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.650 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:03.650 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.650 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.650 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:03.650 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.650 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.650 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:03.650 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:03.650 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:03.650 15:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:03.909 "name": "pt4", 00:23:03.909 "aliases": [ 00:23:03.909 "00000000-0000-0000-0000-000000000004" 00:23:03.909 ], 00:23:03.909 "product_name": "passthru", 00:23:03.909 "block_size": 512, 00:23:03.909 "num_blocks": 65536, 00:23:03.909 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:03.909 "assigned_rate_limits": { 00:23:03.909 "rw_ios_per_sec": 0, 00:23:03.909 "rw_mbytes_per_sec": 0, 00:23:03.909 "r_mbytes_per_sec": 0, 00:23:03.909 "w_mbytes_per_sec": 0 00:23:03.909 }, 00:23:03.909 "claimed": true, 00:23:03.909 "claim_type": "exclusive_write", 00:23:03.909 "zoned": false, 00:23:03.909 "supported_io_types": { 00:23:03.909 "read": true, 00:23:03.909 "write": true, 00:23:03.909 "unmap": true, 00:23:03.909 "flush": true, 00:23:03.909 "reset": true, 00:23:03.909 "nvme_admin": false, 00:23:03.909 "nvme_io": false, 00:23:03.909 "nvme_io_md": false, 00:23:03.909 "write_zeroes": true, 00:23:03.909 "zcopy": true, 00:23:03.909 "get_zone_info": false, 00:23:03.909 "zone_management": false, 00:23:03.909 "zone_append": false, 00:23:03.909 "compare": false, 00:23:03.909 "compare_and_write": false, 00:23:03.909 "abort": true, 00:23:03.909 "seek_hole": false, 00:23:03.909 "seek_data": false, 00:23:03.909 
"copy": true, 00:23:03.909 "nvme_iov_md": false 00:23:03.909 }, 00:23:03.909 "memory_domains": [ 00:23:03.909 { 00:23:03.909 "dma_device_id": "system", 00:23:03.909 "dma_device_type": 1 00:23:03.909 }, 00:23:03.909 { 00:23:03.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.909 "dma_device_type": 2 00:23:03.909 } 00:23:03.909 ], 00:23:03.909 "driver_specific": { 00:23:03.909 "passthru": { 00:23:03.909 "name": "pt4", 00:23:03.909 "base_bdev_name": "malloc4" 00:23:03.909 } 00:23:03.909 } 00:23:03.909 }' 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:03.909 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:23:04.167 [2024-07-23 15:16:59.461179] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:04.167 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=abaadd28-d644-4e09-883f-6fabd7d595bd 00:23:04.167 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z abaadd28-d644-4e09-883f-6fabd7d595bd ']' 00:23:04.167 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:04.426 [2024-07-23 15:16:59.728879] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:04.426 [2024-07-23 15:16:59.728926] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:04.426 [2024-07-23 15:16:59.729053] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:04.426 [2024-07-23 15:16:59.729157] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:04.426 [2024-07-23 15:16:59.729170] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state offline 00:23:04.426 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.426 15:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:23:04.684 15:17:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:23:04.684 15:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:23:04.684 15:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:04.684 15:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:04.941 15:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:04.941 15:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:05.198 15:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:05.198 15:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:05.198 15:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:05.198 15:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:05.457 15:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:05.457 15:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:05.715 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:23:05.715 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:05.715 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:23:05.715 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:05.715 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.715 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.715 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.715 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.715 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.715 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.715 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.715 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:05.715 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:05.973 [2024-07-23 15:17:01.245217] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:05.973 [2024-07-23 15:17:01.247342] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:05.973 [2024-07-23 15:17:01.247399] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:05.973 [2024-07-23 15:17:01.247430] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:05.973 [2024-07-23 15:17:01.247480] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:05.973 [2024-07-23 15:17:01.247529] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:05.973 [2024-07-23 15:17:01.247553] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:05.973 [2024-07-23 15:17:01.247572] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:05.973 [2024-07-23 15:17:01.247590] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:05.973 [2024-07-23 15:17:01.247601] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state configuring 00:23:05.973 request: 00:23:05.973 { 00:23:05.973 "name": "raid_bdev1", 00:23:05.973 "raid_level": "raid1", 00:23:05.973 "base_bdevs": [ 00:23:05.973 "malloc1", 00:23:05.973 "malloc2", 00:23:05.973 "malloc3", 00:23:05.973 "malloc4" 00:23:05.973 ], 00:23:05.973 "superblock": false, 00:23:05.973 "method": "bdev_raid_create", 00:23:05.973 "req_id": 1 00:23:05.973 } 00:23:05.973 Got JSON-RPC error response 00:23:05.973 response: 00:23:05.973 { 00:23:05.973 "code": -17, 00:23:05.973 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:05.973 } 00:23:05.973 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:23:05.973 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:05.973 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:05.973 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:05.973 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.973 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:06.231 [2024-07-23 15:17:01.601249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:06.231 [2024-07-23 15:17:01.601329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.231 [2024-07-23 15:17:01.601354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x516000009080 00:23:06.231 [2024-07-23 15:17:01.601366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.231 [2024-07-23 15:17:01.603776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.231 [2024-07-23 15:17:01.603829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:06.231 [2024-07-23 15:17:01.603909] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:06.231 [2024-07-23 15:17:01.603970] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:06.231 pt1 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.231 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.489 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:06.489 "name": "raid_bdev1", 00:23:06.489 "uuid": "abaadd28-d644-4e09-883f-6fabd7d595bd", 00:23:06.489 "strip_size_kb": 0, 00:23:06.489 "state": "configuring", 00:23:06.489 "raid_level": "raid1", 00:23:06.489 "superblock": true, 00:23:06.489 "num_base_bdevs": 4, 00:23:06.489 "num_base_bdevs_discovered": 1, 00:23:06.489 "num_base_bdevs_operational": 4, 00:23:06.489 "base_bdevs_list": [ 00:23:06.489 { 00:23:06.489 "name": "pt1", 00:23:06.489 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:06.489 "is_configured": true, 00:23:06.489 "data_offset": 2048, 00:23:06.489 "data_size": 63488 00:23:06.489 }, 00:23:06.489 { 00:23:06.489 "name": null, 00:23:06.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:06.489 "is_configured": false, 00:23:06.489 "data_offset": 2048, 00:23:06.489 "data_size": 63488 00:23:06.489 }, 00:23:06.489 { 00:23:06.489 "name": null, 00:23:06.489 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:06.489 "is_configured": false, 00:23:06.489 "data_offset": 2048, 00:23:06.489 "data_size": 63488 00:23:06.489 }, 00:23:06.489 { 00:23:06.489 "name": null, 00:23:06.489 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:06.489 "is_configured": false, 00:23:06.489 "data_offset": 2048, 00:23:06.489 "data_size": 63488 00:23:06.489 } 00:23:06.489 ] 00:23:06.489 }' 00:23:06.489 15:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:23:06.489 15:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.747 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:23:06.747 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:07.005 [2024-07-23 15:17:02.361407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:07.005 [2024-07-23 15:17:02.361486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.005 [2024-07-23 15:17:02.361513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:23:07.005 [2024-07-23 15:17:02.361525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.005 [2024-07-23 15:17:02.361990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.005 [2024-07-23 15:17:02.362017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:07.005 [2024-07-23 15:17:02.362093] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:07.005 [2024-07-23 15:17:02.362114] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:07.005 pt2 00:23:07.005 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:07.263 [2024-07-23 15:17:02.545487] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:07.263 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:23:07.263 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:07.263 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:07.263 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:07.263 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:07.263 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:07.263 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.263 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.263 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.263 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.263 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.263 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.521 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.521 "name": "raid_bdev1", 00:23:07.521 "uuid": "abaadd28-d644-4e09-883f-6fabd7d595bd", 00:23:07.521 "strip_size_kb": 0, 00:23:07.521 "state": "configuring", 00:23:07.521 "raid_level": "raid1", 00:23:07.521 "superblock": true, 00:23:07.521 "num_base_bdevs": 4, 00:23:07.521 "num_base_bdevs_discovered": 1, 00:23:07.521 
"num_base_bdevs_operational": 4, 00:23:07.521 "base_bdevs_list": [ 00:23:07.522 { 00:23:07.522 "name": "pt1", 00:23:07.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:07.522 "is_configured": true, 00:23:07.522 "data_offset": 2048, 00:23:07.522 "data_size": 63488 00:23:07.522 }, 00:23:07.522 { 00:23:07.522 "name": null, 00:23:07.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:07.522 "is_configured": false, 00:23:07.522 "data_offset": 2048, 00:23:07.522 "data_size": 63488 00:23:07.522 }, 00:23:07.522 { 00:23:07.522 "name": null, 00:23:07.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:07.522 "is_configured": false, 00:23:07.522 "data_offset": 2048, 00:23:07.522 "data_size": 63488 00:23:07.522 }, 00:23:07.522 { 00:23:07.522 "name": null, 00:23:07.522 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:07.522 "is_configured": false, 00:23:07.522 "data_offset": 2048, 00:23:07.522 "data_size": 63488 00:23:07.522 } 00:23:07.522 ] 00:23:07.522 }' 00:23:07.522 15:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.522 15:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.780 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:23:07.780 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:07.780 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:08.052 [2024-07-23 15:17:03.245601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:08.052 [2024-07-23 15:17:03.245675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.052 [2024-07-23 15:17:03.245699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:23:08.052 [2024-07-23 15:17:03.245720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.052 [2024-07-23 15:17:03.246166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.052 [2024-07-23 15:17:03.246205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:08.052 [2024-07-23 15:17:03.246279] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:08.052 [2024-07-23 15:17:03.246306] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:08.052 pt2 00:23:08.052 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:08.052 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:08.052 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:08.052 [2024-07-23 15:17:03.421646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:08.052 [2024-07-23 15:17:03.421726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.052 [2024-07-23 15:17:03.421750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:23:08.052 [2024-07-23 15:17:03.421766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.052 [2024-07-23 
15:17:03.422206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.052 [2024-07-23 15:17:03.422237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:08.052 [2024-07-23 15:17:03.422311] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:08.052 [2024-07-23 15:17:03.422335] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:08.052 pt3 00:23:08.052 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:08.052 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:08.052 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:08.311 [2024-07-23 15:17:03.685689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:08.311 [2024-07-23 15:17:03.685784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.311 [2024-07-23 15:17:03.685822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:23:08.311 [2024-07-23 15:17:03.685842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.311 [2024-07-23 15:17:03.686283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.311 [2024-07-23 15:17:03.686314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:08.311 [2024-07-23 15:17:03.686387] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:08.311 [2024-07-23 15:17:03.686412] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:08.311 [2024-07-23 15:17:03.686532] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:23:08.311 [2024-07-23 15:17:03.686546] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:08.311 [2024-07-23 15:17:03.686616] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:23:08.311 [2024-07-23 15:17:03.686925] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:23:08.311 [2024-07-23 15:17:03.686946] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:23:08.311 [2024-07-23 15:17:03.687043] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:08.311 pt4 00:23:08.311 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:08.311 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:08.311 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:08.311 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:08.311 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:08.311 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:08.311 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:08.311 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:23:08.311 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:08.311 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:08.311 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:08.311 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:08.311 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.311 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.569 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:08.569 "name": "raid_bdev1", 00:23:08.569 "uuid": "abaadd28-d644-4e09-883f-6fabd7d595bd", 00:23:08.569 "strip_size_kb": 0, 00:23:08.569 "state": "online", 00:23:08.569 "raid_level": "raid1", 00:23:08.569 "superblock": true, 00:23:08.569 "num_base_bdevs": 4, 00:23:08.569 "num_base_bdevs_discovered": 4, 00:23:08.569 "num_base_bdevs_operational": 4, 00:23:08.569 "base_bdevs_list": [ 00:23:08.569 { 00:23:08.569 "name": "pt1", 00:23:08.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:08.569 "is_configured": true, 00:23:08.569 "data_offset": 2048, 00:23:08.569 "data_size": 63488 00:23:08.569 }, 00:23:08.569 { 00:23:08.569 "name": "pt2", 00:23:08.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:08.569 "is_configured": true, 00:23:08.569 "data_offset": 2048, 00:23:08.569 "data_size": 63488 00:23:08.569 }, 00:23:08.569 { 00:23:08.569 "name": "pt3", 00:23:08.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:08.569 "is_configured": true, 00:23:08.569 "data_offset": 2048, 00:23:08.569 "data_size": 63488 00:23:08.569 }, 00:23:08.569 { 00:23:08.569 "name": "pt4", 00:23:08.569 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:08.569 "is_configured": true, 00:23:08.569 "data_offset": 2048, 00:23:08.569 "data_size": 63488 00:23:08.569 } 00:23:08.569 ] 00:23:08.569 }' 00:23:08.569 15:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:08.569 15:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.827 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:23:08.827 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:08.827 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:08.827 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:08.827 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:08.827 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:08.827 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:08.827 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:09.086 [2024-07-23 15:17:04.454162] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:09.086 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:09.086 "name": "raid_bdev1", 00:23:09.086 "aliases": [ 
00:23:09.086 "abaadd28-d644-4e09-883f-6fabd7d595bd" 00:23:09.086 ], 00:23:09.086 "product_name": "Raid Volume", 00:23:09.086 "block_size": 512, 00:23:09.086 "num_blocks": 63488, 00:23:09.086 "uuid": "abaadd28-d644-4e09-883f-6fabd7d595bd", 00:23:09.086 "assigned_rate_limits": { 00:23:09.086 "rw_ios_per_sec": 0, 00:23:09.086 "rw_mbytes_per_sec": 0, 00:23:09.086 "r_mbytes_per_sec": 0, 00:23:09.086 "w_mbytes_per_sec": 0 00:23:09.086 }, 00:23:09.086 "claimed": false, 00:23:09.086 "zoned": false, 00:23:09.086 "supported_io_types": { 00:23:09.086 "read": true, 00:23:09.086 "write": true, 00:23:09.086 "unmap": false, 00:23:09.086 "flush": false, 00:23:09.086 "reset": true, 00:23:09.086 "nvme_admin": false, 00:23:09.086 "nvme_io": false, 00:23:09.086 "nvme_io_md": false, 00:23:09.086 "write_zeroes": true, 00:23:09.086 "zcopy": false, 00:23:09.086 "get_zone_info": false, 00:23:09.086 "zone_management": false, 00:23:09.086 "zone_append": false, 00:23:09.086 "compare": false, 00:23:09.086 "compare_and_write": false, 00:23:09.086 "abort": false, 00:23:09.086 "seek_hole": false, 00:23:09.086 "seek_data": false, 00:23:09.086 "copy": false, 00:23:09.086 "nvme_iov_md": false 00:23:09.086 }, 00:23:09.086 "memory_domains": [ 00:23:09.086 { 00:23:09.086 "dma_device_id": "system", 00:23:09.086 "dma_device_type": 1 00:23:09.086 }, 00:23:09.086 { 00:23:09.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.086 "dma_device_type": 2 00:23:09.086 }, 00:23:09.086 { 00:23:09.086 "dma_device_id": "system", 00:23:09.086 "dma_device_type": 1 00:23:09.086 }, 00:23:09.086 { 00:23:09.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.086 "dma_device_type": 2 00:23:09.086 }, 00:23:09.086 { 00:23:09.086 "dma_device_id": "system", 00:23:09.086 "dma_device_type": 1 00:23:09.086 }, 00:23:09.086 { 00:23:09.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.086 "dma_device_type": 2 00:23:09.086 }, 00:23:09.086 { 00:23:09.086 "dma_device_id": "system", 00:23:09.086 "dma_device_type": 1 00:23:09.086 }, 00:23:09.086 { 00:23:09.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.086 "dma_device_type": 2 00:23:09.086 } 00:23:09.086 ], 00:23:09.086 "driver_specific": { 00:23:09.086 "raid": { 00:23:09.086 "uuid": "abaadd28-d644-4e09-883f-6fabd7d595bd", 00:23:09.086 "strip_size_kb": 0, 00:23:09.086 "state": "online", 00:23:09.086 "raid_level": "raid1", 00:23:09.086 "superblock": true, 00:23:09.086 "num_base_bdevs": 4, 00:23:09.086 "num_base_bdevs_discovered": 4, 00:23:09.086 "num_base_bdevs_operational": 4, 00:23:09.086 "base_bdevs_list": [ 00:23:09.086 { 00:23:09.086 "name": "pt1", 00:23:09.086 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:09.086 "is_configured": true, 00:23:09.086 "data_offset": 2048, 00:23:09.086 "data_size": 63488 00:23:09.086 }, 00:23:09.086 { 00:23:09.086 "name": "pt2", 00:23:09.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:09.086 "is_configured": true, 00:23:09.086 "data_offset": 2048, 00:23:09.086 "data_size": 63488 00:23:09.086 }, 00:23:09.086 { 00:23:09.086 "name": "pt3", 00:23:09.086 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:09.086 "is_configured": true, 00:23:09.086 "data_offset": 2048, 00:23:09.086 "data_size": 63488 00:23:09.086 }, 00:23:09.086 { 00:23:09.086 "name": "pt4", 00:23:09.086 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:09.086 "is_configured": true, 00:23:09.086 "data_offset": 2048, 00:23:09.086 "data_size": 63488 00:23:09.086 } 00:23:09.086 ] 00:23:09.086 } 00:23:09.086 } 00:23:09.086 }' 00:23:09.086 15:17:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:09.086 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:09.086 pt2 00:23:09.086 pt3 00:23:09.086 pt4' 00:23:09.086 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:09.086 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:09.086 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:09.344 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:09.344 "name": "pt1", 00:23:09.344 "aliases": [ 00:23:09.344 "00000000-0000-0000-0000-000000000001" 00:23:09.344 ], 00:23:09.344 "product_name": "passthru", 00:23:09.344 "block_size": 512, 00:23:09.344 "num_blocks": 65536, 00:23:09.344 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:09.344 "assigned_rate_limits": { 00:23:09.344 "rw_ios_per_sec": 0, 00:23:09.344 "rw_mbytes_per_sec": 0, 00:23:09.344 "r_mbytes_per_sec": 0, 00:23:09.344 "w_mbytes_per_sec": 0 00:23:09.344 }, 00:23:09.344 "claimed": true, 00:23:09.344 "claim_type": "exclusive_write", 00:23:09.344 "zoned": false, 00:23:09.344 "supported_io_types": { 00:23:09.344 "read": true, 00:23:09.344 "write": true, 00:23:09.344 "unmap": true, 00:23:09.344 "flush": true, 00:23:09.344 "reset": true, 00:23:09.344 "nvme_admin": false, 00:23:09.344 "nvme_io": false, 00:23:09.344 "nvme_io_md": false, 00:23:09.344 "write_zeroes": true, 00:23:09.344 "zcopy": true, 00:23:09.344 "get_zone_info": false, 00:23:09.344 "zone_management": false, 00:23:09.344 "zone_append": false, 00:23:09.344 "compare": false, 00:23:09.344 "compare_and_write": false, 00:23:09.344 "abort": true, 00:23:09.344 "seek_hole": false, 00:23:09.344 "seek_data": false, 00:23:09.344 "copy": true, 00:23:09.344 "nvme_iov_md": false 00:23:09.344 }, 00:23:09.344 "memory_domains": [ 00:23:09.344 { 00:23:09.344 "dma_device_id": "system", 00:23:09.344 "dma_device_type": 1 00:23:09.344 }, 00:23:09.344 { 00:23:09.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.344 "dma_device_type": 2 00:23:09.344 } 00:23:09.344 ], 00:23:09.344 "driver_specific": { 00:23:09.344 "passthru": { 00:23:09.344 "name": "pt1", 00:23:09.344 "base_bdev_name": "malloc1" 00:23:09.344 } 00:23:09.344 } 00:23:09.344 }' 00:23:09.344 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:09.602 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:09.602 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:09.602 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:09.602 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:09.602 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:09.602 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:09.602 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:09.602 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:09.602 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:09.602 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# jq .dif_type 00:23:09.602 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:09.602 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:09.602 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:09.602 15:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:09.861 "name": "pt2", 00:23:09.861 "aliases": [ 00:23:09.861 "00000000-0000-0000-0000-000000000002" 00:23:09.861 ], 00:23:09.861 "product_name": "passthru", 00:23:09.861 "block_size": 512, 00:23:09.861 "num_blocks": 65536, 00:23:09.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:09.861 "assigned_rate_limits": { 00:23:09.861 "rw_ios_per_sec": 0, 00:23:09.861 "rw_mbytes_per_sec": 0, 00:23:09.861 "r_mbytes_per_sec": 0, 00:23:09.861 "w_mbytes_per_sec": 0 00:23:09.861 }, 00:23:09.861 "claimed": true, 00:23:09.861 "claim_type": "exclusive_write", 00:23:09.861 "zoned": false, 00:23:09.861 "supported_io_types": { 00:23:09.861 "read": true, 00:23:09.861 "write": true, 00:23:09.861 "unmap": true, 00:23:09.861 "flush": true, 00:23:09.861 "reset": true, 00:23:09.861 "nvme_admin": false, 00:23:09.861 "nvme_io": false, 00:23:09.861 "nvme_io_md": false, 00:23:09.861 "write_zeroes": true, 00:23:09.861 "zcopy": true, 00:23:09.861 "get_zone_info": false, 00:23:09.861 "zone_management": false, 00:23:09.861 "zone_append": false, 00:23:09.861 "compare": false, 00:23:09.861 "compare_and_write": false, 00:23:09.861 "abort": true, 00:23:09.861 "seek_hole": false, 00:23:09.861 "seek_data": false, 00:23:09.861 "copy": true, 00:23:09.861 "nvme_iov_md": false 00:23:09.861 }, 00:23:09.861 "memory_domains": [ 00:23:09.861 { 00:23:09.861 "dma_device_id": "system", 00:23:09.861 "dma_device_type": 1 00:23:09.861 }, 00:23:09.861 { 00:23:09.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.861 "dma_device_type": 2 00:23:09.861 } 00:23:09.861 ], 00:23:09.861 "driver_specific": { 00:23:09.861 "passthru": { 00:23:09.861 "name": "pt2", 00:23:09.861 "base_bdev_name": "malloc2" 00:23:09.861 } 00:23:09.861 } 00:23:09.861 }' 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:09.861 15:17:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:09.861 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:10.119 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:10.119 "name": "pt3", 00:23:10.119 "aliases": [ 00:23:10.119 "00000000-0000-0000-0000-000000000003" 00:23:10.119 ], 00:23:10.119 "product_name": "passthru", 00:23:10.119 "block_size": 512, 00:23:10.119 "num_blocks": 65536, 00:23:10.119 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:10.119 "assigned_rate_limits": { 00:23:10.119 "rw_ios_per_sec": 0, 00:23:10.119 "rw_mbytes_per_sec": 0, 00:23:10.119 "r_mbytes_per_sec": 0, 00:23:10.119 "w_mbytes_per_sec": 0 00:23:10.119 }, 00:23:10.119 "claimed": true, 00:23:10.119 "claim_type": "exclusive_write", 00:23:10.119 "zoned": false, 00:23:10.119 "supported_io_types": { 00:23:10.119 "read": true, 00:23:10.119 "write": true, 00:23:10.119 "unmap": true, 00:23:10.119 "flush": true, 00:23:10.119 "reset": true, 00:23:10.119 "nvme_admin": false, 00:23:10.119 "nvme_io": false, 00:23:10.119 "nvme_io_md": false, 00:23:10.119 "write_zeroes": true, 00:23:10.119 "zcopy": true, 00:23:10.119 "get_zone_info": false, 00:23:10.119 "zone_management": false, 00:23:10.119 "zone_append": false, 00:23:10.119 "compare": false, 00:23:10.119 "compare_and_write": false, 00:23:10.119 "abort": true, 00:23:10.119 "seek_hole": false, 00:23:10.119 "seek_data": false, 00:23:10.119 "copy": true, 00:23:10.119 "nvme_iov_md": false 00:23:10.119 }, 00:23:10.119 "memory_domains": [ 00:23:10.119 { 00:23:10.119 "dma_device_id": "system", 00:23:10.119 "dma_device_type": 1 00:23:10.119 }, 00:23:10.119 { 00:23:10.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.119 "dma_device_type": 2 00:23:10.119 } 00:23:10.119 ], 00:23:10.119 "driver_specific": { 00:23:10.119 "passthru": { 00:23:10.119 "name": "pt3", 00:23:10.119 "base_bdev_name": "malloc3" 00:23:10.119 } 00:23:10.119 } 00:23:10.119 }' 00:23:10.119 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:10.119 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:10.119 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:10.119 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:10.119 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:10.119 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:10.119 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:10.119 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:10.119 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:10.120 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:10.378 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:10.378 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:10.378 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:10.378 15:17:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:10.378 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:10.636 "name": "pt4", 00:23:10.636 "aliases": [ 00:23:10.636 "00000000-0000-0000-0000-000000000004" 00:23:10.636 ], 00:23:10.636 "product_name": "passthru", 00:23:10.636 "block_size": 512, 00:23:10.636 "num_blocks": 65536, 00:23:10.636 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:10.636 "assigned_rate_limits": { 00:23:10.636 "rw_ios_per_sec": 0, 00:23:10.636 "rw_mbytes_per_sec": 0, 00:23:10.636 "r_mbytes_per_sec": 0, 00:23:10.636 "w_mbytes_per_sec": 0 00:23:10.636 }, 00:23:10.636 "claimed": true, 00:23:10.636 "claim_type": "exclusive_write", 00:23:10.636 "zoned": false, 00:23:10.636 "supported_io_types": { 00:23:10.636 "read": true, 00:23:10.636 "write": true, 00:23:10.636 "unmap": true, 00:23:10.636 "flush": true, 00:23:10.636 "reset": true, 00:23:10.636 "nvme_admin": false, 00:23:10.636 "nvme_io": false, 00:23:10.636 "nvme_io_md": false, 00:23:10.636 "write_zeroes": true, 00:23:10.636 "zcopy": true, 00:23:10.636 "get_zone_info": false, 00:23:10.636 "zone_management": false, 00:23:10.636 "zone_append": false, 00:23:10.636 "compare": false, 00:23:10.636 "compare_and_write": false, 00:23:10.636 "abort": true, 00:23:10.636 "seek_hole": false, 00:23:10.636 "seek_data": false, 00:23:10.636 "copy": true, 00:23:10.636 "nvme_iov_md": false 00:23:10.636 }, 00:23:10.636 "memory_domains": [ 00:23:10.636 { 00:23:10.636 "dma_device_id": "system", 00:23:10.636 "dma_device_type": 1 00:23:10.636 }, 00:23:10.636 { 00:23:10.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.636 "dma_device_type": 2 00:23:10.636 } 00:23:10.636 ], 00:23:10.636 "driver_specific": { 00:23:10.636 "passthru": { 00:23:10.636 "name": "pt4", 00:23:10.636 "base_bdev_name": "malloc4" 00:23:10.636 } 00:23:10.636 } 00:23:10.636 }' 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:10.636 15:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:23:10.894 [2024-07-23 
15:17:06.166573] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:10.894 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' abaadd28-d644-4e09-883f-6fabd7d595bd '!=' abaadd28-d644-4e09-883f-6fabd7d595bd ']' 00:23:10.895 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:23:10.895 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:10.895 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:10.895 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:11.151 [2024-07-23 15:17:06.350380] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:11.151 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:11.151 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:11.151 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:11.151 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:11.151 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:11.151 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:11.151 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:11.151 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:11.151 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:11.151 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:11.151 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.151 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.408 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:11.408 "name": "raid_bdev1", 00:23:11.408 "uuid": "abaadd28-d644-4e09-883f-6fabd7d595bd", 00:23:11.408 "strip_size_kb": 0, 00:23:11.408 "state": "online", 00:23:11.408 "raid_level": "raid1", 00:23:11.408 "superblock": true, 00:23:11.408 "num_base_bdevs": 4, 00:23:11.408 "num_base_bdevs_discovered": 3, 00:23:11.408 "num_base_bdevs_operational": 3, 00:23:11.408 "base_bdevs_list": [ 00:23:11.408 { 00:23:11.408 "name": null, 00:23:11.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.408 "is_configured": false, 00:23:11.408 "data_offset": 2048, 00:23:11.408 "data_size": 63488 00:23:11.408 }, 00:23:11.408 { 00:23:11.408 "name": "pt2", 00:23:11.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:11.408 "is_configured": true, 00:23:11.408 "data_offset": 2048, 00:23:11.408 "data_size": 63488 00:23:11.408 }, 00:23:11.408 { 00:23:11.408 "name": "pt3", 00:23:11.408 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:11.408 "is_configured": true, 00:23:11.408 "data_offset": 2048, 00:23:11.408 "data_size": 63488 00:23:11.408 }, 00:23:11.408 { 00:23:11.408 "name": "pt4", 00:23:11.408 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:11.408 "is_configured": true, 00:23:11.408 
"data_offset": 2048, 00:23:11.408 "data_size": 63488 00:23:11.408 } 00:23:11.408 ] 00:23:11.408 }' 00:23:11.408 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:11.408 15:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.666 15:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:11.923 [2024-07-23 15:17:07.138485] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:11.923 [2024-07-23 15:17:07.138700] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:11.923 [2024-07-23 15:17:07.138931] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:11.923 [2024-07-23 15:17:07.139117] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:11.923 [2024-07-23 15:17:07.139300] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:23:11.923 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:23:11.923 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.181 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:23:12.181 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:23:12.181 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:23:12.181 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:12.181 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:12.439 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:12.439 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:12.439 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:12.439 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:12.439 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:12.439 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:12.698 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:12.698 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:12.698 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:23:12.698 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:12.698 15:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:12.957 [2024-07-23 15:17:08.154643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:12.957 [2024-07-23 15:17:08.154723] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.957 [2024-07-23 15:17:08.154746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:23:12.957 [2024-07-23 15:17:08.154761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.957 [2024-07-23 15:17:08.157228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.957 [2024-07-23 15:17:08.157272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:12.957 [2024-07-23 15:17:08.157344] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:12.957 [2024-07-23 15:17:08.157386] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:12.957 pt2 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.957 "name": "raid_bdev1", 00:23:12.957 "uuid": "abaadd28-d644-4e09-883f-6fabd7d595bd", 00:23:12.957 "strip_size_kb": 0, 00:23:12.957 "state": "configuring", 00:23:12.957 "raid_level": "raid1", 00:23:12.957 "superblock": true, 00:23:12.957 "num_base_bdevs": 4, 00:23:12.957 "num_base_bdevs_discovered": 1, 00:23:12.957 "num_base_bdevs_operational": 3, 00:23:12.957 "base_bdevs_list": [ 00:23:12.957 { 00:23:12.957 "name": null, 00:23:12.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.957 "is_configured": false, 00:23:12.957 "data_offset": 2048, 00:23:12.957 "data_size": 63488 00:23:12.957 }, 00:23:12.957 { 00:23:12.957 "name": "pt2", 00:23:12.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:12.957 "is_configured": true, 00:23:12.957 "data_offset": 2048, 00:23:12.957 "data_size": 63488 00:23:12.957 }, 00:23:12.957 { 00:23:12.957 "name": null, 00:23:12.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:12.957 "is_configured": false, 00:23:12.957 "data_offset": 2048, 00:23:12.957 "data_size": 63488 00:23:12.957 }, 00:23:12.957 { 00:23:12.957 "name": null, 00:23:12.957 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:12.957 "is_configured": false, 
00:23:12.957 "data_offset": 2048, 00:23:12.957 "data_size": 63488 00:23:12.957 } 00:23:12.957 ] 00:23:12.957 }' 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.957 15:17:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.524 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:23:13.524 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:13.524 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:13.524 [2024-07-23 15:17:08.850846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:13.524 [2024-07-23 15:17:08.850932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:13.524 [2024-07-23 15:17:08.850957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:23:13.524 [2024-07-23 15:17:08.850972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:13.524 [2024-07-23 15:17:08.851389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:13.524 [2024-07-23 15:17:08.851415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:13.524 [2024-07-23 15:17:08.851484] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:13.524 [2024-07-23 15:17:08.851518] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:13.524 pt3 00:23:13.524 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:13.524 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:13.524 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:13.524 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:13.524 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:13.524 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:13.524 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:13.524 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:13.525 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:13.525 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:13.525 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.525 15:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.783 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:13.783 "name": "raid_bdev1", 00:23:13.783 "uuid": "abaadd28-d644-4e09-883f-6fabd7d595bd", 00:23:13.783 "strip_size_kb": 0, 00:23:13.783 "state": "configuring", 00:23:13.783 "raid_level": "raid1", 00:23:13.783 "superblock": true, 00:23:13.783 "num_base_bdevs": 4, 00:23:13.783 
"num_base_bdevs_discovered": 2, 00:23:13.783 "num_base_bdevs_operational": 3, 00:23:13.783 "base_bdevs_list": [ 00:23:13.783 { 00:23:13.783 "name": null, 00:23:13.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.783 "is_configured": false, 00:23:13.783 "data_offset": 2048, 00:23:13.783 "data_size": 63488 00:23:13.783 }, 00:23:13.783 { 00:23:13.783 "name": "pt2", 00:23:13.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:13.783 "is_configured": true, 00:23:13.783 "data_offset": 2048, 00:23:13.783 "data_size": 63488 00:23:13.783 }, 00:23:13.783 { 00:23:13.783 "name": "pt3", 00:23:13.783 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:13.783 "is_configured": true, 00:23:13.783 "data_offset": 2048, 00:23:13.783 "data_size": 63488 00:23:13.783 }, 00:23:13.783 { 00:23:13.783 "name": null, 00:23:13.783 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:13.783 "is_configured": false, 00:23:13.783 "data_offset": 2048, 00:23:13.783 "data_size": 63488 00:23:13.783 } 00:23:13.783 ] 00:23:13.783 }' 00:23:13.783 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:13.783 15:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.041 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:23:14.041 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:14.042 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:23:14.042 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:14.300 [2024-07-23 15:17:09.486945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:14.300 [2024-07-23 15:17:09.487024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.300 [2024-07-23 15:17:09.487047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:23:14.300 [2024-07-23 15:17:09.487062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.300 [2024-07-23 15:17:09.487478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.300 [2024-07-23 15:17:09.487500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:14.300 [2024-07-23 15:17:09.487571] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:14.300 [2024-07-23 15:17:09.487597] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:14.300 [2024-07-23 15:17:09.487703] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:23:14.300 [2024-07-23 15:17:09.487715] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:14.300 [2024-07-23 15:17:09.487781] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:23:14.300 [2024-07-23 15:17:09.488102] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:23:14.300 [2024-07-23 15:17:09.488121] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:23:14.300 [2024-07-23 15:17:09.488221] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.300 pt4 00:23:14.300 15:17:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:14.300 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:14.300 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:14.300 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:14.300 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:14.300 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:14.300 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:14.300 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:14.300 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:14.300 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:14.300 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.300 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.300 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:14.300 "name": "raid_bdev1", 00:23:14.300 "uuid": "abaadd28-d644-4e09-883f-6fabd7d595bd", 00:23:14.300 "strip_size_kb": 0, 00:23:14.300 "state": "online", 00:23:14.300 "raid_level": "raid1", 00:23:14.300 "superblock": true, 00:23:14.300 "num_base_bdevs": 4, 00:23:14.300 "num_base_bdevs_discovered": 3, 00:23:14.300 "num_base_bdevs_operational": 3, 00:23:14.300 "base_bdevs_list": [ 00:23:14.300 { 00:23:14.300 "name": null, 00:23:14.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.300 "is_configured": false, 00:23:14.300 "data_offset": 2048, 00:23:14.300 "data_size": 63488 00:23:14.300 }, 00:23:14.300 { 00:23:14.300 "name": "pt2", 00:23:14.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:14.300 "is_configured": true, 00:23:14.300 "data_offset": 2048, 00:23:14.300 "data_size": 63488 00:23:14.300 }, 00:23:14.300 { 00:23:14.300 "name": "pt3", 00:23:14.300 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:14.301 "is_configured": true, 00:23:14.301 "data_offset": 2048, 00:23:14.301 "data_size": 63488 00:23:14.301 }, 00:23:14.301 { 00:23:14.301 "name": "pt4", 00:23:14.301 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:14.301 "is_configured": true, 00:23:14.301 "data_offset": 2048, 00:23:14.301 "data_size": 63488 00:23:14.301 } 00:23:14.301 ] 00:23:14.301 }' 00:23:14.301 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:14.301 15:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.559 15:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:14.816 [2024-07-23 15:17:10.147098] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:14.816 [2024-07-23 15:17:10.147353] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:14.816 [2024-07-23 15:17:10.147440] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:14.817 
[2024-07-23 15:17:10.147513] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:14.817 [2024-07-23 15:17:10.147525] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:23:14.817 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.817 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:23:15.074 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:23:15.074 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:23:15.074 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:23:15.074 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:23:15.074 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:15.332 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:15.591 [2024-07-23 15:17:10.779239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:15.591 [2024-07-23 15:17:10.779321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.591 [2024-07-23 15:17:10.779350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:23:15.591 [2024-07-23 15:17:10.779363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.591 [2024-07-23 15:17:10.781729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.591 [2024-07-23 15:17:10.781773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:15.591 [2024-07-23 15:17:10.781858] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:15.591 [2024-07-23 15:17:10.781898] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:15.591 [2024-07-23 15:17:10.782009] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:15.591 [2024-07-23 15:17:10.782022] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:15.591 [2024-07-23 15:17:10.782052] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state configuring 00:23:15.591 [2024-07-23 15:17:10.782091] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:15.591 pt1 00:23:15.591 [2024-07-23 15:17:10.782193] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:15.591 15:17:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:15.591 "name": "raid_bdev1", 00:23:15.591 "uuid": "abaadd28-d644-4e09-883f-6fabd7d595bd", 00:23:15.591 "strip_size_kb": 0, 00:23:15.591 "state": "configuring", 00:23:15.591 "raid_level": "raid1", 00:23:15.591 "superblock": true, 00:23:15.591 "num_base_bdevs": 4, 00:23:15.591 "num_base_bdevs_discovered": 2, 00:23:15.591 "num_base_bdevs_operational": 3, 00:23:15.591 "base_bdevs_list": [ 00:23:15.591 { 00:23:15.591 "name": null, 00:23:15.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.591 "is_configured": false, 00:23:15.591 "data_offset": 2048, 00:23:15.591 "data_size": 63488 00:23:15.591 }, 00:23:15.591 { 00:23:15.591 "name": "pt2", 00:23:15.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:15.591 "is_configured": true, 00:23:15.591 "data_offset": 2048, 00:23:15.591 "data_size": 63488 00:23:15.591 }, 00:23:15.591 { 00:23:15.591 "name": "pt3", 00:23:15.591 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:15.591 "is_configured": true, 00:23:15.591 "data_offset": 2048, 00:23:15.591 "data_size": 63488 00:23:15.591 }, 00:23:15.591 { 00:23:15.591 "name": null, 00:23:15.591 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:15.591 "is_configured": false, 00:23:15.591 "data_offset": 2048, 00:23:15.591 "data_size": 63488 00:23:15.591 } 00:23:15.591 ] 00:23:15.591 }' 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:15.591 15:17:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.157 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:23:16.157 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:16.157 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:23:16.157 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:16.415 [2024-07-23 15:17:11.703410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:16.415 [2024-07-23 15:17:11.703629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:16.415 [2024-07-23 
15:17:11.703689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:23:16.416 [2024-07-23 15:17:11.703781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:16.416 [2024-07-23 15:17:11.704223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:16.416 [2024-07-23 15:17:11.704354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:16.416 [2024-07-23 15:17:11.704502] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:16.416 [2024-07-23 15:17:11.704603] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:16.416 [2024-07-23 15:17:11.704731] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:23:16.416 [2024-07-23 15:17:11.704746] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:16.416 [2024-07-23 15:17:11.704848] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002390 00:23:16.416 [2024-07-23 15:17:11.705146] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:23:16.416 [2024-07-23 15:17:11.705158] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:23:16.416 [2024-07-23 15:17:11.705255] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.416 pt4 00:23:16.416 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:16.416 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:16.416 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:16.416 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:16.416 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:16.416 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:16.416 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:16.416 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:16.416 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:16.416 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:16.416 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.416 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.674 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:16.674 "name": "raid_bdev1", 00:23:16.674 "uuid": "abaadd28-d644-4e09-883f-6fabd7d595bd", 00:23:16.674 "strip_size_kb": 0, 00:23:16.674 "state": "online", 00:23:16.674 "raid_level": "raid1", 00:23:16.674 "superblock": true, 00:23:16.674 "num_base_bdevs": 4, 00:23:16.674 "num_base_bdevs_discovered": 3, 00:23:16.674 "num_base_bdevs_operational": 3, 00:23:16.674 "base_bdevs_list": [ 00:23:16.674 { 00:23:16.674 "name": null, 00:23:16.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.674 "is_configured": false, 00:23:16.674 
"data_offset": 2048, 00:23:16.674 "data_size": 63488 00:23:16.674 }, 00:23:16.674 { 00:23:16.674 "name": "pt2", 00:23:16.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:16.674 "is_configured": true, 00:23:16.674 "data_offset": 2048, 00:23:16.674 "data_size": 63488 00:23:16.674 }, 00:23:16.674 { 00:23:16.674 "name": "pt3", 00:23:16.674 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:16.674 "is_configured": true, 00:23:16.674 "data_offset": 2048, 00:23:16.674 "data_size": 63488 00:23:16.674 }, 00:23:16.674 { 00:23:16.674 "name": "pt4", 00:23:16.674 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:16.674 "is_configured": true, 00:23:16.674 "data_offset": 2048, 00:23:16.674 "data_size": 63488 00:23:16.674 } 00:23:16.674 ] 00:23:16.674 }' 00:23:16.674 15:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:16.674 15:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.984 15:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:23:16.984 15:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:17.242 15:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:23:17.242 15:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:17.242 15:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:23:17.242 [2024-07-23 15:17:12.663896] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:17.500 15:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' abaadd28-d644-4e09-883f-6fabd7d595bd '!=' abaadd28-d644-4e09-883f-6fabd7d595bd ']' 00:23:17.500 15:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 106253 00:23:17.500 15:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 106253 ']' 00:23:17.500 15:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 106253 00:23:17.500 15:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:23:17.500 15:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:17.500 15:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106253 00:23:17.500 killing process with pid 106253 00:23:17.500 15:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:17.500 15:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:17.500 15:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106253' 00:23:17.500 15:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 106253 00:23:17.500 [2024-07-23 15:17:12.716200] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:17.500 15:17:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 106253 00:23:17.500 [2024-07-23 15:17:12.716284] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:17.500 [2024-07-23 15:17:12.716355] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:23:17.500 [2024-07-23 15:17:12.716366] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:23:17.500 [2024-07-23 15:17:12.763061] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:17.758 15:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:23:17.758 00:23:17.758 real 0m18.646s 00:23:17.758 user 0m32.394s 00:23:17.758 sys 0m4.088s 00:23:17.758 15:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:17.758 15:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.758 ************************************ 00:23:17.758 END TEST raid_superblock_test 00:23:17.758 ************************************ 00:23:17.758 15:17:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:17.758 15:17:13 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:23:17.758 15:17:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:17.758 15:17:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.758 15:17:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:17.758 ************************************ 00:23:17.758 START TEST raid_read_error_test 00:23:17.758 ************************************ 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:17.758 15:17:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.jAY4oh9J6p 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=107000 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 107000 /var/tmp/spdk-raid.sock 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 107000 ']' 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.758 15:17:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.758 [2024-07-23 15:17:13.140566] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
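For reference, a minimal sketch (not part of the captured trace) of the topology raid_read_error_test builds over the /var/tmp/spdk-raid.sock RPC socket once bdevperf is up. The commands mirror the rpc.py calls that appear later in this trace; the 32 MiB / 512-byte malloc geometry, the EE_* error-bdev naming, and the raid_bdev1 name are the test's defaults as shown there:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"                  # 32 MiB backing malloc bdev, 512 B blocks
        $RPC bdev_error_create "BaseBdev${i}_malloc"                             # error-injection wrapper, named EE_BaseBdev${i}_malloc in the trace
        $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"  # passthru bdev exposed to the RAID layer
    done
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s   # -s: create with superblock, as in the trace

Given error_io_type=read above, read errors are presumably injected later through these error bdevs to exercise the raid1 read path; the captured trace resumes below.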
00:23:17.758 [2024-07-23 15:17:13.140913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107000 ] 00:23:18.016 [2024-07-23 15:17:13.279030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.016 [2024-07-23 15:17:13.322719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.016 [2024-07-23 15:17:13.367184] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:18.949 15:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.949 15:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:23:18.949 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:18.949 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:18.949 BaseBdev1_malloc 00:23:18.949 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:18.949 true 00:23:18.949 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:19.207 [2024-07-23 15:17:14.522210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:19.207 [2024-07-23 15:17:14.522288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.207 [2024-07-23 15:17:14.522328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:23:19.207 [2024-07-23 15:17:14.522341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.207 [2024-07-23 15:17:14.524909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.207 [2024-07-23 15:17:14.524952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:19.207 BaseBdev1 00:23:19.207 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:19.207 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:19.465 BaseBdev2_malloc 00:23:19.465 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:19.465 true 00:23:19.723 15:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:19.723 [2024-07-23 15:17:15.059690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:19.723 [2024-07-23 15:17:15.059770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.723 [2024-07-23 15:17:15.059820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:23:19.723 [2024-07-23 15:17:15.059834] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.723 [2024-07-23 15:17:15.062416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.723 [2024-07-23 15:17:15.062458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:19.723 BaseBdev2 00:23:19.723 15:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:19.723 15:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:19.981 BaseBdev3_malloc 00:23:19.981 15:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:20.239 true 00:23:20.239 15:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:20.497 [2024-07-23 15:17:15.669945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:20.497 [2024-07-23 15:17:15.670208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.497 [2024-07-23 15:17:15.670276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:23:20.497 [2024-07-23 15:17:15.670374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.497 [2024-07-23 15:17:15.672954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.497 [2024-07-23 15:17:15.673113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:20.497 BaseBdev3 00:23:20.497 15:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:20.497 15:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:20.497 BaseBdev4_malloc 00:23:20.497 15:17:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:20.755 true 00:23:20.755 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:21.013 [2024-07-23 15:17:16.275478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:21.013 [2024-07-23 15:17:16.275553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.013 [2024-07-23 15:17:16.275585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:23:21.013 [2024-07-23 15:17:16.275598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.013 [2024-07-23 15:17:16.278223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.013 [2024-07-23 15:17:16.278265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:21.013 BaseBdev4 00:23:21.013 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:21.272 [2024-07-23 15:17:16.451600] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:21.272 [2024-07-23 15:17:16.453785] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:21.272 [2024-07-23 15:17:16.453894] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:21.272 [2024-07-23 15:17:16.453950] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:21.272 [2024-07-23 15:17:16.454188] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:23:21.272 [2024-07-23 15:17:16.454201] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:21.272 [2024-07-23 15:17:16.454306] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:23:21.272 [2024-07-23 15:17:16.454658] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:23:21.272 [2024-07-23 15:17:16.454684] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:23:21.272 [2024-07-23 15:17:16.454830] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:21.272 "name": "raid_bdev1", 00:23:21.272 "uuid": "4d1e5c17-2e74-4796-87ef-f85f63683910", 00:23:21.272 "strip_size_kb": 0, 00:23:21.272 "state": "online", 00:23:21.272 "raid_level": "raid1", 00:23:21.272 "superblock": true, 00:23:21.272 "num_base_bdevs": 4, 00:23:21.272 "num_base_bdevs_discovered": 4, 00:23:21.272 "num_base_bdevs_operational": 4, 00:23:21.272 "base_bdevs_list": [ 00:23:21.272 { 00:23:21.272 "name": "BaseBdev1", 00:23:21.272 "uuid": "79c8da20-90a4-57d2-818f-7d6925c1dce1", 00:23:21.272 "is_configured": true, 00:23:21.272 "data_offset": 2048, 00:23:21.272 "data_size": 63488 00:23:21.272 }, 00:23:21.272 { 00:23:21.272 "name": "BaseBdev2", 00:23:21.272 
"uuid": "03d50c73-a84b-58a7-8aee-b50a24a2260f", 00:23:21.272 "is_configured": true, 00:23:21.272 "data_offset": 2048, 00:23:21.272 "data_size": 63488 00:23:21.272 }, 00:23:21.272 { 00:23:21.272 "name": "BaseBdev3", 00:23:21.272 "uuid": "beaaf5e7-6dd9-5372-84fd-380d83351ec8", 00:23:21.272 "is_configured": true, 00:23:21.272 "data_offset": 2048, 00:23:21.272 "data_size": 63488 00:23:21.272 }, 00:23:21.272 { 00:23:21.272 "name": "BaseBdev4", 00:23:21.272 "uuid": "7a7a5b7a-e690-5225-a45f-d7052cb80ab3", 00:23:21.272 "is_configured": true, 00:23:21.272 "data_offset": 2048, 00:23:21.272 "data_size": 63488 00:23:21.272 } 00:23:21.272 ] 00:23:21.272 }' 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:21.272 15:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.838 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:21.838 15:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:21.838 [2024-07-23 15:17:17.044085] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:23:22.773 15:17:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:23.031 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:23.031 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:23.031 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:23:23.031 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:23:23.031 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:23.031 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:23.031 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:23.031 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:23.031 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:23.032 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:23.032 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:23.032 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:23.032 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:23.032 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:23.032 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.032 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.032 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:23.032 "name": "raid_bdev1", 00:23:23.032 "uuid": "4d1e5c17-2e74-4796-87ef-f85f63683910", 00:23:23.032 "strip_size_kb": 0, 00:23:23.032 
"state": "online", 00:23:23.032 "raid_level": "raid1", 00:23:23.032 "superblock": true, 00:23:23.032 "num_base_bdevs": 4, 00:23:23.032 "num_base_bdevs_discovered": 4, 00:23:23.032 "num_base_bdevs_operational": 4, 00:23:23.032 "base_bdevs_list": [ 00:23:23.032 { 00:23:23.032 "name": "BaseBdev1", 00:23:23.032 "uuid": "79c8da20-90a4-57d2-818f-7d6925c1dce1", 00:23:23.032 "is_configured": true, 00:23:23.032 "data_offset": 2048, 00:23:23.032 "data_size": 63488 00:23:23.032 }, 00:23:23.032 { 00:23:23.032 "name": "BaseBdev2", 00:23:23.032 "uuid": "03d50c73-a84b-58a7-8aee-b50a24a2260f", 00:23:23.032 "is_configured": true, 00:23:23.032 "data_offset": 2048, 00:23:23.032 "data_size": 63488 00:23:23.032 }, 00:23:23.032 { 00:23:23.032 "name": "BaseBdev3", 00:23:23.032 "uuid": "beaaf5e7-6dd9-5372-84fd-380d83351ec8", 00:23:23.032 "is_configured": true, 00:23:23.032 "data_offset": 2048, 00:23:23.032 "data_size": 63488 00:23:23.032 }, 00:23:23.032 { 00:23:23.032 "name": "BaseBdev4", 00:23:23.032 "uuid": "7a7a5b7a-e690-5225-a45f-d7052cb80ab3", 00:23:23.032 "is_configured": true, 00:23:23.032 "data_offset": 2048, 00:23:23.032 "data_size": 63488 00:23:23.032 } 00:23:23.032 ] 00:23:23.032 }' 00:23:23.032 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:23.032 15:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.599 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:23.599 [2024-07-23 15:17:18.957658] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:23.599 [2024-07-23 15:17:18.957710] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:23.599 [2024-07-23 15:17:18.960121] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.599 [2024-07-23 15:17:18.960175] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.599 [2024-07-23 15:17:18.960300] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.599 [2024-07-23 15:17:18.960312] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:23:23.599 0 00:23:23.599 15:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 107000 00:23:23.599 15:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 107000 ']' 00:23:23.599 15:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 107000 00:23:23.599 15:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:23:23.599 15:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:23.599 15:17:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107000 00:23:23.599 killing process with pid 107000 00:23:23.599 15:17:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:23.599 15:17:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:23.599 15:17:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107000' 00:23:23.599 15:17:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 107000 00:23:23.599 [2024-07-23 
15:17:19.016755] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:23.599 15:17:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 107000 00:23:23.857 [2024-07-23 15:17:19.052265] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:23.857 15:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.jAY4oh9J6p 00:23:23.857 15:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:23.857 15:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:24.115 15:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:23:24.115 15:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:23:24.115 15:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:24.115 15:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:24.115 15:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:24.115 00:23:24.115 real 0m6.218s 00:23:24.115 user 0m9.487s 00:23:24.115 sys 0m1.081s 00:23:24.115 15:17:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:24.115 15:17:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.115 ************************************ 00:23:24.115 END TEST raid_read_error_test 00:23:24.115 ************************************ 00:23:24.115 15:17:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:24.115 15:17:19 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:23:24.115 15:17:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:24.115 15:17:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:24.115 15:17:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:24.115 ************************************ 00:23:24.115 START TEST raid_write_error_test 00:23:24.115 ************************************ 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:23:24.115 15:17:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:24.115 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.dZ2q5mznvS 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=107172 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 107172 /var/tmp/spdk-raid.sock 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 107172 ']' 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:24.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.116 15:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.116 [2024-07-23 15:17:19.442685] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
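Note: the error tests build every base bdev as a three-layer stack over the RPC socket: a malloc bdev, an error bdev wrapped around it (which shows up as EE_BaseBdev1_malloc in the later calls), and a passthru bdev on top that the raid bdev actually claims. A minimal sketch of one leg plus the fault injection, using only rpc.py calls already traced in this log (scripts/rpc.py abbreviates the /home/vagrant/spdk_repo/spdk/scripts/rpc.py path shown above):
  # one base bdev leg: malloc -> error (EE_*) -> passthru
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  # once all four legs exist: assemble raid1 with a superblock, then fail one leg on demand
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
In the read variant above, the injected read failure leaves all four base bdevs discovered and bdevperf reports 0.00 failed I/O per second; in the write variant starting here, the failed leg is dropped from the array and the state check expects only 3 discovered base bdevs.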
00:23:24.116 [2024-07-23 15:17:19.442894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107172 ] 00:23:24.374 [2024-07-23 15:17:19.596220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.374 [2024-07-23 15:17:19.639623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.374 [2024-07-23 15:17:19.684050] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:24.940 15:17:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.940 15:17:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:23:24.940 15:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:24.940 15:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:25.198 BaseBdev1_malloc 00:23:25.198 15:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:25.456 true 00:23:25.456 15:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:25.456 [2024-07-23 15:17:20.867428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:25.456 [2024-07-23 15:17:20.867528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.456 [2024-07-23 15:17:20.867563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005d80 00:23:25.456 [2024-07-23 15:17:20.867575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.456 [2024-07-23 15:17:20.870146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.456 [2024-07-23 15:17:20.870189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:25.456 BaseBdev1 00:23:25.456 15:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:25.456 15:17:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:25.715 BaseBdev2_malloc 00:23:25.715 15:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:25.974 true 00:23:25.974 15:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:26.233 [2024-07-23 15:17:21.464881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:26.233 [2024-07-23 15:17:21.464952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.233 [2024-07-23 15:17:21.464989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006c80 00:23:26.233 [2024-07-23 
15:17:21.465002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.233 [2024-07-23 15:17:21.467492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.233 [2024-07-23 15:17:21.467535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:26.233 BaseBdev2 00:23:26.233 15:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:26.233 15:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:26.233 BaseBdev3_malloc 00:23:26.491 15:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:26.491 true 00:23:26.748 15:17:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:26.748 [2024-07-23 15:17:22.070225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:26.748 [2024-07-23 15:17:22.070303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.748 [2024-07-23 15:17:22.070333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:23:26.748 [2024-07-23 15:17:22.070345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.748 [2024-07-23 15:17:22.072986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.748 [2024-07-23 15:17:22.073028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:26.748 BaseBdev3 00:23:26.748 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:26.748 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:27.006 BaseBdev4_malloc 00:23:27.006 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:27.263 true 00:23:27.263 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:27.263 [2024-07-23 15:17:22.611646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:27.263 [2024-07-23 15:17:22.611717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.263 [2024-07-23 15:17:22.611764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:23:27.263 [2024-07-23 15:17:22.611776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.263 [2024-07-23 15:17:22.614227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.263 [2024-07-23 15:17:22.614270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:27.263 BaseBdev4 00:23:27.263 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:27.520 [2024-07-23 15:17:22.787757] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:27.520 [2024-07-23 15:17:22.789980] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:27.520 [2024-07-23 15:17:22.790071] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:27.520 [2024-07-23 15:17:22.790126] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:27.520 [2024-07-23 15:17:22.790361] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:23:27.520 [2024-07-23 15:17:22.790374] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:27.520 [2024-07-23 15:17:22.790495] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:23:27.520 [2024-07-23 15:17:22.790853] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:23:27.520 [2024-07-23 15:17:22.790870] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:23:27.520 [2024-07-23 15:17:22.791006] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:27.520 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:27.520 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:27.520 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:27.520 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:27.520 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:27.520 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:27.520 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:27.520 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:27.520 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:27.520 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:27.520 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.520 15:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.778 15:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:27.778 "name": "raid_bdev1", 00:23:27.778 "uuid": "e22dd227-7e05-4603-a927-19e81319416b", 00:23:27.778 "strip_size_kb": 0, 00:23:27.778 "state": "online", 00:23:27.778 "raid_level": "raid1", 00:23:27.778 "superblock": true, 00:23:27.778 "num_base_bdevs": 4, 00:23:27.779 "num_base_bdevs_discovered": 4, 00:23:27.779 "num_base_bdevs_operational": 4, 00:23:27.779 "base_bdevs_list": [ 00:23:27.779 { 00:23:27.779 "name": "BaseBdev1", 00:23:27.779 "uuid": "6601cde3-77df-507e-83ab-8ad8c9a4c326", 00:23:27.779 "is_configured": true, 00:23:27.779 "data_offset": 2048, 00:23:27.779 "data_size": 63488 00:23:27.779 }, 00:23:27.779 { 00:23:27.779 
"name": "BaseBdev2", 00:23:27.779 "uuid": "a68553d8-4f9e-531c-9232-d5c08a8adb3e", 00:23:27.779 "is_configured": true, 00:23:27.779 "data_offset": 2048, 00:23:27.779 "data_size": 63488 00:23:27.779 }, 00:23:27.779 { 00:23:27.779 "name": "BaseBdev3", 00:23:27.779 "uuid": "10a3cc60-41aa-54f7-9d6d-f7f524523a0f", 00:23:27.779 "is_configured": true, 00:23:27.779 "data_offset": 2048, 00:23:27.779 "data_size": 63488 00:23:27.779 }, 00:23:27.779 { 00:23:27.779 "name": "BaseBdev4", 00:23:27.779 "uuid": "c95b37dd-7da1-5759-881b-22250b260d3c", 00:23:27.779 "is_configured": true, 00:23:27.779 "data_offset": 2048, 00:23:27.779 "data_size": 63488 00:23:27.779 } 00:23:27.779 ] 00:23:27.779 }' 00:23:27.779 15:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:27.779 15:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.037 15:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:28.037 15:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:28.037 [2024-07-23 15:17:23.404256] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:23:28.973 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:29.232 [2024-07-23 15:17:24.554183] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:23:29.232 [2024-07-23 15:17:24.554258] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:29.232 [2024-07-23 15:17:24.554498] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d0000022c0 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.232 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.491 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:29.491 "name": "raid_bdev1", 00:23:29.491 "uuid": "e22dd227-7e05-4603-a927-19e81319416b", 00:23:29.491 "strip_size_kb": 0, 00:23:29.491 "state": "online", 00:23:29.491 "raid_level": "raid1", 00:23:29.491 "superblock": true, 00:23:29.491 "num_base_bdevs": 4, 00:23:29.491 "num_base_bdevs_discovered": 3, 00:23:29.491 "num_base_bdevs_operational": 3, 00:23:29.491 "base_bdevs_list": [ 00:23:29.491 { 00:23:29.491 "name": null, 00:23:29.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.491 "is_configured": false, 00:23:29.491 "data_offset": 2048, 00:23:29.491 "data_size": 63488 00:23:29.491 }, 00:23:29.491 { 00:23:29.491 "name": "BaseBdev2", 00:23:29.491 "uuid": "a68553d8-4f9e-531c-9232-d5c08a8adb3e", 00:23:29.491 "is_configured": true, 00:23:29.491 "data_offset": 2048, 00:23:29.491 "data_size": 63488 00:23:29.491 }, 00:23:29.491 { 00:23:29.491 "name": "BaseBdev3", 00:23:29.491 "uuid": "10a3cc60-41aa-54f7-9d6d-f7f524523a0f", 00:23:29.491 "is_configured": true, 00:23:29.491 "data_offset": 2048, 00:23:29.491 "data_size": 63488 00:23:29.491 }, 00:23:29.491 { 00:23:29.491 "name": "BaseBdev4", 00:23:29.491 "uuid": "c95b37dd-7da1-5759-881b-22250b260d3c", 00:23:29.491 "is_configured": true, 00:23:29.491 "data_offset": 2048, 00:23:29.491 "data_size": 63488 00:23:29.491 } 00:23:29.491 ] 00:23:29.491 }' 00:23:29.491 15:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:29.491 15:17:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.771 15:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:30.064 [2024-07-23 15:17:25.285455] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:30.064 [2024-07-23 15:17:25.285724] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:30.064 [2024-07-23 15:17:25.288240] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:30.064 [2024-07-23 15:17:25.288291] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.064 [2024-07-23 15:17:25.288388] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:30.064 [2024-07-23 15:17:25.288403] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:23:30.064 0 00:23:30.064 15:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 107172 00:23:30.064 15:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 107172 ']' 00:23:30.064 15:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 107172 00:23:30.064 15:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:23:30.064 15:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:30.064 15:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107172 00:23:30.064 15:17:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:30.064 15:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:30.064 15:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107172' 00:23:30.064 killing process with pid 107172 00:23:30.064 15:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 107172 00:23:30.064 [2024-07-23 15:17:25.342833] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:30.064 15:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 107172 00:23:30.064 [2024-07-23 15:17:25.378349] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:30.323 15:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.dZ2q5mznvS 00:23:30.323 15:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:30.323 15:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:30.323 15:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:23:30.323 15:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:23:30.323 ************************************ 00:23:30.323 END TEST raid_write_error_test 00:23:30.323 ************************************ 00:23:30.323 15:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:30.323 15:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:30.323 15:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:30.323 00:23:30.323 real 0m6.269s 00:23:30.323 user 0m9.543s 00:23:30.323 sys 0m1.124s 00:23:30.323 15:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:30.323 15:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.324 15:17:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:30.324 15:17:25 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:23:30.324 15:17:25 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:23:30.324 15:17:25 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:23:30.324 15:17:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:23:30.324 15:17:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:30.324 15:17:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:30.324 ************************************ 00:23:30.324 START TEST raid_rebuild_test 00:23:30.324 ************************************ 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false false true 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:30.324 15:17:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:30.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=107342 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 107342 /var/tmp/spdk-raid.sock 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 107342 ']' 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.324 15:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:30.584 [2024-07-23 15:17:25.769303] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:23:30.584 [2024-07-23 15:17:25.769511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107342 ] 00:23:30.584 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:30.584 Zero copy mechanism will not be used. 
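Note: the rebuild test that begins here adds one more layer: besides the two real base bdevs it prepares a spare whose malloc backing sits under a delay bdev, presumably so that writes during the rebuild are slow enough for the test to observe the process while it runs. The overall flow, assembled only from the rpc.py and dd calls traced below (names, sizes and the nbd device are the test's own; scripts/rpc.py again abbreviates the full path):
  # spare = malloc -> delay (reads undelayed, writes delayed) -> passthru
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
  # two-leg raid1 without superblock, exposed through nbd and filled with random data
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
  scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
  dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
  scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
  # degrade the array, then hand it the spare so a rebuild starts
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare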
00:23:30.584 [2024-07-23 15:17:25.919133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.584 [2024-07-23 15:17:25.965749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.584 [2024-07-23 15:17:26.010332] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:31.520 15:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.520 15:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:23:31.520 15:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:31.520 15:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:31.779 BaseBdev1_malloc 00:23:31.779 15:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:31.779 [2024-07-23 15:17:27.141755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:31.779 [2024-07-23 15:17:27.141867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.779 [2024-07-23 15:17:27.141911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:23:31.779 [2024-07-23 15:17:27.141924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.779 [2024-07-23 15:17:27.144492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.779 [2024-07-23 15:17:27.144542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:31.779 BaseBdev1 00:23:31.779 15:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:31.779 15:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:32.037 BaseBdev2_malloc 00:23:32.037 15:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:32.296 [2024-07-23 15:17:27.563340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:32.296 [2024-07-23 15:17:27.563413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.296 [2024-07-23 15:17:27.563443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:23:32.296 [2024-07-23 15:17:27.563456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.296 [2024-07-23 15:17:27.565927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.296 [2024-07-23 15:17:27.565967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:32.296 BaseBdev2 00:23:32.296 15:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:32.554 spare_malloc 00:23:32.554 15:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 
0 -t 0 -w 100000 -n 100000 00:23:32.813 spare_delay 00:23:32.813 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:32.813 [2024-07-23 15:17:28.157870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:32.813 [2024-07-23 15:17:28.157946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.813 [2024-07-23 15:17:28.157982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:23:32.813 [2024-07-23 15:17:28.157994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.813 [2024-07-23 15:17:28.160494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.813 [2024-07-23 15:17:28.160536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:32.813 spare 00:23:32.813 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:33.071 [2024-07-23 15:17:28.341990] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:33.071 [2024-07-23 15:17:28.344195] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:33.071 [2024-07-23 15:17:28.344300] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:23:33.071 [2024-07-23 15:17:28.344313] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:33.071 [2024-07-23 15:17:28.344450] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:23:33.071 [2024-07-23 15:17:28.344807] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:23:33.071 [2024-07-23 15:17:28.344833] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:23:33.071 [2024-07-23 15:17:28.344983] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.071 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:33.071 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:33.071 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:33.071 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:33.071 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:33.071 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:33.071 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:33.071 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:33.071 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:33.071 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:33.071 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.071 15:17:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.330 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:33.330 "name": "raid_bdev1", 00:23:33.330 "uuid": "1ad85933-e657-4a15-a123-122cb704a672", 00:23:33.330 "strip_size_kb": 0, 00:23:33.330 "state": "online", 00:23:33.330 "raid_level": "raid1", 00:23:33.330 "superblock": false, 00:23:33.330 "num_base_bdevs": 2, 00:23:33.330 "num_base_bdevs_discovered": 2, 00:23:33.330 "num_base_bdevs_operational": 2, 00:23:33.330 "base_bdevs_list": [ 00:23:33.330 { 00:23:33.330 "name": "BaseBdev1", 00:23:33.330 "uuid": "3394b8c3-12fa-58f7-a460-071642ebd307", 00:23:33.330 "is_configured": true, 00:23:33.330 "data_offset": 0, 00:23:33.330 "data_size": 65536 00:23:33.330 }, 00:23:33.330 { 00:23:33.330 "name": "BaseBdev2", 00:23:33.330 "uuid": "5d8b26a4-3bfd-5515-8066-838207321548", 00:23:33.330 "is_configured": true, 00:23:33.330 "data_offset": 0, 00:23:33.330 "data_size": 65536 00:23:33.330 } 00:23:33.330 ] 00:23:33.330 }' 00:23:33.330 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:33.330 15:17:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.588 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:33.588 15:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:33.847 [2024-07-23 15:17:29.034359] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:33.847 15:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:23:34.107 [2024-07-23 15:17:29.462292] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:23:34.107 /dev/nbd0 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:34.107 1+0 records in 00:23:34.107 1+0 records out 00:23:34.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202362 s, 20.2 MB/s 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:23:34.107 15:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:23:39.376 65536+0 records in 00:23:39.376 65536+0 records out 00:23:39.376 33554432 bytes (34 MB, 32 MiB) copied, 4.89591 s, 6.9 MB/s 00:23:39.376 15:17:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:39.376 15:17:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:39.376 15:17:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:39.376 15:17:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:39.376 15:17:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:39.377 15:17:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:39.377 15:17:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 
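Note: every state assertion in these tests uses the same probe: dump all raid bdevs over RPC, select raid_bdev1 with jq, and compare fields such as state, raid_level, num_base_bdevs_discovered and num_base_bdevs_operational against the expected values. The probe as traced repeatedly in this log:
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1")'
Once BaseBdev1 is removed below, this is what shows the two-leg raid1 still online but degraded, with both the discovered and operational counts down to 1.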
00:23:39.377 [2024-07-23 15:17:34.637093] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.377 15:17:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:39.377 15:17:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:39.377 15:17:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:39.377 15:17:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:39.377 15:17:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:39.377 15:17:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:39.377 15:17:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:39.377 15:17:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:39.377 15:17:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:39.635 [2024-07-23 15:17:34.825268] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:39.635 15:17:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:39.635 15:17:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:39.635 15:17:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:39.635 15:17:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:39.635 15:17:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:39.635 15:17:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:39.635 15:17:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:39.635 15:17:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:39.635 15:17:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:39.635 15:17:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:39.635 15:17:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.635 15:17:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.893 15:17:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:39.893 "name": "raid_bdev1", 00:23:39.893 "uuid": "1ad85933-e657-4a15-a123-122cb704a672", 00:23:39.893 "strip_size_kb": 0, 00:23:39.893 "state": "online", 00:23:39.893 "raid_level": "raid1", 00:23:39.893 "superblock": false, 00:23:39.893 "num_base_bdevs": 2, 00:23:39.893 "num_base_bdevs_discovered": 1, 00:23:39.893 "num_base_bdevs_operational": 1, 00:23:39.893 "base_bdevs_list": [ 00:23:39.893 { 00:23:39.893 "name": null, 00:23:39.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.893 "is_configured": false, 00:23:39.893 "data_offset": 0, 00:23:39.893 "data_size": 65536 00:23:39.893 }, 00:23:39.893 { 00:23:39.893 "name": "BaseBdev2", 00:23:39.893 "uuid": "5d8b26a4-3bfd-5515-8066-838207321548", 00:23:39.893 "is_configured": true, 00:23:39.893 "data_offset": 0, 00:23:39.893 "data_size": 65536 00:23:39.893 } 00:23:39.893 ] 00:23:39.893 }' 00:23:39.893 15:17:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:39.893 15:17:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.152 15:17:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:40.152 [2024-07-23 15:17:35.505442] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:40.152 [2024-07-23 15:17:35.509804] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d05e10 00:23:40.152 [2024-07-23 15:17:35.511952] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:40.152 15:17:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:23:41.530 15:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.530 15:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:41.530 15:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:41.530 15:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:41.530 15:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:41.530 15:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.530 15:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.530 15:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:41.530 "name": "raid_bdev1", 00:23:41.530 "uuid": "1ad85933-e657-4a15-a123-122cb704a672", 00:23:41.530 "strip_size_kb": 0, 00:23:41.530 "state": "online", 00:23:41.530 "raid_level": "raid1", 00:23:41.530 "superblock": false, 00:23:41.530 "num_base_bdevs": 2, 00:23:41.530 "num_base_bdevs_discovered": 2, 00:23:41.530 "num_base_bdevs_operational": 2, 00:23:41.530 "process": { 00:23:41.530 "type": "rebuild", 00:23:41.530 "target": "spare", 00:23:41.530 "progress": { 00:23:41.530 "blocks": 24576, 00:23:41.530 "percent": 37 00:23:41.530 } 00:23:41.530 }, 00:23:41.530 "base_bdevs_list": [ 00:23:41.530 { 00:23:41.530 "name": "spare", 00:23:41.530 "uuid": "db361d42-63a6-5bcd-8fdc-cb11945433ff", 00:23:41.530 "is_configured": true, 00:23:41.530 "data_offset": 0, 00:23:41.530 "data_size": 65536 00:23:41.530 }, 00:23:41.530 { 00:23:41.530 "name": "BaseBdev2", 00:23:41.530 "uuid": "5d8b26a4-3bfd-5515-8066-838207321548", 00:23:41.530 "is_configured": true, 00:23:41.530 "data_offset": 0, 00:23:41.530 "data_size": 65536 00:23:41.530 } 00:23:41.530 ] 00:23:41.530 }' 00:23:41.530 15:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:41.530 15:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:41.530 15:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:41.530 15:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.530 15:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:41.789 [2024-07-23 15:17:37.027132] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:41.789 [2024-07-23 15:17:37.122218] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:41.789 [2024-07-23 15:17:37.122290] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.789 [2024-07-23 15:17:37.122310] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:41.789 [2024-07-23 15:17:37.122319] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:41.789 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:41.789 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:41.789 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:41.789 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:41.789 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:41.789 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:41.789 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.789 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.789 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.789 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:41.789 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.789 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.047 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:42.047 "name": "raid_bdev1", 00:23:42.047 "uuid": "1ad85933-e657-4a15-a123-122cb704a672", 00:23:42.047 "strip_size_kb": 0, 00:23:42.047 "state": "online", 00:23:42.047 "raid_level": "raid1", 00:23:42.047 "superblock": false, 00:23:42.047 "num_base_bdevs": 2, 00:23:42.047 "num_base_bdevs_discovered": 1, 00:23:42.047 "num_base_bdevs_operational": 1, 00:23:42.047 "base_bdevs_list": [ 00:23:42.047 { 00:23:42.047 "name": null, 00:23:42.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.047 "is_configured": false, 00:23:42.047 "data_offset": 0, 00:23:42.047 "data_size": 65536 00:23:42.047 }, 00:23:42.047 { 00:23:42.047 "name": "BaseBdev2", 00:23:42.047 "uuid": "5d8b26a4-3bfd-5515-8066-838207321548", 00:23:42.047 "is_configured": true, 00:23:42.047 "data_offset": 0, 00:23:42.047 "data_size": 65536 00:23:42.047 } 00:23:42.047 ] 00:23:42.047 }' 00:23:42.047 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:42.047 15:17:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.305 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:42.306 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:42.306 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:42.306 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 
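The verify_raid_bdev_state/verify_raid_bdev_process helpers seen above reduce to a few jq probes against bdev_raid_get_bdevs: after the spare is removed mid-rebuild, raid_bdev1 must still report online raid1 with a single discovered base bdev and no rebuild process. A sketch of the same checks (names and socket path as in the log):

  tmp=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  echo "$tmp" | jq -r '.state'                      # expected: online
  echo "$tmp" | jq -r '.num_base_bdevs_discovered'  # expected: 1
  echo "$tmp" | jq -r '.process.type // "none"'     # expected: none (rebuild was aborted)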
00:23:42.306 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:42.306 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.306 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.564 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:42.564 "name": "raid_bdev1", 00:23:42.564 "uuid": "1ad85933-e657-4a15-a123-122cb704a672", 00:23:42.564 "strip_size_kb": 0, 00:23:42.564 "state": "online", 00:23:42.564 "raid_level": "raid1", 00:23:42.564 "superblock": false, 00:23:42.564 "num_base_bdevs": 2, 00:23:42.564 "num_base_bdevs_discovered": 1, 00:23:42.564 "num_base_bdevs_operational": 1, 00:23:42.564 "base_bdevs_list": [ 00:23:42.564 { 00:23:42.564 "name": null, 00:23:42.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.564 "is_configured": false, 00:23:42.564 "data_offset": 0, 00:23:42.564 "data_size": 65536 00:23:42.564 }, 00:23:42.564 { 00:23:42.564 "name": "BaseBdev2", 00:23:42.564 "uuid": "5d8b26a4-3bfd-5515-8066-838207321548", 00:23:42.564 "is_configured": true, 00:23:42.564 "data_offset": 0, 00:23:42.564 "data_size": 65536 00:23:42.564 } 00:23:42.564 ] 00:23:42.564 }' 00:23:42.564 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:42.564 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:42.564 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:42.564 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:42.564 15:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:42.823 [2024-07-23 15:17:38.039360] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:42.823 [2024-07-23 15:17:38.043689] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d05ee0 00:23:42.823 [2024-07-23 15:17:38.045945] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:42.823 15:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:43.760 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.760 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:43.760 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:43.760 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:43.760 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:43.760 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.760 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:44.018 "name": "raid_bdev1", 00:23:44.018 "uuid": "1ad85933-e657-4a15-a123-122cb704a672", 00:23:44.018 "strip_size_kb": 0, 00:23:44.018 
"state": "online", 00:23:44.018 "raid_level": "raid1", 00:23:44.018 "superblock": false, 00:23:44.018 "num_base_bdevs": 2, 00:23:44.018 "num_base_bdevs_discovered": 2, 00:23:44.018 "num_base_bdevs_operational": 2, 00:23:44.018 "process": { 00:23:44.018 "type": "rebuild", 00:23:44.018 "target": "spare", 00:23:44.018 "progress": { 00:23:44.018 "blocks": 24576, 00:23:44.018 "percent": 37 00:23:44.018 } 00:23:44.018 }, 00:23:44.018 "base_bdevs_list": [ 00:23:44.018 { 00:23:44.018 "name": "spare", 00:23:44.018 "uuid": "db361d42-63a6-5bcd-8fdc-cb11945433ff", 00:23:44.018 "is_configured": true, 00:23:44.018 "data_offset": 0, 00:23:44.018 "data_size": 65536 00:23:44.018 }, 00:23:44.018 { 00:23:44.018 "name": "BaseBdev2", 00:23:44.018 "uuid": "5d8b26a4-3bfd-5515-8066-838207321548", 00:23:44.018 "is_configured": true, 00:23:44.018 "data_offset": 0, 00:23:44.018 "data_size": 65536 00:23:44.018 } 00:23:44.018 ] 00:23:44.018 }' 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=579 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.018 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.276 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:44.276 "name": "raid_bdev1", 00:23:44.276 "uuid": "1ad85933-e657-4a15-a123-122cb704a672", 00:23:44.276 "strip_size_kb": 0, 00:23:44.276 "state": "online", 00:23:44.276 "raid_level": "raid1", 00:23:44.276 "superblock": false, 00:23:44.276 "num_base_bdevs": 2, 00:23:44.276 "num_base_bdevs_discovered": 2, 00:23:44.276 "num_base_bdevs_operational": 2, 00:23:44.276 "process": { 00:23:44.276 "type": "rebuild", 00:23:44.276 "target": "spare", 00:23:44.276 "progress": { 00:23:44.276 "blocks": 28672, 00:23:44.276 "percent": 43 00:23:44.276 } 00:23:44.276 }, 00:23:44.276 "base_bdevs_list": [ 00:23:44.276 { 
00:23:44.276 "name": "spare", 00:23:44.276 "uuid": "db361d42-63a6-5bcd-8fdc-cb11945433ff", 00:23:44.276 "is_configured": true, 00:23:44.276 "data_offset": 0, 00:23:44.276 "data_size": 65536 00:23:44.276 }, 00:23:44.276 { 00:23:44.276 "name": "BaseBdev2", 00:23:44.276 "uuid": "5d8b26a4-3bfd-5515-8066-838207321548", 00:23:44.276 "is_configured": true, 00:23:44.276 "data_offset": 0, 00:23:44.276 "data_size": 65536 00:23:44.276 } 00:23:44.276 ] 00:23:44.276 }' 00:23:44.276 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:44.276 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.276 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:44.276 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.276 15:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:45.211 15:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:45.211 15:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:45.211 15:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:45.211 15:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:45.211 15:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:45.211 15:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:45.211 15:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.211 15:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.470 15:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:45.470 "name": "raid_bdev1", 00:23:45.470 "uuid": "1ad85933-e657-4a15-a123-122cb704a672", 00:23:45.470 "strip_size_kb": 0, 00:23:45.470 "state": "online", 00:23:45.470 "raid_level": "raid1", 00:23:45.470 "superblock": false, 00:23:45.470 "num_base_bdevs": 2, 00:23:45.470 "num_base_bdevs_discovered": 2, 00:23:45.470 "num_base_bdevs_operational": 2, 00:23:45.470 "process": { 00:23:45.470 "type": "rebuild", 00:23:45.470 "target": "spare", 00:23:45.470 "progress": { 00:23:45.470 "blocks": 53248, 00:23:45.470 "percent": 81 00:23:45.470 } 00:23:45.470 }, 00:23:45.470 "base_bdevs_list": [ 00:23:45.470 { 00:23:45.470 "name": "spare", 00:23:45.470 "uuid": "db361d42-63a6-5bcd-8fdc-cb11945433ff", 00:23:45.470 "is_configured": true, 00:23:45.470 "data_offset": 0, 00:23:45.470 "data_size": 65536 00:23:45.470 }, 00:23:45.470 { 00:23:45.470 "name": "BaseBdev2", 00:23:45.470 "uuid": "5d8b26a4-3bfd-5515-8066-838207321548", 00:23:45.470 "is_configured": true, 00:23:45.470 "data_offset": 0, 00:23:45.470 "data_size": 65536 00:23:45.470 } 00:23:45.470 ] 00:23:45.470 }' 00:23:45.470 15:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:45.470 15:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:45.470 15:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:45.470 15:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e 
]] 00:23:45.470 15:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:46.037 [2024-07-23 15:17:41.264393] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:46.037 [2024-07-23 15:17:41.264487] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:46.037 [2024-07-23 15:17:41.264533] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.602 15:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:46.602 15:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:46.602 15:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:46.602 15:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:46.602 15:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:46.602 15:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:46.602 15:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.602 15:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.602 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:46.602 "name": "raid_bdev1", 00:23:46.602 "uuid": "1ad85933-e657-4a15-a123-122cb704a672", 00:23:46.602 "strip_size_kb": 0, 00:23:46.602 "state": "online", 00:23:46.602 "raid_level": "raid1", 00:23:46.602 "superblock": false, 00:23:46.602 "num_base_bdevs": 2, 00:23:46.602 "num_base_bdevs_discovered": 2, 00:23:46.602 "num_base_bdevs_operational": 2, 00:23:46.602 "base_bdevs_list": [ 00:23:46.602 { 00:23:46.602 "name": "spare", 00:23:46.602 "uuid": "db361d42-63a6-5bcd-8fdc-cb11945433ff", 00:23:46.602 "is_configured": true, 00:23:46.602 "data_offset": 0, 00:23:46.602 "data_size": 65536 00:23:46.602 }, 00:23:46.602 { 00:23:46.602 "name": "BaseBdev2", 00:23:46.602 "uuid": "5d8b26a4-3bfd-5515-8066-838207321548", 00:23:46.602 "is_configured": true, 00:23:46.602 "data_offset": 0, 00:23:46.602 "data_size": 65536 00:23:46.602 } 00:23:46.602 ] 00:23:46.602 }' 00:23:46.602 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:46.602 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:46.602 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:46.861 "name": "raid_bdev1", 00:23:46.861 "uuid": "1ad85933-e657-4a15-a123-122cb704a672", 00:23:46.861 "strip_size_kb": 0, 00:23:46.861 "state": "online", 00:23:46.861 "raid_level": "raid1", 00:23:46.861 "superblock": false, 00:23:46.861 "num_base_bdevs": 2, 00:23:46.861 "num_base_bdevs_discovered": 2, 00:23:46.861 "num_base_bdevs_operational": 2, 00:23:46.861 "base_bdevs_list": [ 00:23:46.861 { 00:23:46.861 "name": "spare", 00:23:46.861 "uuid": "db361d42-63a6-5bcd-8fdc-cb11945433ff", 00:23:46.861 "is_configured": true, 00:23:46.861 "data_offset": 0, 00:23:46.861 "data_size": 65536 00:23:46.861 }, 00:23:46.861 { 00:23:46.861 "name": "BaseBdev2", 00:23:46.861 "uuid": "5d8b26a4-3bfd-5515-8066-838207321548", 00:23:46.861 "is_configured": true, 00:23:46.861 "data_offset": 0, 00:23:46.861 "data_size": 65536 00:23:46.861 } 00:23:46.861 ] 00:23:46.861 }' 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.861 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.120 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:47.120 "name": "raid_bdev1", 00:23:47.120 "uuid": "1ad85933-e657-4a15-a123-122cb704a672", 00:23:47.120 "strip_size_kb": 0, 00:23:47.120 "state": "online", 00:23:47.120 "raid_level": "raid1", 00:23:47.120 "superblock": false, 00:23:47.120 "num_base_bdevs": 2, 00:23:47.120 "num_base_bdevs_discovered": 2, 00:23:47.120 "num_base_bdevs_operational": 2, 00:23:47.120 "base_bdevs_list": [ 00:23:47.120 { 00:23:47.120 "name": "spare", 00:23:47.120 "uuid": 
"db361d42-63a6-5bcd-8fdc-cb11945433ff", 00:23:47.120 "is_configured": true, 00:23:47.120 "data_offset": 0, 00:23:47.120 "data_size": 65536 00:23:47.120 }, 00:23:47.120 { 00:23:47.120 "name": "BaseBdev2", 00:23:47.120 "uuid": "5d8b26a4-3bfd-5515-8066-838207321548", 00:23:47.120 "is_configured": true, 00:23:47.120 "data_offset": 0, 00:23:47.120 "data_size": 65536 00:23:47.120 } 00:23:47.120 ] 00:23:47.120 }' 00:23:47.120 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:47.120 15:17:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.687 15:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:47.687 [2024-07-23 15:17:43.017872] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:47.687 [2024-07-23 15:17:43.017919] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:47.687 [2024-07-23 15:17:43.018009] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:47.687 [2024-07-23 15:17:43.018086] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:47.687 [2024-07-23 15:17:43.018103] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:23:47.687 15:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.687 15:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:23:47.945 15:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:23:47.945 15:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:23:47.946 15:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:23:47.946 15:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:47.946 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:47.946 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:47.946 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:47.946 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:47.946 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:47.946 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:47.946 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:47.946 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:47.946 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:48.213 /dev/nbd0 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@867 -- # local i 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:48.213 1+0 records in 00:23:48.213 1+0 records out 00:23:48.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236571 s, 17.3 MB/s 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:48.213 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:48.483 /dev/nbd1 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:48.484 1+0 records in 00:23:48.484 1+0 records out 00:23:48.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322889 s, 12.7 MB/s 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:23:48.484 15:17:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:48.484 15:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:48.742 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:48.742 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:48.742 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:48.742 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:48.742 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:48.742 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:48.742 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:48.742 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:48.742 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:48.742 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 107342 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@948 -- # '[' -z 107342 ']' 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 107342 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107342 00:23:49.001 killing process with pid 107342 00:23:49.001 Received shutdown signal, test time was about 60.000000 seconds 00:23:49.001 00:23:49.001 Latency(us) 00:23:49.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.001 =================================================================================================================== 00:23:49.001 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:49.001 15:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107342' 00:23:49.002 15:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 107342 00:23:49.002 [2024-07-23 15:17:44.259443] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:49.002 15:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 107342 00:23:49.002 [2024-07-23 15:17:44.290427] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:23:49.261 00:23:49.261 real 0m18.840s 00:23:49.261 user 0m23.614s 00:23:49.261 sys 0m4.579s 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.261 ************************************ 00:23:49.261 END TEST raid_rebuild_test 00:23:49.261 ************************************ 00:23:49.261 15:17:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:49.261 15:17:44 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:23:49.261 15:17:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:23:49.261 15:17:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:49.261 15:17:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:49.261 ************************************ 00:23:49.261 START TEST raid_rebuild_test_sb 00:23:49.261 ************************************ 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:23:49.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=107827 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 107827 /var/tmp/spdk-raid.sock 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 107827 ']' 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.261 15:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.261 [2024-07-23 15:17:44.676997] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
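The superblock variant runs the same flow against a standalone bdevperf application that owns the RPC socket; the script only proceeds once that socket is listening. A stand-alone sketch reusing the exact bdevperf flags recorded above (paths as in the log; the polling loop is an illustrative stand-in for waitforlisten, not the helper's actual implementation):

  build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
      -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # poll the UNIX-domain RPC socket until it answers before issuing further rpc.py calls
  until scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done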
00:23:49.261 [2024-07-23 15:17:44.677514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107827 ] 00:23:49.261 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:49.261 Zero copy mechanism will not be used. 00:23:49.520 [2024-07-23 15:17:44.831070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.520 [2024-07-23 15:17:44.878840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.520 [2024-07-23 15:17:44.923606] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:50.456 15:17:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.456 15:17:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:23:50.456 15:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:50.456 15:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:50.456 BaseBdev1_malloc 00:23:50.456 15:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:50.714 [2024-07-23 15:17:46.002846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:50.714 [2024-07-23 15:17:46.002925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.714 [2024-07-23 15:17:46.002963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:23:50.714 [2024-07-23 15:17:46.002983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.715 [2024-07-23 15:17:46.005660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.715 [2024-07-23 15:17:46.005815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:50.715 BaseBdev1 00:23:50.715 15:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:50.715 15:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:50.974 BaseBdev2_malloc 00:23:50.974 15:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:50.974 [2024-07-23 15:17:46.360379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:50.974 [2024-07-23 15:17:46.360452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.974 [2024-07-23 15:17:46.360483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:23:50.974 [2024-07-23 15:17:46.360495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.974 [2024-07-23 15:17:46.363075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.974 BaseBdev2 00:23:50.974 [2024-07-23 15:17:46.363224] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:50.974 15:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:51.233 spare_malloc 00:23:51.233 15:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:51.492 spare_delay 00:23:51.492 15:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:51.751 [2024-07-23 15:17:46.991721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:51.751 [2024-07-23 15:17:46.991826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.751 [2024-07-23 15:17:46.991861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:23:51.751 [2024-07-23 15:17:46.991873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.751 [2024-07-23 15:17:46.994369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.751 [2024-07-23 15:17:46.994407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:51.751 spare 00:23:51.751 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:52.010 [2024-07-23 15:17:47.215868] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:52.010 [2024-07-23 15:17:47.218082] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:52.010 [2024-07-23 15:17:47.218281] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:23:52.010 [2024-07-23 15:17:47.218297] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:52.010 [2024-07-23 15:17:47.218451] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:23:52.010 [2024-07-23 15:17:47.218783] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:23:52.010 [2024-07-23 15:17:47.218822] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:23:52.010 [2024-07-23 15:17:47.218951] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:52.010 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:52.010 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:52.010 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:52.010 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:52.010 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:52.010 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:52.010 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:52.010 
15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:52.010 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:52.010 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:52.010 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.010 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.269 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:52.269 "name": "raid_bdev1", 00:23:52.269 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:23:52.269 "strip_size_kb": 0, 00:23:52.269 "state": "online", 00:23:52.269 "raid_level": "raid1", 00:23:52.269 "superblock": true, 00:23:52.269 "num_base_bdevs": 2, 00:23:52.269 "num_base_bdevs_discovered": 2, 00:23:52.269 "num_base_bdevs_operational": 2, 00:23:52.269 "base_bdevs_list": [ 00:23:52.269 { 00:23:52.269 "name": "BaseBdev1", 00:23:52.269 "uuid": "0e464100-0ac0-55e9-91d9-9bc18f177232", 00:23:52.269 "is_configured": true, 00:23:52.269 "data_offset": 2048, 00:23:52.269 "data_size": 63488 00:23:52.269 }, 00:23:52.269 { 00:23:52.269 "name": "BaseBdev2", 00:23:52.269 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:23:52.269 "is_configured": true, 00:23:52.269 "data_offset": 2048, 00:23:52.269 "data_size": 63488 00:23:52.269 } 00:23:52.269 ] 00:23:52.269 }' 00:23:52.269 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:52.269 15:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.527 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:52.527 15:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:52.786 [2024-07-23 15:17:48.068238] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:52.786 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:23:52.786 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.786 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:53.045 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:23:53.045 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:23:53.045 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:23:53.045 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:23:53.045 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:53.045 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:53.045 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:53.045 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:53.045 15:17:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:53.045 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:53.045 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:53.045 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:53.045 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:53.045 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:53.305 [2024-07-23 15:17:48.488135] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:23:53.305 /dev/nbd0 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:53.305 1+0 records in 00:23:53.305 1+0 records out 00:23:53.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376437 s, 10.9 MB/s 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:23:53.305 15:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:58.573 63488+0 records in 00:23:58.573 63488+0 records out 00:23:58.573 32505856 bytes (33 MB, 31 MiB) copied, 4.89822 s, 6.6 MB/s 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 
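With the superblock enabled, each 65536-block base bdev reserves data_offset = 2048 blocks for raid metadata, leaving data_size = 63488 blocks; the fill therefore writes 63488 x 512 = 32,505,856 bytes (31 MiB), consistent with the dd summary above. A hand-run equivalent of the fill-and-detach step:

  # write only the usable data area; count equals the data_size reported by bdev_raid_get_bdevs
  dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
  scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0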
00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:58.573 [2024-07-23 15:17:53.710507] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:58.573 [2024-07-23 15:17:53.943953] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.573 15:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.831 15:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:58.831 "name": "raid_bdev1", 00:23:58.831 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:23:58.831 "strip_size_kb": 0, 
00:23:58.831 "state": "online", 00:23:58.831 "raid_level": "raid1", 00:23:58.831 "superblock": true, 00:23:58.831 "num_base_bdevs": 2, 00:23:58.831 "num_base_bdevs_discovered": 1, 00:23:58.831 "num_base_bdevs_operational": 1, 00:23:58.831 "base_bdevs_list": [ 00:23:58.831 { 00:23:58.831 "name": null, 00:23:58.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.831 "is_configured": false, 00:23:58.831 "data_offset": 2048, 00:23:58.831 "data_size": 63488 00:23:58.831 }, 00:23:58.831 { 00:23:58.831 "name": "BaseBdev2", 00:23:58.831 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:23:58.831 "is_configured": true, 00:23:58.831 "data_offset": 2048, 00:23:58.831 "data_size": 63488 00:23:58.831 } 00:23:58.831 ] 00:23:58.831 }' 00:23:58.831 15:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:58.831 15:17:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:59.091 15:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:59.351 [2024-07-23 15:17:54.636093] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:59.351 [2024-07-23 15:17:54.640442] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000c8fdd0 00:23:59.351 [2024-07-23 15:17:54.642619] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:59.351 15:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:24:00.287 15:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:00.287 15:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:00.287 15:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:00.287 15:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:00.287 15:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:00.287 15:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.287 15:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.546 15:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:00.546 "name": "raid_bdev1", 00:24:00.546 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:00.546 "strip_size_kb": 0, 00:24:00.546 "state": "online", 00:24:00.546 "raid_level": "raid1", 00:24:00.546 "superblock": true, 00:24:00.546 "num_base_bdevs": 2, 00:24:00.546 "num_base_bdevs_discovered": 2, 00:24:00.546 "num_base_bdevs_operational": 2, 00:24:00.546 "process": { 00:24:00.546 "type": "rebuild", 00:24:00.546 "target": "spare", 00:24:00.546 "progress": { 00:24:00.546 "blocks": 24576, 00:24:00.546 "percent": 38 00:24:00.546 } 00:24:00.546 }, 00:24:00.546 "base_bdevs_list": [ 00:24:00.546 { 00:24:00.546 "name": "spare", 00:24:00.546 "uuid": "dcaa7422-b9ff-58d5-b1f9-48542932a83d", 00:24:00.546 "is_configured": true, 00:24:00.546 "data_offset": 2048, 00:24:00.546 "data_size": 63488 00:24:00.546 }, 00:24:00.546 { 00:24:00.546 "name": "BaseBdev2", 00:24:00.546 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:00.546 "is_configured": true, 00:24:00.546 
"data_offset": 2048, 00:24:00.546 "data_size": 63488 00:24:00.546 } 00:24:00.546 ] 00:24:00.546 }' 00:24:00.546 15:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:00.546 15:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:00.546 15:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:00.546 15:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:00.546 15:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:00.805 [2024-07-23 15:17:56.149879] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:00.805 [2024-07-23 15:17:56.152188] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:00.805 [2024-07-23 15:17:56.152254] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:00.805 [2024-07-23 15:17:56.152275] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:00.805 [2024-07-23 15:17:56.152290] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:00.805 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:00.805 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:00.805 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:00.805 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:00.805 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:00.805 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:00.805 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:00.805 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:00.805 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:00.805 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:00.805 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.805 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.064 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:01.064 "name": "raid_bdev1", 00:24:01.064 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:01.064 "strip_size_kb": 0, 00:24:01.064 "state": "online", 00:24:01.064 "raid_level": "raid1", 00:24:01.064 "superblock": true, 00:24:01.064 "num_base_bdevs": 2, 00:24:01.064 "num_base_bdevs_discovered": 1, 00:24:01.064 "num_base_bdevs_operational": 1, 00:24:01.064 "base_bdevs_list": [ 00:24:01.064 { 00:24:01.064 "name": null, 00:24:01.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.064 "is_configured": false, 00:24:01.064 "data_offset": 2048, 00:24:01.064 "data_size": 63488 00:24:01.064 }, 00:24:01.064 { 00:24:01.064 "name": "BaseBdev2", 00:24:01.064 
"uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:01.064 "is_configured": true, 00:24:01.064 "data_offset": 2048, 00:24:01.064 "data_size": 63488 00:24:01.064 } 00:24:01.064 ] 00:24:01.064 }' 00:24:01.064 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:01.064 15:17:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:01.322 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:01.322 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:01.322 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:01.322 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:01.322 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:01.322 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.322 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.580 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:01.580 "name": "raid_bdev1", 00:24:01.580 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:01.580 "strip_size_kb": 0, 00:24:01.580 "state": "online", 00:24:01.580 "raid_level": "raid1", 00:24:01.580 "superblock": true, 00:24:01.580 "num_base_bdevs": 2, 00:24:01.580 "num_base_bdevs_discovered": 1, 00:24:01.580 "num_base_bdevs_operational": 1, 00:24:01.580 "base_bdevs_list": [ 00:24:01.580 { 00:24:01.580 "name": null, 00:24:01.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.580 "is_configured": false, 00:24:01.580 "data_offset": 2048, 00:24:01.580 "data_size": 63488 00:24:01.580 }, 00:24:01.580 { 00:24:01.580 "name": "BaseBdev2", 00:24:01.580 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:01.580 "is_configured": true, 00:24:01.580 "data_offset": 2048, 00:24:01.580 "data_size": 63488 00:24:01.580 } 00:24:01.580 ] 00:24:01.580 }' 00:24:01.580 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:01.580 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:01.580 15:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:01.839 15:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:01.840 15:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:01.840 [2024-07-23 15:17:57.241457] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:01.840 [2024-07-23 15:17:57.245865] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000c8fea0 00:24:01.840 [2024-07-23 15:17:57.248011] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:01.840 15:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_name=raid_bdev1 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:03.216 "name": "raid_bdev1", 00:24:03.216 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:03.216 "strip_size_kb": 0, 00:24:03.216 "state": "online", 00:24:03.216 "raid_level": "raid1", 00:24:03.216 "superblock": true, 00:24:03.216 "num_base_bdevs": 2, 00:24:03.216 "num_base_bdevs_discovered": 2, 00:24:03.216 "num_base_bdevs_operational": 2, 00:24:03.216 "process": { 00:24:03.216 "type": "rebuild", 00:24:03.216 "target": "spare", 00:24:03.216 "progress": { 00:24:03.216 "blocks": 24576, 00:24:03.216 "percent": 38 00:24:03.216 } 00:24:03.216 }, 00:24:03.216 "base_bdevs_list": [ 00:24:03.216 { 00:24:03.216 "name": "spare", 00:24:03.216 "uuid": "dcaa7422-b9ff-58d5-b1f9-48542932a83d", 00:24:03.216 "is_configured": true, 00:24:03.216 "data_offset": 2048, 00:24:03.216 "data_size": 63488 00:24:03.216 }, 00:24:03.216 { 00:24:03.216 "name": "BaseBdev2", 00:24:03.216 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:03.216 "is_configured": true, 00:24:03.216 "data_offset": 2048, 00:24:03.216 "data_size": 63488 00:24:03.216 } 00:24:03.216 ] 00:24:03.216 }' 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:24:03.216 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=598 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.216 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.475 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:03.475 "name": "raid_bdev1", 00:24:03.475 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:03.475 "strip_size_kb": 0, 00:24:03.475 "state": "online", 00:24:03.475 "raid_level": "raid1", 00:24:03.475 "superblock": true, 00:24:03.475 "num_base_bdevs": 2, 00:24:03.475 "num_base_bdevs_discovered": 2, 00:24:03.475 "num_base_bdevs_operational": 2, 00:24:03.475 "process": { 00:24:03.475 "type": "rebuild", 00:24:03.475 "target": "spare", 00:24:03.475 "progress": { 00:24:03.475 "blocks": 28672, 00:24:03.475 "percent": 45 00:24:03.475 } 00:24:03.475 }, 00:24:03.475 "base_bdevs_list": [ 00:24:03.475 { 00:24:03.475 "name": "spare", 00:24:03.475 "uuid": "dcaa7422-b9ff-58d5-b1f9-48542932a83d", 00:24:03.475 "is_configured": true, 00:24:03.475 "data_offset": 2048, 00:24:03.475 "data_size": 63488 00:24:03.475 }, 00:24:03.475 { 00:24:03.475 "name": "BaseBdev2", 00:24:03.475 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:03.475 "is_configured": true, 00:24:03.475 "data_offset": 2048, 00:24:03.475 "data_size": 63488 00:24:03.475 } 00:24:03.475 ] 00:24:03.475 }' 00:24:03.475 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:03.475 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:03.475 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:03.475 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:03.475 15:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:04.410 15:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:04.410 15:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.410 15:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:04.410 15:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:04.410 15:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:04.410 15:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:04.410 15:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.410 15:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.669 15:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:04.669 "name": "raid_bdev1", 00:24:04.669 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:04.669 "strip_size_kb": 0, 00:24:04.669 "state": "online", 00:24:04.669 "raid_level": "raid1", 00:24:04.669 "superblock": true, 00:24:04.669 "num_base_bdevs": 2, 00:24:04.669 
"num_base_bdevs_discovered": 2, 00:24:04.669 "num_base_bdevs_operational": 2, 00:24:04.669 "process": { 00:24:04.669 "type": "rebuild", 00:24:04.669 "target": "spare", 00:24:04.669 "progress": { 00:24:04.669 "blocks": 55296, 00:24:04.669 "percent": 87 00:24:04.669 } 00:24:04.669 }, 00:24:04.669 "base_bdevs_list": [ 00:24:04.669 { 00:24:04.669 "name": "spare", 00:24:04.669 "uuid": "dcaa7422-b9ff-58d5-b1f9-48542932a83d", 00:24:04.669 "is_configured": true, 00:24:04.669 "data_offset": 2048, 00:24:04.669 "data_size": 63488 00:24:04.669 }, 00:24:04.669 { 00:24:04.669 "name": "BaseBdev2", 00:24:04.669 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:04.669 "is_configured": true, 00:24:04.669 "data_offset": 2048, 00:24:04.669 "data_size": 63488 00:24:04.669 } 00:24:04.669 ] 00:24:04.669 }' 00:24:04.669 15:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:04.669 15:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.669 15:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:04.669 15:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.669 15:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:05.237 [2024-07-23 15:18:00.366030] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:05.237 [2024-07-23 15:18:00.366119] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:05.237 [2024-07-23 15:18:00.366220] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.804 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:05.804 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.804 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:05.804 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:05.804 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:05.804 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:05.804 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.804 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.063 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:06.063 "name": "raid_bdev1", 00:24:06.063 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:06.063 "strip_size_kb": 0, 00:24:06.063 "state": "online", 00:24:06.063 "raid_level": "raid1", 00:24:06.063 "superblock": true, 00:24:06.063 "num_base_bdevs": 2, 00:24:06.063 "num_base_bdevs_discovered": 2, 00:24:06.063 "num_base_bdevs_operational": 2, 00:24:06.063 "base_bdevs_list": [ 00:24:06.063 { 00:24:06.063 "name": "spare", 00:24:06.063 "uuid": "dcaa7422-b9ff-58d5-b1f9-48542932a83d", 00:24:06.063 "is_configured": true, 00:24:06.063 "data_offset": 2048, 00:24:06.063 "data_size": 63488 00:24:06.063 }, 00:24:06.063 { 00:24:06.063 "name": "BaseBdev2", 00:24:06.063 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:06.063 
"is_configured": true, 00:24:06.063 "data_offset": 2048, 00:24:06.063 "data_size": 63488 00:24:06.063 } 00:24:06.063 ] 00:24:06.063 }' 00:24:06.063 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:06.063 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:06.063 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:06.063 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:06.063 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:24:06.063 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:06.063 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:06.063 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:06.063 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:06.063 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:06.063 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.063 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:06.321 "name": "raid_bdev1", 00:24:06.321 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:06.321 "strip_size_kb": 0, 00:24:06.321 "state": "online", 00:24:06.321 "raid_level": "raid1", 00:24:06.321 "superblock": true, 00:24:06.321 "num_base_bdevs": 2, 00:24:06.321 "num_base_bdevs_discovered": 2, 00:24:06.321 "num_base_bdevs_operational": 2, 00:24:06.321 "base_bdevs_list": [ 00:24:06.321 { 00:24:06.321 "name": "spare", 00:24:06.321 "uuid": "dcaa7422-b9ff-58d5-b1f9-48542932a83d", 00:24:06.321 "is_configured": true, 00:24:06.321 "data_offset": 2048, 00:24:06.321 "data_size": 63488 00:24:06.321 }, 00:24:06.321 { 00:24:06.321 "name": "BaseBdev2", 00:24:06.321 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:06.321 "is_configured": true, 00:24:06.321 "data_offset": 2048, 00:24:06.321 "data_size": 63488 00:24:06.321 } 00:24:06.321 ] 00:24:06.321 }' 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:06.321 15:18:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.321 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.579 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:06.579 "name": "raid_bdev1", 00:24:06.579 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:06.579 "strip_size_kb": 0, 00:24:06.579 "state": "online", 00:24:06.579 "raid_level": "raid1", 00:24:06.579 "superblock": true, 00:24:06.579 "num_base_bdevs": 2, 00:24:06.579 "num_base_bdevs_discovered": 2, 00:24:06.579 "num_base_bdevs_operational": 2, 00:24:06.579 "base_bdevs_list": [ 00:24:06.579 { 00:24:06.579 "name": "spare", 00:24:06.579 "uuid": "dcaa7422-b9ff-58d5-b1f9-48542932a83d", 00:24:06.579 "is_configured": true, 00:24:06.579 "data_offset": 2048, 00:24:06.579 "data_size": 63488 00:24:06.579 }, 00:24:06.579 { 00:24:06.579 "name": "BaseBdev2", 00:24:06.579 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:06.579 "is_configured": true, 00:24:06.579 "data_offset": 2048, 00:24:06.579 "data_size": 63488 00:24:06.579 } 00:24:06.579 ] 00:24:06.579 }' 00:24:06.579 15:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:06.579 15:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.837 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:07.095 [2024-07-23 15:18:02.459589] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:07.095 [2024-07-23 15:18:02.459632] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:07.095 [2024-07-23 15:18:02.459736] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:07.095 [2024-07-23 15:18:02.459821] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:07.095 [2024-07-23 15:18:02.459838] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:24:07.095 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:24:07.095 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.354 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:07.354 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:07.354 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:24:07.354 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' 
'/dev/nbd0 /dev/nbd1' 00:24:07.354 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:07.354 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:07.354 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:07.354 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:07.354 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:07.354 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:07.354 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:07.354 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:07.354 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:07.622 /dev/nbd0 00:24:07.622 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:07.622 15:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:07.622 15:18:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:07.622 15:18:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:24:07.622 15:18:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:07.622 15:18:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:07.622 15:18:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:07.622 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:24:07.622 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:07.622 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:07.622 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:07.622 1+0 records in 00:24:07.622 1+0 records out 00:24:07.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00349496 s, 1.2 MB/s 00:24:07.622 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:07.622 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:24:07.622 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:07.622 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:07.622 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:24:07.622 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:07.622 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:07.622 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:07.881 /dev/nbd1 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:07.881 
15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:07.881 1+0 records in 00:24:07.881 1+0 records out 00:24:07.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384493 s, 10.7 MB/s 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:07.881 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:08.139 15:18:03 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:08.139 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:08.397 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:08.397 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:08.397 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:08.397 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:08.397 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:08.397 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:08.397 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:08.397 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:08.397 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:24:08.397 15:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:08.655 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:08.913 [2024-07-23 15:18:04.271075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:08.913 [2024-07-23 15:18:04.271169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:08.913 [2024-07-23 15:18:04.271201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:24:08.913 [2024-07-23 15:18:04.271216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:08.913 [2024-07-23 15:18:04.273717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:08.913 [2024-07-23 15:18:04.273765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:08.913 [2024-07-23 15:18:04.273862] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:08.913 [2024-07-23 15:18:04.273910] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:08.913 [2024-07-23 15:18:04.274063] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:08.913 spare 00:24:08.913 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:08.913 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:08.913 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:08.913 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:08.913 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:08.913 
15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:08.913 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:08.913 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:08.913 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:08.913 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:08.913 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.913 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.193 [2024-07-23 15:18:04.374174] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:24:09.193 [2024-07-23 15:18:04.374223] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:09.193 [2024-07-23 15:18:04.374399] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cae550 00:24:09.193 [2024-07-23 15:18:04.374820] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:24:09.193 [2024-07-23 15:18:04.374847] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:24:09.193 [2024-07-23 15:18:04.374981] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:09.193 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.193 "name": "raid_bdev1", 00:24:09.193 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:09.193 "strip_size_kb": 0, 00:24:09.193 "state": "online", 00:24:09.193 "raid_level": "raid1", 00:24:09.193 "superblock": true, 00:24:09.193 "num_base_bdevs": 2, 00:24:09.193 "num_base_bdevs_discovered": 2, 00:24:09.193 "num_base_bdevs_operational": 2, 00:24:09.193 "base_bdevs_list": [ 00:24:09.193 { 00:24:09.193 "name": "spare", 00:24:09.193 "uuid": "dcaa7422-b9ff-58d5-b1f9-48542932a83d", 00:24:09.193 "is_configured": true, 00:24:09.193 "data_offset": 2048, 00:24:09.193 "data_size": 63488 00:24:09.193 }, 00:24:09.193 { 00:24:09.193 "name": "BaseBdev2", 00:24:09.193 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:09.193 "is_configured": true, 00:24:09.193 "data_offset": 2048, 00:24:09.193 "data_size": 63488 00:24:09.193 } 00:24:09.193 ] 00:24:09.193 }' 00:24:09.193 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.193 15:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:09.452 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:09.452 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:09.452 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:09.452 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:09.452 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:09.452 15:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.452 15:18:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.710 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:09.710 "name": "raid_bdev1", 00:24:09.710 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:09.710 "strip_size_kb": 0, 00:24:09.710 "state": "online", 00:24:09.710 "raid_level": "raid1", 00:24:09.710 "superblock": true, 00:24:09.710 "num_base_bdevs": 2, 00:24:09.710 "num_base_bdevs_discovered": 2, 00:24:09.710 "num_base_bdevs_operational": 2, 00:24:09.710 "base_bdevs_list": [ 00:24:09.710 { 00:24:09.710 "name": "spare", 00:24:09.710 "uuid": "dcaa7422-b9ff-58d5-b1f9-48542932a83d", 00:24:09.710 "is_configured": true, 00:24:09.710 "data_offset": 2048, 00:24:09.710 "data_size": 63488 00:24:09.710 }, 00:24:09.710 { 00:24:09.710 "name": "BaseBdev2", 00:24:09.710 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:09.710 "is_configured": true, 00:24:09.711 "data_offset": 2048, 00:24:09.711 "data_size": 63488 00:24:09.711 } 00:24:09.711 ] 00:24:09.711 }' 00:24:09.711 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:09.711 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:09.711 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:09.711 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:09.711 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.711 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:09.968 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.968 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:09.968 [2024-07-23 15:18:05.367365] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:09.968 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:09.968 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:09.968 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:09.968 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:09.968 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:09.968 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:09.968 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:09.968 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.968 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.968 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.968 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.968 15:18:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.226 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:10.226 "name": "raid_bdev1", 00:24:10.226 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:10.226 "strip_size_kb": 0, 00:24:10.226 "state": "online", 00:24:10.226 "raid_level": "raid1", 00:24:10.226 "superblock": true, 00:24:10.226 "num_base_bdevs": 2, 00:24:10.226 "num_base_bdevs_discovered": 1, 00:24:10.226 "num_base_bdevs_operational": 1, 00:24:10.226 "base_bdevs_list": [ 00:24:10.226 { 00:24:10.226 "name": null, 00:24:10.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.226 "is_configured": false, 00:24:10.226 "data_offset": 2048, 00:24:10.226 "data_size": 63488 00:24:10.226 }, 00:24:10.226 { 00:24:10.226 "name": "BaseBdev2", 00:24:10.226 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:10.226 "is_configured": true, 00:24:10.226 "data_offset": 2048, 00:24:10.226 "data_size": 63488 00:24:10.226 } 00:24:10.226 ] 00:24:10.226 }' 00:24:10.226 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:10.226 15:18:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.483 15:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:10.741 [2024-07-23 15:18:06.139556] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:10.741 [2024-07-23 15:18:06.139770] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:10.741 [2024-07-23 15:18:06.139807] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
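For reference, once the spare's older superblock (seq_number 4 vs the array's 5) is accepted and re-added, a rebuild starts and the test repeatedly queries bdev_raid_get_bdevs, checking the .process fields until they fall back to "none". A minimal standalone poll loop sketching that check (jq filters are taken from the log; the $rpc, $sock and timeout values are assumptions added here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # paths as seen in the log above
  sock=/var/tmp/spdk-raid.sock
  timeout=60                                           # assumed deadline; the test computes its own

  while (( SECONDS < timeout )); do
      info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      # while a rebuild is running: .process.type == "rebuild" and .process.target == "spare"
      if [[ $(jq -r '.process.type // "none"' <<< "$info") == none ]]; then
          break                                        # the process block disappears once the rebuild finishes
      fi
      echo "rebuild target $(jq -r '.process.target' <<< "$info"):" \
           "$(jq -r '.process.progress.percent' <<< "$info")% ($(jq -r '.process.progress.blocks' <<< "$info") blocks)"
      sleep 1
  done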
00:24:10.741 [2024-07-23 15:18:06.139865] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:10.741 [2024-07-23 15:18:06.144094] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cae620 00:24:10.742 [2024-07-23 15:18:06.146288] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:10.742 15:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:24:12.116 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:12.116 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:12.116 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:12.116 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:12.116 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:12.116 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.116 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.116 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:12.116 "name": "raid_bdev1", 00:24:12.116 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:12.116 "strip_size_kb": 0, 00:24:12.116 "state": "online", 00:24:12.116 "raid_level": "raid1", 00:24:12.116 "superblock": true, 00:24:12.116 "num_base_bdevs": 2, 00:24:12.116 "num_base_bdevs_discovered": 2, 00:24:12.116 "num_base_bdevs_operational": 2, 00:24:12.116 "process": { 00:24:12.116 "type": "rebuild", 00:24:12.116 "target": "spare", 00:24:12.116 "progress": { 00:24:12.116 "blocks": 22528, 00:24:12.116 "percent": 35 00:24:12.116 } 00:24:12.116 }, 00:24:12.116 "base_bdevs_list": [ 00:24:12.116 { 00:24:12.116 "name": "spare", 00:24:12.116 "uuid": "dcaa7422-b9ff-58d5-b1f9-48542932a83d", 00:24:12.116 "is_configured": true, 00:24:12.116 "data_offset": 2048, 00:24:12.116 "data_size": 63488 00:24:12.116 }, 00:24:12.116 { 00:24:12.116 "name": "BaseBdev2", 00:24:12.116 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:12.116 "is_configured": true, 00:24:12.116 "data_offset": 2048, 00:24:12.116 "data_size": 63488 00:24:12.116 } 00:24:12.116 ] 00:24:12.116 }' 00:24:12.116 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:12.116 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:12.116 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:12.116 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:12.116 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:12.374 [2024-07-23 15:18:07.581859] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:12.374 [2024-07-23 15:18:07.654890] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:12.374 [2024-07-23 15:18:07.654962] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:12.374 
[2024-07-23 15:18:07.654983] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:12.374 [2024-07-23 15:18:07.654992] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:12.374 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:12.374 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:12.374 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:12.374 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:12.374 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:12.374 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:12.374 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:12.374 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:12.374 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:12.374 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:12.374 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.374 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.633 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:12.633 "name": "raid_bdev1", 00:24:12.633 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:12.633 "strip_size_kb": 0, 00:24:12.633 "state": "online", 00:24:12.633 "raid_level": "raid1", 00:24:12.633 "superblock": true, 00:24:12.633 "num_base_bdevs": 2, 00:24:12.633 "num_base_bdevs_discovered": 1, 00:24:12.633 "num_base_bdevs_operational": 1, 00:24:12.633 "base_bdevs_list": [ 00:24:12.633 { 00:24:12.633 "name": null, 00:24:12.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.633 "is_configured": false, 00:24:12.633 "data_offset": 2048, 00:24:12.633 "data_size": 63488 00:24:12.633 }, 00:24:12.633 { 00:24:12.633 "name": "BaseBdev2", 00:24:12.633 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:12.633 "is_configured": true, 00:24:12.633 "data_offset": 2048, 00:24:12.633 "data_size": 63488 00:24:12.633 } 00:24:12.633 ] 00:24:12.633 }' 00:24:12.633 15:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:12.633 15:18:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.891 15:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:13.150 [2024-07-23 15:18:08.424029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:13.150 [2024-07-23 15:18:08.424106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.150 [2024-07-23 15:18:08.424142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:24:13.150 [2024-07-23 15:18:08.424155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.150 [2024-07-23 15:18:08.424675] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.150 [2024-07-23 15:18:08.424706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:13.150 [2024-07-23 15:18:08.424801] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:13.150 [2024-07-23 15:18:08.424815] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:13.150 [2024-07-23 15:18:08.424830] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:13.150 [2024-07-23 15:18:08.424867] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:13.150 [2024-07-23 15:18:08.429063] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cae6f0 00:24:13.150 spare 00:24:13.150 [2024-07-23 15:18:08.431182] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:13.150 15:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:24:14.081 15:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:14.081 15:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:14.081 15:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:14.081 15:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:14.081 15:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:14.081 15:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.081 15:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.339 15:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:14.339 "name": "raid_bdev1", 00:24:14.339 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:14.339 "strip_size_kb": 0, 00:24:14.339 "state": "online", 00:24:14.339 "raid_level": "raid1", 00:24:14.339 "superblock": true, 00:24:14.339 "num_base_bdevs": 2, 00:24:14.339 "num_base_bdevs_discovered": 2, 00:24:14.339 "num_base_bdevs_operational": 2, 00:24:14.339 "process": { 00:24:14.339 "type": "rebuild", 00:24:14.339 "target": "spare", 00:24:14.339 "progress": { 00:24:14.339 "blocks": 24576, 00:24:14.339 "percent": 38 00:24:14.339 } 00:24:14.339 }, 00:24:14.339 "base_bdevs_list": [ 00:24:14.339 { 00:24:14.339 "name": "spare", 00:24:14.339 "uuid": "dcaa7422-b9ff-58d5-b1f9-48542932a83d", 00:24:14.339 "is_configured": true, 00:24:14.339 "data_offset": 2048, 00:24:14.339 "data_size": 63488 00:24:14.339 }, 00:24:14.339 { 00:24:14.339 "name": "BaseBdev2", 00:24:14.339 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:14.339 "is_configured": true, 00:24:14.339 "data_offset": 2048, 00:24:14.339 "data_size": 63488 00:24:14.339 } 00:24:14.339 ] 00:24:14.339 }' 00:24:14.339 15:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:14.339 15:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:14.339 15:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:14.339 
15:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.339 15:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:14.596 [2024-07-23 15:18:09.942136] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:14.854 [2024-07-23 15:18:10.040314] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:14.854 [2024-07-23 15:18:10.040401] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:14.854 [2024-07-23 15:18:10.040418] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:14.854 [2024-07-23 15:18:10.040430] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:14.854 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:14.854 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:14.854 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:14.854 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:14.854 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:14.854 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:14.854 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:14.854 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:14.854 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:14.854 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:14.854 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.854 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.112 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:15.112 "name": "raid_bdev1", 00:24:15.112 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:15.112 "strip_size_kb": 0, 00:24:15.112 "state": "online", 00:24:15.112 "raid_level": "raid1", 00:24:15.112 "superblock": true, 00:24:15.112 "num_base_bdevs": 2, 00:24:15.112 "num_base_bdevs_discovered": 1, 00:24:15.112 "num_base_bdevs_operational": 1, 00:24:15.112 "base_bdevs_list": [ 00:24:15.112 { 00:24:15.112 "name": null, 00:24:15.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.112 "is_configured": false, 00:24:15.112 "data_offset": 2048, 00:24:15.112 "data_size": 63488 00:24:15.112 }, 00:24:15.112 { 00:24:15.112 "name": "BaseBdev2", 00:24:15.112 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:15.112 "is_configured": true, 00:24:15.112 "data_offset": 2048, 00:24:15.112 "data_size": 63488 00:24:15.112 } 00:24:15.112 ] 00:24:15.112 }' 00:24:15.112 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:15.112 15:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.370 15:18:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:15.370 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:15.370 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:15.370 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:15.370 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:15.370 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.370 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.629 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:15.629 "name": "raid_bdev1", 00:24:15.629 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:15.629 "strip_size_kb": 0, 00:24:15.629 "state": "online", 00:24:15.629 "raid_level": "raid1", 00:24:15.629 "superblock": true, 00:24:15.629 "num_base_bdevs": 2, 00:24:15.629 "num_base_bdevs_discovered": 1, 00:24:15.629 "num_base_bdevs_operational": 1, 00:24:15.629 "base_bdevs_list": [ 00:24:15.629 { 00:24:15.629 "name": null, 00:24:15.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.629 "is_configured": false, 00:24:15.629 "data_offset": 2048, 00:24:15.629 "data_size": 63488 00:24:15.629 }, 00:24:15.629 { 00:24:15.629 "name": "BaseBdev2", 00:24:15.629 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:15.629 "is_configured": true, 00:24:15.629 "data_offset": 2048, 00:24:15.629 "data_size": 63488 00:24:15.629 } 00:24:15.629 ] 00:24:15.629 }' 00:24:15.629 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:15.629 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:15.629 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:15.629 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:15.629 15:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:15.887 15:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:15.887 [2024-07-23 15:18:11.267011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:15.887 [2024-07-23 15:18:11.267095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.887 [2024-07-23 15:18:11.267125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:24:15.887 [2024-07-23 15:18:11.267140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.887 [2024-07-23 15:18:11.267566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.887 [2024-07-23 15:18:11.267590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:15.887 [2024-07-23 15:18:11.267670] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:15.887 [2024-07-23 15:18:11.267688] 
bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:15.887 [2024-07-23 15:18:11.267698] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:15.887 BaseBdev1 00:24:15.887 15:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:17.266 "name": "raid_bdev1", 00:24:17.266 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:17.266 "strip_size_kb": 0, 00:24:17.266 "state": "online", 00:24:17.266 "raid_level": "raid1", 00:24:17.266 "superblock": true, 00:24:17.266 "num_base_bdevs": 2, 00:24:17.266 "num_base_bdevs_discovered": 1, 00:24:17.266 "num_base_bdevs_operational": 1, 00:24:17.266 "base_bdevs_list": [ 00:24:17.266 { 00:24:17.266 "name": null, 00:24:17.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.266 "is_configured": false, 00:24:17.266 "data_offset": 2048, 00:24:17.266 "data_size": 63488 00:24:17.266 }, 00:24:17.266 { 00:24:17.266 "name": "BaseBdev2", 00:24:17.266 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:17.266 "is_configured": true, 00:24:17.266 "data_offset": 2048, 00:24:17.266 "data_size": 63488 00:24:17.266 } 00:24:17.266 ] 00:24:17.266 }' 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:17.266 15:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.525 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:17.525 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:17.525 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:17.525 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:17.525 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:17.525 15:18:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.525 15:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:17.784 "name": "raid_bdev1", 00:24:17.784 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:17.784 "strip_size_kb": 0, 00:24:17.784 "state": "online", 00:24:17.784 "raid_level": "raid1", 00:24:17.784 "superblock": true, 00:24:17.784 "num_base_bdevs": 2, 00:24:17.784 "num_base_bdevs_discovered": 1, 00:24:17.784 "num_base_bdevs_operational": 1, 00:24:17.784 "base_bdevs_list": [ 00:24:17.784 { 00:24:17.784 "name": null, 00:24:17.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.784 "is_configured": false, 00:24:17.784 "data_offset": 2048, 00:24:17.784 "data_size": 63488 00:24:17.784 }, 00:24:17.784 { 00:24:17.784 "name": "BaseBdev2", 00:24:17.784 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:17.784 "is_configured": true, 00:24:17.784 "data_offset": 2048, 00:24:17.784 "data_size": 63488 00:24:17.784 } 00:24:17.784 ] 00:24:17.784 }' 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:17.784 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:18.043 [2024-07-23 15:18:13.398492] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:18.043 [2024-07-23 15:18:13.398665] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:18.043 [2024-07-23 15:18:13.398683] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:18.043 request: 00:24:18.043 { 00:24:18.043 "base_bdev": "BaseBdev1", 00:24:18.043 "raid_bdev": "raid_bdev1", 00:24:18.043 "method": "bdev_raid_add_base_bdev", 00:24:18.043 "req_id": 1 00:24:18.043 } 00:24:18.043 Got JSON-RPC error response 00:24:18.043 response: 00:24:18.043 { 00:24:18.043 "code": -22, 00:24:18.043 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:18.043 } 00:24:18.043 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:24:18.043 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:18.043 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:18.043 15:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:18.043 15:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:24:19.419 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:19.419 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:19.419 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:19.419 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:19.419 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:19.419 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:19.419 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:19.419 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:19.419 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:19.419 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:19.419 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.419 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.419 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:19.419 "name": "raid_bdev1", 00:24:19.419 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:19.419 "strip_size_kb": 0, 00:24:19.419 "state": "online", 00:24:19.419 "raid_level": "raid1", 00:24:19.419 "superblock": true, 00:24:19.419 "num_base_bdevs": 2, 00:24:19.419 "num_base_bdevs_discovered": 1, 00:24:19.419 "num_base_bdevs_operational": 1, 00:24:19.419 "base_bdevs_list": [ 00:24:19.419 { 00:24:19.419 "name": null, 00:24:19.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.419 "is_configured": false, 00:24:19.420 "data_offset": 2048, 00:24:19.420 "data_size": 63488 00:24:19.420 }, 00:24:19.420 { 00:24:19.420 "name": "BaseBdev2", 00:24:19.420 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 
00:24:19.420 "is_configured": true, 00:24:19.420 "data_offset": 2048, 00:24:19.420 "data_size": 63488 00:24:19.420 } 00:24:19.420 ] 00:24:19.420 }' 00:24:19.420 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:19.420 15:18:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.678 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:19.678 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:19.678 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:19.678 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:19.678 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:19.678 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.678 15:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:19.937 "name": "raid_bdev1", 00:24:19.937 "uuid": "431153b6-c4d5-4a11-80c4-1d67d8da7977", 00:24:19.937 "strip_size_kb": 0, 00:24:19.937 "state": "online", 00:24:19.937 "raid_level": "raid1", 00:24:19.937 "superblock": true, 00:24:19.937 "num_base_bdevs": 2, 00:24:19.937 "num_base_bdevs_discovered": 1, 00:24:19.937 "num_base_bdevs_operational": 1, 00:24:19.937 "base_bdevs_list": [ 00:24:19.937 { 00:24:19.937 "name": null, 00:24:19.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.937 "is_configured": false, 00:24:19.937 "data_offset": 2048, 00:24:19.937 "data_size": 63488 00:24:19.937 }, 00:24:19.937 { 00:24:19.937 "name": "BaseBdev2", 00:24:19.937 "uuid": "94e2bd39-aca4-5fb4-bb27-f382c763a636", 00:24:19.937 "is_configured": true, 00:24:19.937 "data_offset": 2048, 00:24:19.937 "data_size": 63488 00:24:19.937 } 00:24:19.937 ] 00:24:19.937 }' 00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 107827 00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 107827 ']' 00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 107827 00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107827 00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:19.937 killing process with pid 107827 
00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107827' 00:24:19.937 15:18:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 107827 00:24:19.937 Received shutdown signal, test time was about 60.000000 seconds 00:24:19.937 00:24:19.937 Latency(us) 00:24:19.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.938 =================================================================================================================== 00:24:19.938 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:19.938 [2024-07-23 15:18:15.252678] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:19.938 [2024-07-23 15:18:15.252811] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:19.938 [2024-07-23 15:18:15.252860] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:19.938 [2024-07-23 15:18:15.252874] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:24:19.938 15:18:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 107827 00:24:19.938 [2024-07-23 15:18:15.283864] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:24:20.196 00:24:20.196 real 0m30.927s 00:24:20.196 user 0m42.050s 00:24:20.196 sys 0m6.446s 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.196 ************************************ 00:24:20.196 END TEST raid_rebuild_test_sb 00:24:20.196 ************************************ 00:24:20.196 15:18:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:20.196 15:18:15 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:24:20.196 15:18:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:20.196 15:18:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:20.196 15:18:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:20.196 ************************************ 00:24:20.196 START TEST raid_rebuild_test_io 00:24:20.196 ************************************ 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false true true 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- 
# (( i++ )) 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=108668 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 108668 /var/tmp/spdk-raid.sock 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 108668 ']' 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.196 15:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:20.455 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:20.455 Zero copy mechanism will not be used. 00:24:20.455 [2024-07-23 15:18:15.662800] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
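The rebuild verification steps in the raid_rebuild_test_sb run above all reduce to a single RPC query filtered with jq. A minimal standalone sketch of that pattern, reusing the socket path, bdev name and jq filters recorded in this log (the variable name is shorthand introduced here, and the commented expectations are illustrative values taken from the dumps above, not additions to the test):

  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "raid_bdev1")')
  # While a rebuild is in flight, .process carries its type, target and progress
  jq -r '.process.type // "none"'   <<< "$info"   # "rebuild" while running, "none" afterwards
  jq -r '.process.target // "none"' <<< "$info"   # "spare" in the runs above
  jq -r '.process.progress.percent' <<< "$info"   # e.g. 35 in the first dump above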
00:24:20.455 [2024-07-23 15:18:15.662952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108668 ] 00:24:20.455 [2024-07-23 15:18:15.800236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.455 [2024-07-23 15:18:15.846876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.714 [2024-07-23 15:18:15.891483] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:21.281 15:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.281 15:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:24:21.281 15:18:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:21.281 15:18:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:21.281 BaseBdev1_malloc 00:24:21.539 15:18:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:21.539 [2024-07-23 15:18:16.870697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:21.539 [2024-07-23 15:18:16.870780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.539 [2024-07-23 15:18:16.870825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:24:21.539 [2024-07-23 15:18:16.870839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.539 [2024-07-23 15:18:16.873434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.539 [2024-07-23 15:18:16.873479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:21.539 BaseBdev1 00:24:21.539 15:18:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:21.539 15:18:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:21.797 BaseBdev2_malloc 00:24:21.797 15:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:22.055 [2024-07-23 15:18:17.276115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:22.055 [2024-07-23 15:18:17.276186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:22.055 [2024-07-23 15:18:17.276215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:24:22.055 [2024-07-23 15:18:17.276228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:22.055 [2024-07-23 15:18:17.278869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:22.055 [2024-07-23 15:18:17.278908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:22.055 BaseBdev2 00:24:22.055 15:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:22.313 spare_malloc 00:24:22.313 15:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:22.313 spare_delay 00:24:22.313 15:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:22.571 [2024-07-23 15:18:17.882522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:22.571 [2024-07-23 15:18:17.882609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:22.571 [2024-07-23 15:18:17.882643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:24:22.571 [2024-07-23 15:18:17.882656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:22.571 [2024-07-23 15:18:17.885183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:22.571 [2024-07-23 15:18:17.885223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:22.571 spare 00:24:22.571 15:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:24:22.830 [2024-07-23 15:18:18.058641] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:22.830 [2024-07-23 15:18:18.060928] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:22.830 [2024-07-23 15:18:18.061056] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:24:22.830 [2024-07-23 15:18:18.061068] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:22.830 [2024-07-23 15:18:18.061231] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:24:22.830 [2024-07-23 15:18:18.061593] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:24:22.830 [2024-07-23 15:18:18.061620] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:24:22.830 [2024-07-23 15:18:18.061810] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:22.830 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:22.830 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:22.830 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:22.830 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:22.830 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:22.830 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:22.830 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:22.830 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:22.830 15:18:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:22.830 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:22.830 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.830 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.089 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:23.089 "name": "raid_bdev1", 00:24:23.089 "uuid": "11c2dbe2-7410-40fa-96ef-6e053d6b1156", 00:24:23.089 "strip_size_kb": 0, 00:24:23.089 "state": "online", 00:24:23.089 "raid_level": "raid1", 00:24:23.089 "superblock": false, 00:24:23.089 "num_base_bdevs": 2, 00:24:23.089 "num_base_bdevs_discovered": 2, 00:24:23.089 "num_base_bdevs_operational": 2, 00:24:23.089 "base_bdevs_list": [ 00:24:23.089 { 00:24:23.089 "name": "BaseBdev1", 00:24:23.089 "uuid": "64639b72-9638-5479-9d3d-c33a3ebae055", 00:24:23.089 "is_configured": true, 00:24:23.089 "data_offset": 0, 00:24:23.089 "data_size": 65536 00:24:23.089 }, 00:24:23.089 { 00:24:23.089 "name": "BaseBdev2", 00:24:23.089 "uuid": "78498c95-5ed5-52d4-9ae0-77e79b1c7509", 00:24:23.089 "is_configured": true, 00:24:23.089 "data_offset": 0, 00:24:23.089 "data_size": 65536 00:24:23.089 } 00:24:23.089 ] 00:24:23.089 }' 00:24:23.089 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:23.089 15:18:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:23.348 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:23.348 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:24:23.348 [2024-07-23 15:18:18.690968] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:23.348 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:24:23.348 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:23.348 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.607 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:24:23.607 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:24:23.607 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:23.607 15:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:23.866 [2024-07-23 15:18:19.060907] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:24:23.866 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:23.866 Zero copy mechanism will not be used. 00:24:23.866 Running I/O for 60 seconds... 
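The raid_rebuild_test_io preamble above exercises the same rebuild path with background I/O: bdevperf is started against /var/tmp/spdk-raid.sock in -z (wait-for-RPC) mode, the bdev stack is assembled over that socket, and perform_tests launches the 60-second randrw workload whose start is logged just above. A condensed sketch of that setup sequence, using only RPC calls and paths already shown in this log (the $RPC shell variable is shorthand introduced here; the second base bdev is created the same way as the first):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Base bdev: a malloc bdev wrapped in a passthru, so the test can later
  # remove it mid-run with bdev_passthru_delete
  $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
  $RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  # Spare: a malloc bdev behind a delay bdev, again wrapped in a passthru
  $RPC bdev_malloc_create 32 512 -b spare_malloc
  $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $RPC bdev_passthru_create -b spare_delay -p spare
  # Assemble the raid1 volume from the two base bdevs
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
  # Kick off the background randrw workload in the already-running bdevperf -z instance
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests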
00:24:23.866 [2024-07-23 15:18:19.216780] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:23.866 [2024-07-23 15:18:19.222760] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d0000022c0 00:24:23.866 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:23.866 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:23.866 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:23.866 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:23.866 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:23.866 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:23.866 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.866 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.866 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.866 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.866 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.866 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.124 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:24.124 "name": "raid_bdev1", 00:24:24.124 "uuid": "11c2dbe2-7410-40fa-96ef-6e053d6b1156", 00:24:24.124 "strip_size_kb": 0, 00:24:24.124 "state": "online", 00:24:24.124 "raid_level": "raid1", 00:24:24.124 "superblock": false, 00:24:24.124 "num_base_bdevs": 2, 00:24:24.124 "num_base_bdevs_discovered": 1, 00:24:24.124 "num_base_bdevs_operational": 1, 00:24:24.124 "base_bdevs_list": [ 00:24:24.124 { 00:24:24.124 "name": null, 00:24:24.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.124 "is_configured": false, 00:24:24.124 "data_offset": 0, 00:24:24.124 "data_size": 65536 00:24:24.124 }, 00:24:24.124 { 00:24:24.124 "name": "BaseBdev2", 00:24:24.124 "uuid": "78498c95-5ed5-52d4-9ae0-77e79b1c7509", 00:24:24.124 "is_configured": true, 00:24:24.124 "data_offset": 0, 00:24:24.124 "data_size": 65536 00:24:24.124 } 00:24:24.124 ] 00:24:24.124 }' 00:24:24.124 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:24.125 15:18:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:24.383 15:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:24.641 [2024-07-23 15:18:19.992093] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:24.641 [2024-07-23 15:18:20.038243] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002390 00:24:24.641 [2024-07-23 15:18:20.040425] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:24.641 15:18:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:24:24.924 [2024-07-23 15:18:20.147733] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:24.924 [2024-07-23 15:18:20.148219] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:25.216 [2024-07-23 15:18:20.356114] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:25.216 [2024-07-23 15:18:20.356334] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:25.216 [2024-07-23 15:18:20.587737] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:25.474 [2024-07-23 15:18:20.809525] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:25.474 [2024-07-23 15:18:20.809837] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:25.732 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:25.732 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:25.732 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:25.732 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:25.732 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:25.733 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.733 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.733 [2024-07-23 15:18:21.128656] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:25.990 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:25.990 "name": "raid_bdev1", 00:24:25.990 "uuid": "11c2dbe2-7410-40fa-96ef-6e053d6b1156", 00:24:25.990 "strip_size_kb": 0, 00:24:25.990 "state": "online", 00:24:25.990 "raid_level": "raid1", 00:24:25.990 "superblock": false, 00:24:25.990 "num_base_bdevs": 2, 00:24:25.990 "num_base_bdevs_discovered": 2, 00:24:25.990 "num_base_bdevs_operational": 2, 00:24:25.990 "process": { 00:24:25.990 "type": "rebuild", 00:24:25.990 "target": "spare", 00:24:25.990 "progress": { 00:24:25.990 "blocks": 14336, 00:24:25.990 "percent": 21 00:24:25.990 } 00:24:25.990 }, 00:24:25.990 "base_bdevs_list": [ 00:24:25.990 { 00:24:25.990 "name": "spare", 00:24:25.990 "uuid": "c264943f-e4fc-596b-ba72-c2b038927e73", 00:24:25.990 "is_configured": true, 00:24:25.990 "data_offset": 0, 00:24:25.990 "data_size": 65536 00:24:25.990 }, 00:24:25.990 { 00:24:25.990 "name": "BaseBdev2", 00:24:25.990 "uuid": "78498c95-5ed5-52d4-9ae0-77e79b1c7509", 00:24:25.990 "is_configured": true, 00:24:25.990 "data_offset": 0, 00:24:25.990 "data_size": 65536 00:24:25.991 } 00:24:25.991 ] 00:24:25.991 }' 00:24:25.991 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:25.991 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:25.991 [2024-07-23 15:18:21.237060] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:25.991 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:25.991 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:25.991 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:26.249 [2024-07-23 15:18:21.457980] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:26.249 [2024-07-23 15:18:21.465719] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:26.249 [2024-07-23 15:18:21.466136] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:26.249 [2024-07-23 15:18:21.567174] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:26.249 [2024-07-23 15:18:21.581110] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:26.249 [2024-07-23 15:18:21.581164] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:26.249 [2024-07-23 15:18:21.581177] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:26.249 [2024-07-23 15:18:21.604608] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d0000022c0 00:24:26.249 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:26.249 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:26.249 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:26.249 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:26.249 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:26.249 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:26.249 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:26.249 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:26.249 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:26.249 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:26.249 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.249 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.508 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:26.508 "name": "raid_bdev1", 00:24:26.508 "uuid": "11c2dbe2-7410-40fa-96ef-6e053d6b1156", 00:24:26.508 "strip_size_kb": 0, 00:24:26.508 "state": "online", 00:24:26.508 "raid_level": "raid1", 00:24:26.508 "superblock": false, 00:24:26.508 "num_base_bdevs": 2, 00:24:26.508 "num_base_bdevs_discovered": 1, 00:24:26.508 "num_base_bdevs_operational": 1, 00:24:26.508 "base_bdevs_list": [ 00:24:26.508 { 00:24:26.508 "name": null, 
00:24:26.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.508 "is_configured": false, 00:24:26.508 "data_offset": 0, 00:24:26.508 "data_size": 65536 00:24:26.508 }, 00:24:26.508 { 00:24:26.508 "name": "BaseBdev2", 00:24:26.508 "uuid": "78498c95-5ed5-52d4-9ae0-77e79b1c7509", 00:24:26.508 "is_configured": true, 00:24:26.508 "data_offset": 0, 00:24:26.508 "data_size": 65536 00:24:26.508 } 00:24:26.508 ] 00:24:26.508 }' 00:24:26.508 15:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:26.508 15:18:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:26.767 15:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:26.767 15:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:26.767 15:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:26.767 15:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:26.767 15:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:26.767 15:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.767 15:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.025 15:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:27.025 "name": "raid_bdev1", 00:24:27.025 "uuid": "11c2dbe2-7410-40fa-96ef-6e053d6b1156", 00:24:27.025 "strip_size_kb": 0, 00:24:27.025 "state": "online", 00:24:27.025 "raid_level": "raid1", 00:24:27.025 "superblock": false, 00:24:27.025 "num_base_bdevs": 2, 00:24:27.025 "num_base_bdevs_discovered": 1, 00:24:27.025 "num_base_bdevs_operational": 1, 00:24:27.025 "base_bdevs_list": [ 00:24:27.025 { 00:24:27.025 "name": null, 00:24:27.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.025 "is_configured": false, 00:24:27.025 "data_offset": 0, 00:24:27.025 "data_size": 65536 00:24:27.025 }, 00:24:27.025 { 00:24:27.025 "name": "BaseBdev2", 00:24:27.025 "uuid": "78498c95-5ed5-52d4-9ae0-77e79b1c7509", 00:24:27.025 "is_configured": true, 00:24:27.025 "data_offset": 0, 00:24:27.025 "data_size": 65536 00:24:27.025 } 00:24:27.025 ] 00:24:27.025 }' 00:24:27.025 15:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:27.025 15:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:27.026 15:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:27.026 15:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:27.026 15:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:27.284 [2024-07-23 15:18:22.684407] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:27.543 15:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:27.543 [2024-07-23 15:18:22.735244] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002460 00:24:27.543 [2024-07-23 15:18:22.737380] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:24:27.543 [2024-07-23 15:18:22.845471] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:27.543 [2024-07-23 15:18:22.845997] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:27.802 [2024-07-23 15:18:23.053223] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:27.802 [2024-07-23 15:18:23.053455] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:28.370 [2024-07-23 15:18:23.514712] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:28.370 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:28.370 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:28.370 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:28.370 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:28.370 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:28.370 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.370 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.628 [2024-07-23 15:18:23.844133] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:28.628 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:28.628 "name": "raid_bdev1", 00:24:28.628 "uuid": "11c2dbe2-7410-40fa-96ef-6e053d6b1156", 00:24:28.628 "strip_size_kb": 0, 00:24:28.628 "state": "online", 00:24:28.629 "raid_level": "raid1", 00:24:28.629 "superblock": false, 00:24:28.629 "num_base_bdevs": 2, 00:24:28.629 "num_base_bdevs_discovered": 2, 00:24:28.629 "num_base_bdevs_operational": 2, 00:24:28.629 "process": { 00:24:28.629 "type": "rebuild", 00:24:28.629 "target": "spare", 00:24:28.629 "progress": { 00:24:28.629 "blocks": 14336, 00:24:28.629 "percent": 21 00:24:28.629 } 00:24:28.629 }, 00:24:28.629 "base_bdevs_list": [ 00:24:28.629 { 00:24:28.629 "name": "spare", 00:24:28.629 "uuid": "c264943f-e4fc-596b-ba72-c2b038927e73", 00:24:28.629 "is_configured": true, 00:24:28.629 "data_offset": 0, 00:24:28.629 "data_size": 65536 00:24:28.629 }, 00:24:28.629 { 00:24:28.629 "name": "BaseBdev2", 00:24:28.629 "uuid": "78498c95-5ed5-52d4-9ae0-77e79b1c7509", 00:24:28.629 "is_configured": true, 00:24:28.629 "data_offset": 0, 00:24:28.629 "data_size": 65536 00:24:28.629 } 00:24:28.629 ] 00:24:28.629 }' 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:28.629 [2024-07-23 15:18:23.983958] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:28.629 15:18:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=623 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.629 15:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.887 15:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:28.887 "name": "raid_bdev1", 00:24:28.887 "uuid": "11c2dbe2-7410-40fa-96ef-6e053d6b1156", 00:24:28.887 "strip_size_kb": 0, 00:24:28.887 "state": "online", 00:24:28.887 "raid_level": "raid1", 00:24:28.887 "superblock": false, 00:24:28.887 "num_base_bdevs": 2, 00:24:28.887 "num_base_bdevs_discovered": 2, 00:24:28.887 "num_base_bdevs_operational": 2, 00:24:28.887 "process": { 00:24:28.887 "type": "rebuild", 00:24:28.887 "target": "spare", 00:24:28.887 "progress": { 00:24:28.887 "blocks": 16384, 00:24:28.887 "percent": 25 00:24:28.887 } 00:24:28.887 }, 00:24:28.887 "base_bdevs_list": [ 00:24:28.887 { 00:24:28.887 "name": "spare", 00:24:28.888 "uuid": "c264943f-e4fc-596b-ba72-c2b038927e73", 00:24:28.888 "is_configured": true, 00:24:28.888 "data_offset": 0, 00:24:28.888 "data_size": 65536 00:24:28.888 }, 00:24:28.888 { 00:24:28.888 "name": "BaseBdev2", 00:24:28.888 "uuid": "78498c95-5ed5-52d4-9ae0-77e79b1c7509", 00:24:28.888 "is_configured": true, 00:24:28.888 "data_offset": 0, 00:24:28.888 "data_size": 65536 00:24:28.888 } 00:24:28.888 ] 00:24:28.888 }' 00:24:28.888 15:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:28.888 15:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:28.888 15:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:28.888 15:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:28.888 15:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:29.146 [2024-07-23 15:18:24.431161] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:29.405 [2024-07-23 15:18:24.646162] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:29.405 [2024-07-23 15:18:24.753007] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:29.663 [2024-07-23 15:18:25.074538] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:29.922 [2024-07-23 15:18:25.194446] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:29.922 15:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:29.922 15:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:29.922 15:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:29.922 15:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:29.922 15:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:29.922 15:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:29.922 15:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.922 15:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.180 15:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:30.180 "name": "raid_bdev1", 00:24:30.180 "uuid": "11c2dbe2-7410-40fa-96ef-6e053d6b1156", 00:24:30.180 "strip_size_kb": 0, 00:24:30.180 "state": "online", 00:24:30.180 "raid_level": "raid1", 00:24:30.180 "superblock": false, 00:24:30.180 "num_base_bdevs": 2, 00:24:30.180 "num_base_bdevs_discovered": 2, 00:24:30.180 "num_base_bdevs_operational": 2, 00:24:30.180 "process": { 00:24:30.180 "type": "rebuild", 00:24:30.180 "target": "spare", 00:24:30.180 "progress": { 00:24:30.180 "blocks": 34816, 00:24:30.180 "percent": 53 00:24:30.180 } 00:24:30.180 }, 00:24:30.180 "base_bdevs_list": [ 00:24:30.180 { 00:24:30.180 "name": "spare", 00:24:30.180 "uuid": "c264943f-e4fc-596b-ba72-c2b038927e73", 00:24:30.180 "is_configured": true, 00:24:30.180 "data_offset": 0, 00:24:30.180 "data_size": 65536 00:24:30.180 }, 00:24:30.180 { 00:24:30.180 "name": "BaseBdev2", 00:24:30.180 "uuid": "78498c95-5ed5-52d4-9ae0-77e79b1c7509", 00:24:30.180 "is_configured": true, 00:24:30.180 "data_offset": 0, 00:24:30.180 "data_size": 65536 00:24:30.180 } 00:24:30.180 ] 00:24:30.180 }' 00:24:30.180 15:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:30.180 15:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:30.180 15:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:30.180 15:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:30.180 15:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:30.439 [2024-07-23 15:18:25.653014] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:30.697 [2024-07-23 15:18:25.966532] bdev_raid.c: 851:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:31.264 15:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:31.264 15:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:31.264 15:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:31.264 15:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:31.265 15:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:31.265 15:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:31.265 15:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.265 15:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.523 15:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:31.523 "name": "raid_bdev1", 00:24:31.523 "uuid": "11c2dbe2-7410-40fa-96ef-6e053d6b1156", 00:24:31.523 "strip_size_kb": 0, 00:24:31.523 "state": "online", 00:24:31.523 "raid_level": "raid1", 00:24:31.523 "superblock": false, 00:24:31.523 "num_base_bdevs": 2, 00:24:31.523 "num_base_bdevs_discovered": 2, 00:24:31.523 "num_base_bdevs_operational": 2, 00:24:31.523 "process": { 00:24:31.523 "type": "rebuild", 00:24:31.523 "target": "spare", 00:24:31.523 "progress": { 00:24:31.523 "blocks": 59392, 00:24:31.523 "percent": 90 00:24:31.523 } 00:24:31.523 }, 00:24:31.523 "base_bdevs_list": [ 00:24:31.523 { 00:24:31.523 "name": "spare", 00:24:31.523 "uuid": "c264943f-e4fc-596b-ba72-c2b038927e73", 00:24:31.523 "is_configured": true, 00:24:31.523 "data_offset": 0, 00:24:31.523 "data_size": 65536 00:24:31.523 }, 00:24:31.523 { 00:24:31.523 "name": "BaseBdev2", 00:24:31.523 "uuid": "78498c95-5ed5-52d4-9ae0-77e79b1c7509", 00:24:31.523 "is_configured": true, 00:24:31.523 "data_offset": 0, 00:24:31.523 "data_size": 65536 00:24:31.523 } 00:24:31.523 ] 00:24:31.523 }' 00:24:31.523 15:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:31.523 15:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:31.523 15:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:31.523 15:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:31.523 15:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:31.523 [2024-07-23 15:18:26.938044] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:31.782 [2024-07-23 15:18:27.038083] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:31.782 [2024-07-23 15:18:27.045970] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.718 15:18:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:32.718 15:18:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:32.718 15:18:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:32.718 15:18:27 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:32.718 15:18:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:32.718 15:18:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:32.718 15:18:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.718 15:18:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.718 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:32.718 "name": "raid_bdev1", 00:24:32.718 "uuid": "11c2dbe2-7410-40fa-96ef-6e053d6b1156", 00:24:32.718 "strip_size_kb": 0, 00:24:32.718 "state": "online", 00:24:32.718 "raid_level": "raid1", 00:24:32.718 "superblock": false, 00:24:32.718 "num_base_bdevs": 2, 00:24:32.718 "num_base_bdevs_discovered": 2, 00:24:32.718 "num_base_bdevs_operational": 2, 00:24:32.718 "base_bdevs_list": [ 00:24:32.718 { 00:24:32.718 "name": "spare", 00:24:32.718 "uuid": "c264943f-e4fc-596b-ba72-c2b038927e73", 00:24:32.718 "is_configured": true, 00:24:32.718 "data_offset": 0, 00:24:32.718 "data_size": 65536 00:24:32.718 }, 00:24:32.718 { 00:24:32.718 "name": "BaseBdev2", 00:24:32.718 "uuid": "78498c95-5ed5-52d4-9ae0-77e79b1c7509", 00:24:32.718 "is_configured": true, 00:24:32.718 "data_offset": 0, 00:24:32.718 "data_size": 65536 00:24:32.718 } 00:24:32.718 ] 00:24:32.718 }' 00:24:32.718 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:32.718 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:32.718 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:32.718 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:32.718 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:24:32.718 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:32.718 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:32.718 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:32.718 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:32.718 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:32.718 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.718 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:32.977 "name": "raid_bdev1", 00:24:32.977 "uuid": "11c2dbe2-7410-40fa-96ef-6e053d6b1156", 00:24:32.977 "strip_size_kb": 0, 00:24:32.977 "state": "online", 00:24:32.977 "raid_level": "raid1", 00:24:32.977 "superblock": false, 00:24:32.977 "num_base_bdevs": 2, 00:24:32.977 "num_base_bdevs_discovered": 2, 00:24:32.977 "num_base_bdevs_operational": 2, 00:24:32.977 "base_bdevs_list": [ 00:24:32.977 { 00:24:32.977 "name": "spare", 00:24:32.977 "uuid": 
"c264943f-e4fc-596b-ba72-c2b038927e73", 00:24:32.977 "is_configured": true, 00:24:32.977 "data_offset": 0, 00:24:32.977 "data_size": 65536 00:24:32.977 }, 00:24:32.977 { 00:24:32.977 "name": "BaseBdev2", 00:24:32.977 "uuid": "78498c95-5ed5-52d4-9ae0-77e79b1c7509", 00:24:32.977 "is_configured": true, 00:24:32.977 "data_offset": 0, 00:24:32.977 "data_size": 65536 00:24:32.977 } 00:24:32.977 ] 00:24:32.977 }' 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.977 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.249 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:33.249 "name": "raid_bdev1", 00:24:33.249 "uuid": "11c2dbe2-7410-40fa-96ef-6e053d6b1156", 00:24:33.249 "strip_size_kb": 0, 00:24:33.249 "state": "online", 00:24:33.249 "raid_level": "raid1", 00:24:33.249 "superblock": false, 00:24:33.249 "num_base_bdevs": 2, 00:24:33.249 "num_base_bdevs_discovered": 2, 00:24:33.249 "num_base_bdevs_operational": 2, 00:24:33.249 "base_bdevs_list": [ 00:24:33.249 { 00:24:33.249 "name": "spare", 00:24:33.249 "uuid": "c264943f-e4fc-596b-ba72-c2b038927e73", 00:24:33.249 "is_configured": true, 00:24:33.249 "data_offset": 0, 00:24:33.249 "data_size": 65536 00:24:33.249 }, 00:24:33.249 { 00:24:33.249 "name": "BaseBdev2", 00:24:33.249 "uuid": "78498c95-5ed5-52d4-9ae0-77e79b1c7509", 00:24:33.249 "is_configured": true, 00:24:33.249 "data_offset": 0, 00:24:33.249 "data_size": 65536 00:24:33.249 } 00:24:33.249 ] 00:24:33.249 }' 00:24:33.249 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:33.249 15:18:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.828 15:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:33.828 [2024-07-23 15:18:29.211393] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.828 [2024-07-23 15:18:29.211442] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:34.087 00:24:34.087 Latency(us) 00:24:34.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.087 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:34.087 raid_bdev1 : 10.24 102.30 306.91 0.00 0.00 13433.83 278.92 114344.72 00:24:34.087 =================================================================================================================== 00:24:34.087 Total : 102.30 306.91 0.00 0.00 13433.83 278.92 114344.72 00:24:34.087 [2024-07-23 15:18:29.311146] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:34.087 [2024-07-23 15:18:29.311202] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:34.087 [2024-07-23 15:18:29.311286] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:34.087 [2024-07-23 15:18:29.311302] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:24:34.087 0 00:24:34.087 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:24:34.087 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:34.347 /dev/nbd0 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:24:34.347 15:18:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:34.347 1+0 records in 00:24:34.347 1+0 records out 00:24:34.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224478 s, 18.2 MB/s 00:24:34.347 15:18:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:34.606 15:18:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:24:34.865 /dev/nbd1 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:24:34.865 15:18:30 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:34.865 1+0 records in 00:24:34.865 1+0 records out 00:24:34.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296808 s, 13.8 MB/s 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:34.865 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- 
# nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:35.125 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:35.384 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 108668 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 108668 ']' 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 108668 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108668 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:35.385 killing process with pid 108668 00:24:35.385 Received shutdown signal, test time was about 11.672882 seconds 00:24:35.385 00:24:35.385 Latency(us) 00:24:35.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.385 =================================================================================================================== 00:24:35.385 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108668' 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 108668 00:24:35.385 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 108668 00:24:35.385 [2024-07-23 15:18:30.736310] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:35.385 [2024-07-23 15:18:30.761688] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:35.645 15:18:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:24:35.645 00:24:35.645 real 0m15.398s 00:24:35.645 user 0m22.015s 00:24:35.645 sys 0m2.590s 00:24:35.645 ************************************ 00:24:35.645 END TEST raid_rebuild_test_io 00:24:35.645 ************************************ 00:24:35.645 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:35.645 15:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:35.645 15:18:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:35.645 15:18:31 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:24:35.645 15:18:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:35.645 15:18:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:35.645 15:18:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:35.645 ************************************ 00:24:35.645 START TEST raid_rebuild_test_sb_io 00:24:35.645 ************************************ 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true true true 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:24:35.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
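The xtrace above repeats one verification pattern every time the test checks rebuild progress: fetch all raid bdevs over the dedicated RPC socket, isolate raid_bdev1 with jq, then compare the .process.type and .process.target fields against the expected values. A minimal sketch of that pattern, reconstructed only from the trace shown here (the function and variable names mirror bdev_raid.sh@182-190 as logged and are assumptions, not the exact upstream source):

    verify_raid_bdev_process() {
        local raid_bdev_name=$1    # e.g. raid_bdev1
        local process_type=$2      # "rebuild" while a rebuild runs, otherwise "none"
        local target=$3            # "spare" while rebuilding onto the spare, otherwise "none"
        local raid_bdev_info

        # Query every raid bdev over the test's RPC socket and keep only the one under test.
        raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")

        # The .process object is only present while a background rebuild is running,
        # so fall back to "none" when it is absent.
        [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]]
        [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]
    }

The surrounding loop visible in the trace guards each call with (( SECONDS < timeout )) and sleeps one second between iterations until the process fields report the rebuild finished.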
00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=109097 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 109097 /var/tmp/spdk-raid.sock 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 109097 ']' 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:35.645 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.905 15:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:35.905 [2024-07-23 15:18:31.141029] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:24:35.905 [2024-07-23 15:18:31.141254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:24:35.905 Zero copy mechanism will not be used. 
00:24:35.905 -allocations --file-prefix=spdk_pid109097 ] 00:24:35.905 [2024-07-23 15:18:31.292101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.163 [2024-07-23 15:18:31.337717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.163 [2024-07-23 15:18:31.382397] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:36.731 15:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.731 15:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:24:36.731 15:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:36.731 15:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:36.990 BaseBdev1_malloc 00:24:36.990 15:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:37.249 [2024-07-23 15:18:32.421512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:37.249 [2024-07-23 15:18:32.421611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.249 [2024-07-23 15:18:32.421656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:24:37.249 [2024-07-23 15:18:32.421671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.249 [2024-07-23 15:18:32.424408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.249 [2024-07-23 15:18:32.424460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:37.249 BaseBdev1 00:24:37.249 15:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:24:37.249 15:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:37.249 BaseBdev2_malloc 00:24:37.249 15:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:37.508 [2024-07-23 15:18:32.846945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:37.508 [2024-07-23 15:18:32.847021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.508 [2024-07-23 15:18:32.847054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:24:37.508 [2024-07-23 15:18:32.847067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.508 [2024-07-23 15:18:32.849504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.508 [2024-07-23 15:18:32.849544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:37.508 BaseBdev2 00:24:37.508 15:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:37.766 spare_malloc 00:24:37.766 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:38.027 spare_delay 00:24:38.027 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:38.027 [2024-07-23 15:18:33.386283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:38.027 [2024-07-23 15:18:33.386363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.027 [2024-07-23 15:18:33.386398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:24:38.027 [2024-07-23 15:18:33.386411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.027 [2024-07-23 15:18:33.388958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.027 [2024-07-23 15:18:33.388998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:38.027 spare 00:24:38.027 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:24:38.288 [2024-07-23 15:18:33.550441] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:38.288 [2024-07-23 15:18:33.552632] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:38.288 [2024-07-23 15:18:33.552824] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:24:38.288 [2024-07-23 15:18:33.552838] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:38.288 [2024-07-23 15:18:33.552974] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:24:38.288 [2024-07-23 15:18:33.553322] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:24:38.288 [2024-07-23 15:18:33.553343] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:24:38.288 [2024-07-23 15:18:33.553465] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:38.288 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:38.288 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:38.288 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:38.288 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:38.288 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:38.288 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:38.288 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:38.288 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:38.288 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:38.288 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:38.288 15:18:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.288 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.546 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:38.546 "name": "raid_bdev1", 00:24:38.546 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:38.546 "strip_size_kb": 0, 00:24:38.546 "state": "online", 00:24:38.546 "raid_level": "raid1", 00:24:38.546 "superblock": true, 00:24:38.546 "num_base_bdevs": 2, 00:24:38.546 "num_base_bdevs_discovered": 2, 00:24:38.546 "num_base_bdevs_operational": 2, 00:24:38.546 "base_bdevs_list": [ 00:24:38.546 { 00:24:38.546 "name": "BaseBdev1", 00:24:38.546 "uuid": "10e79c97-9f51-5511-8278-51498c9edb9e", 00:24:38.546 "is_configured": true, 00:24:38.546 "data_offset": 2048, 00:24:38.546 "data_size": 63488 00:24:38.546 }, 00:24:38.546 { 00:24:38.546 "name": "BaseBdev2", 00:24:38.546 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:38.546 "is_configured": true, 00:24:38.547 "data_offset": 2048, 00:24:38.547 "data_size": 63488 00:24:38.547 } 00:24:38.547 ] 00:24:38.547 }' 00:24:38.547 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:38.547 15:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.805 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:38.805 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:24:39.063 [2024-07-23 15:18:34.250771] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:39.063 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:24:39.063 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:39.063 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.063 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:24:39.063 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:24:39.063 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:39.063 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:39.323 [2024-07-23 15:18:34.528701] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:24:39.323 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:39.323 Zero copy mechanism will not be used. 00:24:39.323 Running I/O for 60 seconds... 
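The trace just above issues bdev_raid_remove_base_bdev BaseBdev1 while bdevperf keeps background I/O running, and the lines that follow re-read the raid bdev to confirm it stays online in degraded mode (raid1, one of two base bdevs operational). A rough sketch of that check, assuming the RPC socket and bdev names shown in this log; the jq expressions and field names match the JSON dumps printed above:

    # Drop one base bdev from the array while background I/O is in flight.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_remove_base_bdev BaseBdev1

    # Re-read raid_bdev1 and confirm it is degraded but still online.
    tmp=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$tmp") == "online" ]]
    [[ $(jq -r '.raid_level' <<< "$tmp") == "raid1" ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$tmp") == "1" ]]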
00:24:39.323 [2024-07-23 15:18:34.617569] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:39.323 [2024-07-23 15:18:34.617839] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d0000022c0 00:24:39.323 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:39.323 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:39.323 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:39.323 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:39.323 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:39.323 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:39.323 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:39.323 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:39.323 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:39.323 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:39.323 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.323 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.582 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:39.582 "name": "raid_bdev1", 00:24:39.582 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:39.582 "strip_size_kb": 0, 00:24:39.582 "state": "online", 00:24:39.582 "raid_level": "raid1", 00:24:39.582 "superblock": true, 00:24:39.582 "num_base_bdevs": 2, 00:24:39.582 "num_base_bdevs_discovered": 1, 00:24:39.582 "num_base_bdevs_operational": 1, 00:24:39.582 "base_bdevs_list": [ 00:24:39.582 { 00:24:39.582 "name": null, 00:24:39.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.582 "is_configured": false, 00:24:39.582 "data_offset": 2048, 00:24:39.582 "data_size": 63488 00:24:39.582 }, 00:24:39.582 { 00:24:39.582 "name": "BaseBdev2", 00:24:39.582 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:39.582 "is_configured": true, 00:24:39.582 "data_offset": 2048, 00:24:39.582 "data_size": 63488 00:24:39.582 } 00:24:39.582 ] 00:24:39.582 }' 00:24:39.582 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:39.582 15:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:39.841 15:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:40.100 [2024-07-23 15:18:35.375124] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:40.100 [2024-07-23 15:18:35.425614] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002390 00:24:40.100 15:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:24:40.100 [2024-07-23 15:18:35.428084] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:40.358 
[2024-07-23 15:18:35.544135] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:40.358 [2024-07-23 15:18:35.544606] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:40.358 [2024-07-23 15:18:35.772511] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:40.358 [2024-07-23 15:18:35.772804] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:40.925 [2024-07-23 15:18:36.217728] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:40.925 [2024-07-23 15:18:36.218036] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:41.183 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:41.183 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:41.183 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:41.183 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:41.183 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:41.183 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.183 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.183 [2024-07-23 15:18:36.466065] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:41.442 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:41.442 "name": "raid_bdev1", 00:24:41.442 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:41.442 "strip_size_kb": 0, 00:24:41.442 "state": "online", 00:24:41.442 "raid_level": "raid1", 00:24:41.442 "superblock": true, 00:24:41.442 "num_base_bdevs": 2, 00:24:41.442 "num_base_bdevs_discovered": 2, 00:24:41.442 "num_base_bdevs_operational": 2, 00:24:41.442 "process": { 00:24:41.442 "type": "rebuild", 00:24:41.442 "target": "spare", 00:24:41.442 "progress": { 00:24:41.442 "blocks": 14336, 00:24:41.442 "percent": 22 00:24:41.442 } 00:24:41.442 }, 00:24:41.442 "base_bdevs_list": [ 00:24:41.442 { 00:24:41.442 "name": "spare", 00:24:41.442 "uuid": "e16ba91d-004f-55ec-9c04-2c1e644ebadf", 00:24:41.442 "is_configured": true, 00:24:41.442 "data_offset": 2048, 00:24:41.442 "data_size": 63488 00:24:41.442 }, 00:24:41.442 { 00:24:41.442 "name": "BaseBdev2", 00:24:41.442 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:41.442 "is_configured": true, 00:24:41.442 "data_offset": 2048, 00:24:41.442 "data_size": 63488 00:24:41.442 } 00:24:41.442 ] 00:24:41.442 }' 00:24:41.442 [2024-07-23 15:18:36.682121] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:41.442 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:41.442 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:24:41.442 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:41.442 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:41.442 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:41.442 [2024-07-23 15:18:36.851779] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:41.701 [2024-07-23 15:18:36.918738] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:41.701 [2024-07-23 15:18:36.926529] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.701 [2024-07-23 15:18:36.926572] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:41.701 [2024-07-23 15:18:36.926588] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:41.701 [2024-07-23 15:18:36.944422] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d0000022c0 00:24:41.701 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:41.701 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:41.701 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:41.701 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:41.701 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:41.701 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:41.701 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:41.701 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:41.701 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:41.701 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:41.701 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.701 15:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.965 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:41.965 "name": "raid_bdev1", 00:24:41.965 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:41.965 "strip_size_kb": 0, 00:24:41.965 "state": "online", 00:24:41.965 "raid_level": "raid1", 00:24:41.965 "superblock": true, 00:24:41.965 "num_base_bdevs": 2, 00:24:41.966 "num_base_bdevs_discovered": 1, 00:24:41.966 "num_base_bdevs_operational": 1, 00:24:41.966 "base_bdevs_list": [ 00:24:41.966 { 00:24:41.966 "name": null, 00:24:41.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.966 "is_configured": false, 00:24:41.966 "data_offset": 2048, 00:24:41.966 "data_size": 63488 00:24:41.966 }, 00:24:41.966 { 00:24:41.966 "name": "BaseBdev2", 00:24:41.966 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:41.966 "is_configured": true, 00:24:41.966 "data_offset": 2048, 
00:24:41.966 "data_size": 63488 00:24:41.966 } 00:24:41.966 ] 00:24:41.966 }' 00:24:41.966 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:41.966 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:42.232 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:42.232 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:42.232 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:42.232 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:42.232 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:42.232 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.232 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.493 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:42.493 "name": "raid_bdev1", 00:24:42.493 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:42.493 "strip_size_kb": 0, 00:24:42.493 "state": "online", 00:24:42.493 "raid_level": "raid1", 00:24:42.493 "superblock": true, 00:24:42.493 "num_base_bdevs": 2, 00:24:42.493 "num_base_bdevs_discovered": 1, 00:24:42.493 "num_base_bdevs_operational": 1, 00:24:42.493 "base_bdevs_list": [ 00:24:42.493 { 00:24:42.493 "name": null, 00:24:42.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.493 "is_configured": false, 00:24:42.493 "data_offset": 2048, 00:24:42.493 "data_size": 63488 00:24:42.493 }, 00:24:42.493 { 00:24:42.493 "name": "BaseBdev2", 00:24:42.493 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:42.493 "is_configured": true, 00:24:42.493 "data_offset": 2048, 00:24:42.493 "data_size": 63488 00:24:42.493 } 00:24:42.493 ] 00:24:42.493 }' 00:24:42.493 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:42.493 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:42.493 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:42.493 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:42.493 15:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:42.751 [2024-07-23 15:18:38.046228] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:42.751 [2024-07-23 15:18:38.078891] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002460 00:24:42.751 [2024-07-23 15:18:38.081053] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:42.751 15:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:43.009 [2024-07-23 15:18:38.219068] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:43.268 [2024-07-23 15:18:38.444388] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 
offset_begin: 0 offset_end: 6144 00:24:43.268 [2024-07-23 15:18:38.444666] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:43.526 [2024-07-23 15:18:38.799400] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:43.785 [2024-07-23 15:18:39.013628] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:43.785 [2024-07-23 15:18:39.013957] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:43.785 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.785 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:43.785 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:43.785 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:43.785 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:43.785 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.785 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:44.044 "name": "raid_bdev1", 00:24:44.044 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:44.044 "strip_size_kb": 0, 00:24:44.044 "state": "online", 00:24:44.044 "raid_level": "raid1", 00:24:44.044 "superblock": true, 00:24:44.044 "num_base_bdevs": 2, 00:24:44.044 "num_base_bdevs_discovered": 2, 00:24:44.044 "num_base_bdevs_operational": 2, 00:24:44.044 "process": { 00:24:44.044 "type": "rebuild", 00:24:44.044 "target": "spare", 00:24:44.044 "progress": { 00:24:44.044 "blocks": 12288, 00:24:44.044 "percent": 19 00:24:44.044 } 00:24:44.044 }, 00:24:44.044 "base_bdevs_list": [ 00:24:44.044 { 00:24:44.044 "name": "spare", 00:24:44.044 "uuid": "e16ba91d-004f-55ec-9c04-2c1e644ebadf", 00:24:44.044 "is_configured": true, 00:24:44.044 "data_offset": 2048, 00:24:44.044 "data_size": 63488 00:24:44.044 }, 00:24:44.044 { 00:24:44.044 "name": "BaseBdev2", 00:24:44.044 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:44.044 "is_configured": true, 00:24:44.044 "data_offset": 2048, 00:24:44.044 "data_size": 63488 00:24:44.044 } 00:24:44.044 ] 00:24:44.044 }' 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:24:44.044 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:24:44.044 15:18:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=639 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.044 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.044 [2024-07-23 15:18:39.458343] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:44.044 [2024-07-23 15:18:39.458931] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:44.303 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:44.303 "name": "raid_bdev1", 00:24:44.303 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:44.303 "strip_size_kb": 0, 00:24:44.303 "state": "online", 00:24:44.303 "raid_level": "raid1", 00:24:44.303 "superblock": true, 00:24:44.303 "num_base_bdevs": 2, 00:24:44.303 "num_base_bdevs_discovered": 2, 00:24:44.303 "num_base_bdevs_operational": 2, 00:24:44.303 "process": { 00:24:44.303 "type": "rebuild", 00:24:44.303 "target": "spare", 00:24:44.303 "progress": { 00:24:44.303 "blocks": 16384, 00:24:44.303 "percent": 25 00:24:44.303 } 00:24:44.303 }, 00:24:44.303 "base_bdevs_list": [ 00:24:44.303 { 00:24:44.303 "name": "spare", 00:24:44.303 "uuid": "e16ba91d-004f-55ec-9c04-2c1e644ebadf", 00:24:44.303 "is_configured": true, 00:24:44.303 "data_offset": 2048, 00:24:44.303 "data_size": 63488 00:24:44.303 }, 00:24:44.303 { 00:24:44.303 "name": "BaseBdev2", 00:24:44.303 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:44.303 "is_configured": true, 00:24:44.303 "data_offset": 2048, 00:24:44.303 "data_size": 63488 00:24:44.303 } 00:24:44.303 ] 00:24:44.303 }' 00:24:44.303 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:44.303 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:44.303 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:44.303 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:44.303 15:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:44.562 [2024-07-23 15:18:39.808121] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:44.562 [2024-07-23 15:18:39.808534] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:44.821 [2024-07-23 15:18:40.023125] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:44.821 [2024-07-23 15:18:40.023428] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:45.080 [2024-07-23 15:18:40.353105] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:45.080 [2024-07-23 15:18:40.485913] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:45.339 15:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:45.339 15:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:45.339 15:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:45.339 15:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:45.339 15:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:45.339 15:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:45.339 15:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.339 15:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.598 15:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:45.598 "name": "raid_bdev1", 00:24:45.598 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:45.598 "strip_size_kb": 0, 00:24:45.598 "state": "online", 00:24:45.598 "raid_level": "raid1", 00:24:45.598 "superblock": true, 00:24:45.598 "num_base_bdevs": 2, 00:24:45.598 "num_base_bdevs_discovered": 2, 00:24:45.598 "num_base_bdevs_operational": 2, 00:24:45.598 "process": { 00:24:45.598 "type": "rebuild", 00:24:45.598 "target": "spare", 00:24:45.598 "progress": { 00:24:45.598 "blocks": 30720, 00:24:45.598 "percent": 48 00:24:45.598 } 00:24:45.598 }, 00:24:45.598 "base_bdevs_list": [ 00:24:45.598 { 00:24:45.598 "name": "spare", 00:24:45.598 "uuid": "e16ba91d-004f-55ec-9c04-2c1e644ebadf", 00:24:45.598 "is_configured": true, 00:24:45.598 "data_offset": 2048, 00:24:45.598 "data_size": 63488 00:24:45.598 }, 00:24:45.598 { 00:24:45.598 "name": "BaseBdev2", 00:24:45.598 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:45.598 "is_configured": true, 00:24:45.598 "data_offset": 2048, 00:24:45.598 "data_size": 63488 00:24:45.598 } 00:24:45.598 ] 00:24:45.598 }' 00:24:45.598 15:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:45.598 15:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:45.598 15:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:45.598 15:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ 
spare == \s\p\a\r\e ]] 00:24:45.598 15:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:45.857 [2024-07-23 15:18:41.159850] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:24:46.116 [2024-07-23 15:18:41.367695] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:46.116 [2024-07-23 15:18:41.367934] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:46.684 15:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:46.684 15:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:46.684 15:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:46.684 15:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:46.684 15:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:46.684 15:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:46.684 15:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.684 15:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.684 15:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:46.684 "name": "raid_bdev1", 00:24:46.684 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:46.684 "strip_size_kb": 0, 00:24:46.684 "state": "online", 00:24:46.684 "raid_level": "raid1", 00:24:46.684 "superblock": true, 00:24:46.684 "num_base_bdevs": 2, 00:24:46.684 "num_base_bdevs_discovered": 2, 00:24:46.684 "num_base_bdevs_operational": 2, 00:24:46.684 "process": { 00:24:46.684 "type": "rebuild", 00:24:46.684 "target": "spare", 00:24:46.684 "progress": { 00:24:46.684 "blocks": 51200, 00:24:46.684 "percent": 80 00:24:46.684 } 00:24:46.684 }, 00:24:46.684 "base_bdevs_list": [ 00:24:46.684 { 00:24:46.684 "name": "spare", 00:24:46.684 "uuid": "e16ba91d-004f-55ec-9c04-2c1e644ebadf", 00:24:46.684 "is_configured": true, 00:24:46.684 "data_offset": 2048, 00:24:46.684 "data_size": 63488 00:24:46.684 }, 00:24:46.684 { 00:24:46.684 "name": "BaseBdev2", 00:24:46.684 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:46.684 "is_configured": true, 00:24:46.684 "data_offset": 2048, 00:24:46.684 "data_size": 63488 00:24:46.684 } 00:24:46.684 ] 00:24:46.684 }' 00:24:46.684 15:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:46.684 15:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:46.684 15:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:46.684 15:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:46.684 15:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:24:46.684 [2024-07-23 15:18:42.051281] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:24:47.252 [2024-07-23 
15:18:42.381703] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:24:47.511 [2024-07-23 15:18:42.704168] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:47.512 [2024-07-23 15:18:42.809957] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:47.512 [2024-07-23 15:18:42.812288] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:47.771 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:24:47.771 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:47.771 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:47.771 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:47.771 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:47.771 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:47.771 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.771 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.030 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:48.030 "name": "raid_bdev1", 00:24:48.030 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:48.030 "strip_size_kb": 0, 00:24:48.030 "state": "online", 00:24:48.030 "raid_level": "raid1", 00:24:48.030 "superblock": true, 00:24:48.030 "num_base_bdevs": 2, 00:24:48.030 "num_base_bdevs_discovered": 2, 00:24:48.030 "num_base_bdevs_operational": 2, 00:24:48.030 "base_bdevs_list": [ 00:24:48.030 { 00:24:48.030 "name": "spare", 00:24:48.030 "uuid": "e16ba91d-004f-55ec-9c04-2c1e644ebadf", 00:24:48.030 "is_configured": true, 00:24:48.030 "data_offset": 2048, 00:24:48.030 "data_size": 63488 00:24:48.030 }, 00:24:48.030 { 00:24:48.030 "name": "BaseBdev2", 00:24:48.030 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:48.030 "is_configured": true, 00:24:48.030 "data_offset": 2048, 00:24:48.030 "data_size": 63488 00:24:48.030 } 00:24:48.030 ] 00:24:48.030 }' 00:24:48.030 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:48.030 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:48.030 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:48.030 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:48.030 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:24:48.030 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:48.030 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:48.030 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:48.030 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:48.030 15:18:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:48.030 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.030 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.288 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:48.288 "name": "raid_bdev1", 00:24:48.288 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:48.288 "strip_size_kb": 0, 00:24:48.288 "state": "online", 00:24:48.288 "raid_level": "raid1", 00:24:48.288 "superblock": true, 00:24:48.288 "num_base_bdevs": 2, 00:24:48.288 "num_base_bdevs_discovered": 2, 00:24:48.288 "num_base_bdevs_operational": 2, 00:24:48.288 "base_bdevs_list": [ 00:24:48.288 { 00:24:48.288 "name": "spare", 00:24:48.288 "uuid": "e16ba91d-004f-55ec-9c04-2c1e644ebadf", 00:24:48.288 "is_configured": true, 00:24:48.288 "data_offset": 2048, 00:24:48.288 "data_size": 63488 00:24:48.288 }, 00:24:48.288 { 00:24:48.288 "name": "BaseBdev2", 00:24:48.288 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:48.288 "is_configured": true, 00:24:48.288 "data_offset": 2048, 00:24:48.288 "data_size": 63488 00:24:48.288 } 00:24:48.288 ] 00:24:48.288 }' 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.289 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.547 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:48.547 "name": "raid_bdev1", 00:24:48.547 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:48.547 "strip_size_kb": 0, 00:24:48.547 "state": "online", 00:24:48.547 
"raid_level": "raid1", 00:24:48.547 "superblock": true, 00:24:48.547 "num_base_bdevs": 2, 00:24:48.547 "num_base_bdevs_discovered": 2, 00:24:48.547 "num_base_bdevs_operational": 2, 00:24:48.547 "base_bdevs_list": [ 00:24:48.547 { 00:24:48.547 "name": "spare", 00:24:48.547 "uuid": "e16ba91d-004f-55ec-9c04-2c1e644ebadf", 00:24:48.547 "is_configured": true, 00:24:48.547 "data_offset": 2048, 00:24:48.547 "data_size": 63488 00:24:48.547 }, 00:24:48.547 { 00:24:48.547 "name": "BaseBdev2", 00:24:48.547 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:48.547 "is_configured": true, 00:24:48.547 "data_offset": 2048, 00:24:48.547 "data_size": 63488 00:24:48.547 } 00:24:48.547 ] 00:24:48.547 }' 00:24:48.547 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:48.547 15:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:48.806 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:49.063 [2024-07-23 15:18:44.266993] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:49.063 [2024-07-23 15:18:44.267238] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:49.063 00:24:49.063 Latency(us) 00:24:49.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.063 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:49.063 raid_bdev1 : 9.79 114.99 344.98 0.00 0.00 11482.30 278.92 113845.39 00:24:49.063 =================================================================================================================== 00:24:49.063 Total : 114.99 344.98 0.00 0.00 11482.30 278.92 113845.39 00:24:49.063 [2024-07-23 15:18:44.326460] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:49.063 [2024-07-23 15:18:44.326505] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:49.063 0 00:24:49.063 [2024-07-23 15:18:44.326591] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:49.063 [2024-07-23 15:18:44.326608] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:24:49.063 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:24:49.063 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.322 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:24:49.322 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:24:49.322 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:49.322 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:49.322 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:49.322 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:49.322 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:49.322 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:24:49.322 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:49.322 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:49.322 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:49.322 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.322 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:49.624 /dev/nbd0 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.624 1+0 records in 00:24:49.624 1+0 records out 00:24:49.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224091 s, 18.3 MB/s 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
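[Editor's aside, not part of the captured output: the nbd_start_disks / cmp / nbd_stop_disks calls above and just below implement a simple data-integrity check on the mirror. A minimal bash sketch of that check, reconstructed only from RPC calls that appear verbatim in this log (paths and bdev names as used by this run):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Export the rebuilt member ("spare") and the surviving member (BaseBdev2) as NBD devices.
    $rpc nbd_start_disk spare /dev/nbd0
    $rpc nbd_start_disk BaseBdev2 /dev/nbd1

    # Compare the data regions; -i 1048576 skips the first 1 MiB of both devices,
    # which lines up with the 2048-block (512 B) data_offset reported in the JSON above,
    # so the per-bdev superblock area is excluded from the comparison.
    cmp -i 1048576 /dev/nbd0 /dev/nbd1 && echo "mirrored data is identical"

    # Tear the NBD exports back down.
    $rpc nbd_stop_disk /dev/nbd1
    $rpc nbd_stop_disk /dev/nbd0
]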
00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.624 15:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:24:49.624 /dev/nbd1 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.624 1+0 records in 00:24:49.624 1+0 records out 00:24:49.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224186 s, 18.3 MB/s 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:49.624 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:49.883 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:49.883 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:49.883 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:49.883 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:49.883 15:18:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:49.883 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:49.883 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.179 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:50.437 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:50.437 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:50.437 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:50.437 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.437 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.437 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:50.437 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:50.437 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.437 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:24:50.437 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:50.695 15:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:50.695 [2024-07-23 15:18:46.034756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:50.696 [2024-07-23 15:18:46.034853] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.696 [2024-07-23 15:18:46.034886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:24:50.696 [2024-07-23 15:18:46.034901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.696 [2024-07-23 15:18:46.037334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.696 [2024-07-23 15:18:46.037377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:50.696 [2024-07-23 15:18:46.037460] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:50.696 [2024-07-23 15:18:46.037523] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:50.696 [2024-07-23 15:18:46.037650] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:50.696 spare 00:24:50.696 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:50.696 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:50.696 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:50.696 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:50.696 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:50.696 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:50.696 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:50.696 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:50.696 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:50.696 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:50.696 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.696 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.955 [2024-07-23 15:18:46.137769] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:24:50.955 [2024-07-23 15:18:46.137826] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:50.955 [2024-07-23 15:18:46.137987] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000027310 00:24:50.955 [2024-07-23 15:18:46.138409] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:24:50.955 [2024-07-23 15:18:46.138438] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:24:50.955 [2024-07-23 15:18:46.138557] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.955 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:50.955 "name": "raid_bdev1", 00:24:50.955 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:50.955 "strip_size_kb": 0, 00:24:50.955 "state": "online", 00:24:50.955 "raid_level": "raid1", 00:24:50.955 "superblock": true, 00:24:50.955 "num_base_bdevs": 2, 00:24:50.955 
"num_base_bdevs_discovered": 2, 00:24:50.955 "num_base_bdevs_operational": 2, 00:24:50.955 "base_bdevs_list": [ 00:24:50.955 { 00:24:50.955 "name": "spare", 00:24:50.955 "uuid": "e16ba91d-004f-55ec-9c04-2c1e644ebadf", 00:24:50.955 "is_configured": true, 00:24:50.955 "data_offset": 2048, 00:24:50.955 "data_size": 63488 00:24:50.955 }, 00:24:50.955 { 00:24:50.955 "name": "BaseBdev2", 00:24:50.955 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:50.955 "is_configured": true, 00:24:50.955 "data_offset": 2048, 00:24:50.955 "data_size": 63488 00:24:50.955 } 00:24:50.955 ] 00:24:50.955 }' 00:24:50.955 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:50.955 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:51.214 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:51.214 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:51.214 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:51.214 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:51.214 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:51.214 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.214 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.472 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:51.472 "name": "raid_bdev1", 00:24:51.472 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:51.472 "strip_size_kb": 0, 00:24:51.472 "state": "online", 00:24:51.472 "raid_level": "raid1", 00:24:51.473 "superblock": true, 00:24:51.473 "num_base_bdevs": 2, 00:24:51.473 "num_base_bdevs_discovered": 2, 00:24:51.473 "num_base_bdevs_operational": 2, 00:24:51.473 "base_bdevs_list": [ 00:24:51.473 { 00:24:51.473 "name": "spare", 00:24:51.473 "uuid": "e16ba91d-004f-55ec-9c04-2c1e644ebadf", 00:24:51.473 "is_configured": true, 00:24:51.473 "data_offset": 2048, 00:24:51.473 "data_size": 63488 00:24:51.473 }, 00:24:51.473 { 00:24:51.473 "name": "BaseBdev2", 00:24:51.473 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:51.473 "is_configured": true, 00:24:51.473 "data_offset": 2048, 00:24:51.473 "data_size": 63488 00:24:51.473 } 00:24:51.473 ] 00:24:51.473 }' 00:24:51.473 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:51.473 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:51.473 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:51.473 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:51.473 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.473 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:51.733 15:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:24:51.733 15:18:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:51.733 [2024-07-23 15:18:47.103125] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:51.733 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:51.733 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:51.733 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:51.733 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:51.733 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:51.733 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:51.733 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:51.733 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:51.733 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:51.733 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:51.733 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.733 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.301 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:52.301 "name": "raid_bdev1", 00:24:52.301 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:52.301 "strip_size_kb": 0, 00:24:52.301 "state": "online", 00:24:52.301 "raid_level": "raid1", 00:24:52.301 "superblock": true, 00:24:52.301 "num_base_bdevs": 2, 00:24:52.301 "num_base_bdevs_discovered": 1, 00:24:52.301 "num_base_bdevs_operational": 1, 00:24:52.301 "base_bdevs_list": [ 00:24:52.301 { 00:24:52.301 "name": null, 00:24:52.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.301 "is_configured": false, 00:24:52.301 "data_offset": 2048, 00:24:52.301 "data_size": 63488 00:24:52.301 }, 00:24:52.301 { 00:24:52.301 "name": "BaseBdev2", 00:24:52.301 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:52.301 "is_configured": true, 00:24:52.301 "data_offset": 2048, 00:24:52.301 "data_size": 63488 00:24:52.301 } 00:24:52.301 ] 00:24:52.301 }' 00:24:52.301 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:52.301 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.301 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:52.559 [2024-07-23 15:18:47.943396] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:52.559 [2024-07-23 15:18:47.943612] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:52.559 [2024-07-23 15:18:47.943637] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
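[Editor's aside, not part of the captured output: the rebuild that was just restarted is monitored by repeatedly calling bdev_raid_get_bdevs and filtering the result with jq under a timeout, exactly as the bdev_raid.sh trace lines show. A condensed sketch of that polling loop, assembled from commands visible in this log (timeout value taken from the trace above):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=639   # value the test computed earlier in this run

    while (( SECONDS < timeout )); do
      info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      # The "process" object only exists while a rebuild is running, hence the // "none" defaults.
      ptype=$(jq -r '.process.type // "none"' <<< "$info")
      target=$(jq -r '.process.target // "none"' <<< "$info")
      [[ $ptype == rebuild && $target == spare ]] || break
      jq -r '.process.progress | "rebuilt \(.blocks) blocks (\(.percent)%)"' <<< "$info"
      sleep 1
    done
]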
00:24:52.559 [2024-07-23 15:18:47.943683] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:52.559 [2024-07-23 15:18:47.948425] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000273e0 00:24:52.559 [2024-07-23 15:18:47.950720] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:52.559 15:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:24:53.935 15:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:53.935 15:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:53.935 15:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:53.935 15:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:53.935 15:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:53.935 15:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.935 15:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.935 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:53.935 "name": "raid_bdev1", 00:24:53.935 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:53.935 "strip_size_kb": 0, 00:24:53.935 "state": "online", 00:24:53.935 "raid_level": "raid1", 00:24:53.935 "superblock": true, 00:24:53.935 "num_base_bdevs": 2, 00:24:53.935 "num_base_bdevs_discovered": 2, 00:24:53.935 "num_base_bdevs_operational": 2, 00:24:53.935 "process": { 00:24:53.935 "type": "rebuild", 00:24:53.935 "target": "spare", 00:24:53.935 "progress": { 00:24:53.935 "blocks": 24576, 00:24:53.935 "percent": 38 00:24:53.935 } 00:24:53.935 }, 00:24:53.935 "base_bdevs_list": [ 00:24:53.935 { 00:24:53.935 "name": "spare", 00:24:53.935 "uuid": "e16ba91d-004f-55ec-9c04-2c1e644ebadf", 00:24:53.935 "is_configured": true, 00:24:53.935 "data_offset": 2048, 00:24:53.935 "data_size": 63488 00:24:53.935 }, 00:24:53.935 { 00:24:53.935 "name": "BaseBdev2", 00:24:53.935 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:53.935 "is_configured": true, 00:24:53.935 "data_offset": 2048, 00:24:53.935 "data_size": 63488 00:24:53.935 } 00:24:53.935 ] 00:24:53.935 }' 00:24:53.935 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:53.935 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:53.935 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:53.935 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:53.935 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:54.194 [2024-07-23 15:18:49.440969] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:54.194 [2024-07-23 15:18:49.459624] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:54.194 [2024-07-23 15:18:49.459691] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:24:54.194 [2024-07-23 15:18:49.459712] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:54.194 [2024-07-23 15:18:49.459721] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:54.194 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:54.194 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:54.194 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:54.194 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:54.194 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:54.194 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:54.194 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:54.194 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:54.194 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:54.194 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:54.194 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.194 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.452 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:54.452 "name": "raid_bdev1", 00:24:54.452 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:54.452 "strip_size_kb": 0, 00:24:54.452 "state": "online", 00:24:54.452 "raid_level": "raid1", 00:24:54.452 "superblock": true, 00:24:54.452 "num_base_bdevs": 2, 00:24:54.452 "num_base_bdevs_discovered": 1, 00:24:54.452 "num_base_bdevs_operational": 1, 00:24:54.452 "base_bdevs_list": [ 00:24:54.452 { 00:24:54.452 "name": null, 00:24:54.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.452 "is_configured": false, 00:24:54.452 "data_offset": 2048, 00:24:54.452 "data_size": 63488 00:24:54.452 }, 00:24:54.452 { 00:24:54.452 "name": "BaseBdev2", 00:24:54.452 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:54.452 "is_configured": true, 00:24:54.452 "data_offset": 2048, 00:24:54.452 "data_size": 63488 00:24:54.452 } 00:24:54.452 ] 00:24:54.452 }' 00:24:54.452 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:54.452 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.711 15:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:54.969 [2024-07-23 15:18:50.153110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:54.969 [2024-07-23 15:18:50.153194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.969 [2024-07-23 15:18:50.153235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:24:54.969 [2024-07-23 15:18:50.153257] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.969 [2024-07-23 15:18:50.153745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.969 [2024-07-23 15:18:50.153778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:54.969 [2024-07-23 15:18:50.153890] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:54.969 [2024-07-23 15:18:50.153905] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:54.969 [2024-07-23 15:18:50.153921] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:54.969 [2024-07-23 15:18:50.153945] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:54.969 [2024-07-23 15:18:50.158622] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000274b0 00:24:54.969 spare 00:24:54.969 [2024-07-23 15:18:50.160838] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:54.969 15:18:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:24:55.904 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:55.904 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:55.904 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:55.904 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:55.904 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:55.904 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.904 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.163 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:56.163 "name": "raid_bdev1", 00:24:56.163 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:56.163 "strip_size_kb": 0, 00:24:56.163 "state": "online", 00:24:56.163 "raid_level": "raid1", 00:24:56.163 "superblock": true, 00:24:56.163 "num_base_bdevs": 2, 00:24:56.163 "num_base_bdevs_discovered": 2, 00:24:56.163 "num_base_bdevs_operational": 2, 00:24:56.163 "process": { 00:24:56.163 "type": "rebuild", 00:24:56.163 "target": "spare", 00:24:56.163 "progress": { 00:24:56.163 "blocks": 24576, 00:24:56.163 "percent": 38 00:24:56.163 } 00:24:56.163 }, 00:24:56.163 "base_bdevs_list": [ 00:24:56.163 { 00:24:56.163 "name": "spare", 00:24:56.163 "uuid": "e16ba91d-004f-55ec-9c04-2c1e644ebadf", 00:24:56.163 "is_configured": true, 00:24:56.163 "data_offset": 2048, 00:24:56.163 "data_size": 63488 00:24:56.163 }, 00:24:56.163 { 00:24:56.163 "name": "BaseBdev2", 00:24:56.163 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:56.163 "is_configured": true, 00:24:56.163 "data_offset": 2048, 00:24:56.163 "data_size": 63488 00:24:56.163 } 00:24:56.163 ] 00:24:56.163 }' 00:24:56.163 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:56.163 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
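[Editor's aside, not part of the captured output: at this point the test pulls the "spare" member out from under the in-progress rebuild by deleting the passthru bdev that backs it, then checks that raid_bdev1 stays online with a single discovered base bdev. A minimal sketch of that step, using only RPCs that appear in this log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Deleting the passthru vbdev removes "spare" from raid_bdev1 mid-rebuild; the raid
    # module then logs the "Finished rebuild ... No such device" warning seen just below.
    $rpc bdev_passthru_delete spare

    # raid1 should survive the removal: still online, one of two base bdevs discovered.
    $rpc bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "raid_bdev1") |
             "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    # expected output: online 1/2
]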
00:24:56.163 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:56.163 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:56.163 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:56.423 [2024-07-23 15:18:51.687093] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:56.424 [2024-07-23 15:18:51.770290] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:56.424 [2024-07-23 15:18:51.770364] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:56.424 [2024-07-23 15:18:51.770380] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:56.424 [2024-07-23 15:18:51.770392] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:56.424 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:56.424 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:56.424 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:56.424 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:56.424 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:56.424 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:56.424 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:56.424 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:56.424 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:56.424 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:56.424 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.424 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.682 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:56.682 "name": "raid_bdev1", 00:24:56.682 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:56.682 "strip_size_kb": 0, 00:24:56.682 "state": "online", 00:24:56.682 "raid_level": "raid1", 00:24:56.682 "superblock": true, 00:24:56.682 "num_base_bdevs": 2, 00:24:56.682 "num_base_bdevs_discovered": 1, 00:24:56.682 "num_base_bdevs_operational": 1, 00:24:56.682 "base_bdevs_list": [ 00:24:56.682 { 00:24:56.682 "name": null, 00:24:56.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.682 "is_configured": false, 00:24:56.682 "data_offset": 2048, 00:24:56.682 "data_size": 63488 00:24:56.682 }, 00:24:56.682 { 00:24:56.682 "name": "BaseBdev2", 00:24:56.682 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:56.682 "is_configured": true, 00:24:56.682 "data_offset": 2048, 00:24:56.682 "data_size": 63488 00:24:56.682 } 00:24:56.682 ] 00:24:56.682 }' 00:24:56.682 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:24:56.682 15:18:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:56.940 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:56.940 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:56.940 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:56.940 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:56.940 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:56.940 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.940 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.199 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:57.199 "name": "raid_bdev1", 00:24:57.199 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:57.199 "strip_size_kb": 0, 00:24:57.199 "state": "online", 00:24:57.199 "raid_level": "raid1", 00:24:57.199 "superblock": true, 00:24:57.199 "num_base_bdevs": 2, 00:24:57.199 "num_base_bdevs_discovered": 1, 00:24:57.199 "num_base_bdevs_operational": 1, 00:24:57.199 "base_bdevs_list": [ 00:24:57.199 { 00:24:57.199 "name": null, 00:24:57.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.199 "is_configured": false, 00:24:57.199 "data_offset": 2048, 00:24:57.199 "data_size": 63488 00:24:57.199 }, 00:24:57.199 { 00:24:57.199 "name": "BaseBdev2", 00:24:57.199 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:57.199 "is_configured": true, 00:24:57.199 "data_offset": 2048, 00:24:57.199 "data_size": 63488 00:24:57.199 } 00:24:57.199 ] 00:24:57.199 }' 00:24:57.199 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:57.199 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:57.199 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:57.199 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:57.199 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:57.458 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:57.716 [2024-07-23 15:18:52.968064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:57.716 [2024-07-23 15:18:52.968148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:57.716 [2024-07-23 15:18:52.968180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:24:57.716 [2024-07-23 15:18:52.968195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:57.716 [2024-07-23 15:18:52.968627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:57.716 [2024-07-23 15:18:52.968660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:24:57.716 [2024-07-23 15:18:52.968740] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:57.716 [2024-07-23 15:18:52.968758] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:57.716 [2024-07-23 15:18:52.968768] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:57.716 BaseBdev1 00:24:57.716 15:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:24:58.650 15:18:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:58.650 15:18:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:58.650 15:18:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:58.650 15:18:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:58.650 15:18:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:58.650 15:18:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:58.650 15:18:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:58.650 15:18:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:58.650 15:18:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:58.650 15:18:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:58.650 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.650 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.910 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:58.910 "name": "raid_bdev1", 00:24:58.910 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:58.910 "strip_size_kb": 0, 00:24:58.910 "state": "online", 00:24:58.910 "raid_level": "raid1", 00:24:58.910 "superblock": true, 00:24:58.910 "num_base_bdevs": 2, 00:24:58.910 "num_base_bdevs_discovered": 1, 00:24:58.910 "num_base_bdevs_operational": 1, 00:24:58.910 "base_bdevs_list": [ 00:24:58.910 { 00:24:58.910 "name": null, 00:24:58.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.910 "is_configured": false, 00:24:58.910 "data_offset": 2048, 00:24:58.910 "data_size": 63488 00:24:58.910 }, 00:24:58.910 { 00:24:58.910 "name": "BaseBdev2", 00:24:58.910 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:58.910 "is_configured": true, 00:24:58.910 "data_offset": 2048, 00:24:58.910 "data_size": 63488 00:24:58.910 } 00:24:58.910 ] 00:24:58.910 }' 00:24:58.910 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:58.910 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:59.168 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:59.168 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:59.168 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:24:59.168 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:59.168 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:59.168 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.168 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.426 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:59.426 "name": "raid_bdev1", 00:24:59.426 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:24:59.426 "strip_size_kb": 0, 00:24:59.426 "state": "online", 00:24:59.426 "raid_level": "raid1", 00:24:59.426 "superblock": true, 00:24:59.426 "num_base_bdevs": 2, 00:24:59.426 "num_base_bdevs_discovered": 1, 00:24:59.426 "num_base_bdevs_operational": 1, 00:24:59.426 "base_bdevs_list": [ 00:24:59.426 { 00:24:59.426 "name": null, 00:24:59.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.426 "is_configured": false, 00:24:59.426 "data_offset": 2048, 00:24:59.426 "data_size": 63488 00:24:59.426 }, 00:24:59.426 { 00:24:59.426 "name": "BaseBdev2", 00:24:59.426 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:24:59.426 "is_configured": true, 00:24:59.426 "data_offset": 2048, 00:24:59.426 "data_size": 63488 00:24:59.426 } 00:24:59.426 ] 00:24:59.426 }' 00:24:59.426 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:59.426 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:59.426 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:59.685 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:59.685 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:59.685 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:24:59.685 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:59.685 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.685 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.685 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.685 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.685 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.685 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.685 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.685 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:59.685 15:18:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:59.685 [2024-07-23 15:18:55.096756] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:59.685 [2024-07-23 15:18:55.096966] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:59.685 [2024-07-23 15:18:55.096990] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:59.685 request: 00:24:59.685 { 00:24:59.685 "base_bdev": "BaseBdev1", 00:24:59.685 "raid_bdev": "raid_bdev1", 00:24:59.685 "method": "bdev_raid_add_base_bdev", 00:24:59.685 "req_id": 1 00:24:59.685 } 00:24:59.685 Got JSON-RPC error response 00:24:59.685 response: 00:24:59.685 { 00:24:59.685 "code": -22, 00:24:59.685 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:59.685 } 00:24:59.943 15:18:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:24:59.943 15:18:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:59.943 15:18:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:59.943 15:18:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:59.943 15:18:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:25:00.876 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:00.876 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:00.876 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:00.876 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:00.876 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:00.876 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:00.876 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:00.877 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:00.877 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:00.877 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:00.877 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.877 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.135 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:01.135 "name": "raid_bdev1", 00:25:01.135 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:25:01.135 "strip_size_kb": 0, 00:25:01.135 "state": "online", 00:25:01.135 "raid_level": "raid1", 00:25:01.135 "superblock": true, 00:25:01.135 "num_base_bdevs": 2, 00:25:01.135 "num_base_bdevs_discovered": 1, 00:25:01.135 "num_base_bdevs_operational": 1, 00:25:01.135 
"base_bdevs_list": [ 00:25:01.135 { 00:25:01.135 "name": null, 00:25:01.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.135 "is_configured": false, 00:25:01.135 "data_offset": 2048, 00:25:01.135 "data_size": 63488 00:25:01.135 }, 00:25:01.135 { 00:25:01.135 "name": "BaseBdev2", 00:25:01.135 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:25:01.135 "is_configured": true, 00:25:01.135 "data_offset": 2048, 00:25:01.135 "data_size": 63488 00:25:01.135 } 00:25:01.135 ] 00:25:01.135 }' 00:25:01.135 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:01.135 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:01.393 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:01.393 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:01.393 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:01.393 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:01.393 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:01.393 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.393 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.651 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:01.651 "name": "raid_bdev1", 00:25:01.652 "uuid": "2d4a930a-5ae4-47ca-a78f-c597ba42f5c1", 00:25:01.652 "strip_size_kb": 0, 00:25:01.652 "state": "online", 00:25:01.652 "raid_level": "raid1", 00:25:01.652 "superblock": true, 00:25:01.652 "num_base_bdevs": 2, 00:25:01.652 "num_base_bdevs_discovered": 1, 00:25:01.652 "num_base_bdevs_operational": 1, 00:25:01.652 "base_bdevs_list": [ 00:25:01.652 { 00:25:01.652 "name": null, 00:25:01.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.652 "is_configured": false, 00:25:01.652 "data_offset": 2048, 00:25:01.652 "data_size": 63488 00:25:01.652 }, 00:25:01.652 { 00:25:01.652 "name": "BaseBdev2", 00:25:01.652 "uuid": "e29db048-dff4-5bee-9fdd-2bb6072857d4", 00:25:01.652 "is_configured": true, 00:25:01.652 "data_offset": 2048, 00:25:01.652 "data_size": 63488 00:25:01.652 } 00:25:01.652 ] 00:25:01.652 }' 00:25:01.652 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:01.652 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:01.652 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:01.652 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:01.652 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 109097 00:25:01.652 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 109097 ']' 00:25:01.652 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 109097 00:25:01.652 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:25:01.652 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:25:01.652 15:18:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109097 00:25:01.652 killing process with pid 109097 00:25:01.652 Received shutdown signal, test time was about 22.492803 seconds 00:25:01.652 00:25:01.652 Latency(us) 00:25:01.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.652 =================================================================================================================== 00:25:01.652 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.652 15:18:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:01.652 15:18:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:01.652 15:18:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109097' 00:25:01.652 15:18:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 109097 00:25:01.652 15:18:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 109097 00:25:01.652 [2024-07-23 15:18:57.024043] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:01.652 [2024-07-23 15:18:57.024177] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:01.652 [2024-07-23 15:18:57.024248] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:01.652 [2024-07-23 15:18:57.024269] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:25:01.652 [2024-07-23 15:18:57.051205] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:01.910 ************************************ 00:25:01.910 END TEST raid_rebuild_test_sb_io 00:25:01.910 ************************************ 00:25:01.910 15:18:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:25:01.910 00:25:01.910 real 0m26.231s 00:25:01.910 user 0m38.197s 00:25:01.910 sys 0m4.078s 00:25:01.910 15:18:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:01.910 15:18:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:02.168 15:18:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:02.168 15:18:57 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:25:02.168 15:18:57 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:25:02.168 15:18:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:25:02.168 15:18:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.168 15:18:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:02.168 ************************************ 00:25:02.168 START TEST raid_rebuild_test 00:25:02.168 ************************************ 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false false true 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:25:02.168 
15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=109879 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 109879 /var/tmp/spdk-raid.sock 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 109879 ']' 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:02.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:02.168 15:18:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.168 [2024-07-23 15:18:57.446364] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:25:02.168 [2024-07-23 15:18:57.446588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109879 ] 00:25:02.168 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:02.168 Zero copy mechanism will not be used. 00:25:02.427 [2024-07-23 15:18:57.598650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.427 [2024-07-23 15:18:57.642889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.427 [2024-07-23 15:18:57.687775] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:02.993 15:18:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.993 15:18:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:25:02.993 15:18:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:02.993 15:18:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:02.993 BaseBdev1_malloc 00:25:02.993 15:18:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:03.251 [2024-07-23 15:18:58.647166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:03.251 [2024-07-23 15:18:58.647261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.251 [2024-07-23 15:18:58.647297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:25:03.251 [2024-07-23 15:18:58.647318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:03.251 [2024-07-23 15:18:58.649914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.251 [2024-07-23 15:18:58.649960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:03.251 BaseBdev1 00:25:03.251 15:18:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:03.251 15:18:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:03.509 BaseBdev2_malloc 00:25:03.509 15:18:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:03.767 [2024-07-23 15:18:59.004708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:03.767 [2024-07-23 15:18:59.004802] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.767 [2024-07-23 15:18:59.004834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:25:03.767 [2024-07-23 15:18:59.004848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:03.767 [2024-07-23 15:18:59.007313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.767 [2024-07-23 15:18:59.007355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:03.767 BaseBdev2 00:25:03.767 15:18:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:03.767 15:18:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:03.767 BaseBdev3_malloc 00:25:04.025 15:18:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:04.025 [2024-07-23 15:18:59.421689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:04.025 [2024-07-23 15:18:59.421785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:04.025 [2024-07-23 15:18:59.421833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:25:04.025 [2024-07-23 15:18:59.421847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:04.025 [2024-07-23 15:18:59.424446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:04.025 [2024-07-23 15:18:59.424487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:04.025 BaseBdev3 00:25:04.025 15:18:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:04.025 15:18:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:04.283 BaseBdev4_malloc 00:25:04.284 15:18:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:04.542 [2024-07-23 15:18:59.855267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:04.542 [2024-07-23 15:18:59.855349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:04.542 [2024-07-23 15:18:59.855381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:25:04.542 [2024-07-23 15:18:59.855394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:04.542 [2024-07-23 15:18:59.857821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:04.542 [2024-07-23 15:18:59.857861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:04.542 BaseBdev4 00:25:04.542 15:18:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:04.799 spare_malloc 00:25:04.799 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:04.799 spare_delay 00:25:05.056 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:05.056 [2024-07-23 15:19:00.397061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:05.056 [2024-07-23 15:19:00.397161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.056 [2024-07-23 15:19:00.397197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:25:05.056 [2024-07-23 15:19:00.397209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.056 [2024-07-23 15:19:00.399690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.056 [2024-07-23 15:19:00.399736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:05.056 spare 00:25:05.056 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:05.314 [2024-07-23 15:19:00.621169] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:05.314 [2024-07-23 15:19:00.623347] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:05.314 [2024-07-23 15:19:00.623418] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:05.314 [2024-07-23 15:19:00.623460] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:05.314 [2024-07-23 15:19:00.623548] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:25:05.314 [2024-07-23 15:19:00.623559] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:05.314 [2024-07-23 15:19:00.623698] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:25:05.314 [2024-07-23 15:19:00.624063] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:25:05.314 [2024-07-23 15:19:00.624093] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:25:05.314 [2024-07-23 15:19:00.624244] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:05.314 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:05.314 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:05.314 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:05.314 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:05.314 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:05.314 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:05.314 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:05.314 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:05.314 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:25:05.314 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:05.314 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.314 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.572 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:05.572 "name": "raid_bdev1", 00:25:05.572 "uuid": "44e0c574-e026-4b87-8d95-120bccfb97fc", 00:25:05.572 "strip_size_kb": 0, 00:25:05.572 "state": "online", 00:25:05.572 "raid_level": "raid1", 00:25:05.572 "superblock": false, 00:25:05.572 "num_base_bdevs": 4, 00:25:05.572 "num_base_bdevs_discovered": 4, 00:25:05.572 "num_base_bdevs_operational": 4, 00:25:05.572 "base_bdevs_list": [ 00:25:05.572 { 00:25:05.572 "name": "BaseBdev1", 00:25:05.572 "uuid": "7764cb97-6791-5562-ae7f-74087f180b15", 00:25:05.572 "is_configured": true, 00:25:05.572 "data_offset": 0, 00:25:05.572 "data_size": 65536 00:25:05.572 }, 00:25:05.572 { 00:25:05.572 "name": "BaseBdev2", 00:25:05.572 "uuid": "ec83bdf7-9954-5730-b816-e107b0d32473", 00:25:05.572 "is_configured": true, 00:25:05.572 "data_offset": 0, 00:25:05.572 "data_size": 65536 00:25:05.572 }, 00:25:05.572 { 00:25:05.572 "name": "BaseBdev3", 00:25:05.572 "uuid": "f02c44e3-7498-5ad1-97f1-614cc9b81f32", 00:25:05.572 "is_configured": true, 00:25:05.572 "data_offset": 0, 00:25:05.572 "data_size": 65536 00:25:05.572 }, 00:25:05.572 { 00:25:05.572 "name": "BaseBdev4", 00:25:05.572 "uuid": "e253e567-e010-5044-9058-39bc0ef048b7", 00:25:05.572 "is_configured": true, 00:25:05.572 "data_offset": 0, 00:25:05.572 "data_size": 65536 00:25:05.572 } 00:25:05.572 ] 00:25:05.572 }' 00:25:05.572 15:19:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:05.572 15:19:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.830 15:19:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:05.830 15:19:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:25:06.088 [2024-07-23 15:19:01.321610] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:06.088 15:19:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:25:06.088 15:19:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:06.089 15:19:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:06.347 [2024-07-23 15:19:01.685439] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002390 00:25:06.347 /dev/nbd0 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:06.347 1+0 records in 00:25:06.347 1+0 records out 00:25:06.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181418 s, 22.6 MB/s 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:06.347 15:19:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:25:06.348 15:19:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:06.348 15:19:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:06.348 15:19:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:25:06.348 15:19:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:06.348 15:19:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:06.348 15:19:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:25:06.348 15:19:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:25:06.348 15:19:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:25:12.940 65536+0 records in 00:25:12.940 65536+0 records out 00:25:12.940 33554432 bytes (34 MB, 32 MiB) copied, 5.9988 s, 5.6 MB/s 00:25:12.940 
15:19:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:12.940 15:19:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:12.940 15:19:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:12.940 15:19:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:12.940 15:19:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:25:12.940 15:19:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:12.940 15:19:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:12.940 [2024-07-23 15:19:07.962928] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.940 15:19:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:12.940 15:19:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:12.940 15:19:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:12.940 15:19:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:12.940 15:19:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:12.940 15:19:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:12.940 15:19:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:12.940 15:19:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:12.941 15:19:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:12.941 [2024-07-23 15:19:08.187370] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:12.941 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:12.941 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:12.941 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:12.941 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:12.941 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:12.941 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:12.941 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:12.941 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:12.941 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:12.941 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:12.941 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.941 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.199 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:13.199 "name": "raid_bdev1", 00:25:13.199 "uuid": "44e0c574-e026-4b87-8d95-120bccfb97fc", 
00:25:13.199 "strip_size_kb": 0, 00:25:13.199 "state": "online", 00:25:13.199 "raid_level": "raid1", 00:25:13.199 "superblock": false, 00:25:13.199 "num_base_bdevs": 4, 00:25:13.199 "num_base_bdevs_discovered": 3, 00:25:13.199 "num_base_bdevs_operational": 3, 00:25:13.199 "base_bdevs_list": [ 00:25:13.199 { 00:25:13.199 "name": null, 00:25:13.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.199 "is_configured": false, 00:25:13.199 "data_offset": 0, 00:25:13.199 "data_size": 65536 00:25:13.199 }, 00:25:13.199 { 00:25:13.199 "name": "BaseBdev2", 00:25:13.199 "uuid": "ec83bdf7-9954-5730-b816-e107b0d32473", 00:25:13.199 "is_configured": true, 00:25:13.199 "data_offset": 0, 00:25:13.199 "data_size": 65536 00:25:13.199 }, 00:25:13.199 { 00:25:13.199 "name": "BaseBdev3", 00:25:13.199 "uuid": "f02c44e3-7498-5ad1-97f1-614cc9b81f32", 00:25:13.199 "is_configured": true, 00:25:13.199 "data_offset": 0, 00:25:13.199 "data_size": 65536 00:25:13.199 }, 00:25:13.199 { 00:25:13.199 "name": "BaseBdev4", 00:25:13.199 "uuid": "e253e567-e010-5044-9058-39bc0ef048b7", 00:25:13.199 "is_configured": true, 00:25:13.199 "data_offset": 0, 00:25:13.199 "data_size": 65536 00:25:13.199 } 00:25:13.199 ] 00:25:13.199 }' 00:25:13.199 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:13.199 15:19:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.456 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:13.456 [2024-07-23 15:19:08.815520] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:13.456 [2024-07-23 15:19:08.819127] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d05fb0 00:25:13.456 [2024-07-23 15:19:08.821340] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:13.456 15:19:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:25:14.824 15:19:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:14.824 15:19:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:14.824 15:19:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:14.824 15:19:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:14.824 15:19:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:14.824 15:19:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.824 15:19:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.824 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:14.824 "name": "raid_bdev1", 00:25:14.824 "uuid": "44e0c574-e026-4b87-8d95-120bccfb97fc", 00:25:14.824 "strip_size_kb": 0, 00:25:14.824 "state": "online", 00:25:14.824 "raid_level": "raid1", 00:25:14.824 "superblock": false, 00:25:14.824 "num_base_bdevs": 4, 00:25:14.824 "num_base_bdevs_discovered": 4, 00:25:14.824 "num_base_bdevs_operational": 4, 00:25:14.824 "process": { 00:25:14.824 "type": "rebuild", 00:25:14.824 "target": "spare", 00:25:14.824 "progress": { 00:25:14.824 "blocks": 24576, 00:25:14.824 "percent": 37 
00:25:14.824 } 00:25:14.824 }, 00:25:14.824 "base_bdevs_list": [ 00:25:14.824 { 00:25:14.824 "name": "spare", 00:25:14.824 "uuid": "261b8ec8-52b9-5872-a3c5-069593a55ddc", 00:25:14.824 "is_configured": true, 00:25:14.824 "data_offset": 0, 00:25:14.824 "data_size": 65536 00:25:14.824 }, 00:25:14.824 { 00:25:14.824 "name": "BaseBdev2", 00:25:14.824 "uuid": "ec83bdf7-9954-5730-b816-e107b0d32473", 00:25:14.824 "is_configured": true, 00:25:14.824 "data_offset": 0, 00:25:14.824 "data_size": 65536 00:25:14.824 }, 00:25:14.824 { 00:25:14.824 "name": "BaseBdev3", 00:25:14.824 "uuid": "f02c44e3-7498-5ad1-97f1-614cc9b81f32", 00:25:14.824 "is_configured": true, 00:25:14.824 "data_offset": 0, 00:25:14.824 "data_size": 65536 00:25:14.824 }, 00:25:14.824 { 00:25:14.824 "name": "BaseBdev4", 00:25:14.824 "uuid": "e253e567-e010-5044-9058-39bc0ef048b7", 00:25:14.824 "is_configured": true, 00:25:14.824 "data_offset": 0, 00:25:14.824 "data_size": 65536 00:25:14.824 } 00:25:14.824 ] 00:25:14.824 }' 00:25:14.824 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:14.824 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:14.824 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:14.824 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:14.824 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:15.081 [2024-07-23 15:19:10.342688] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:15.082 [2024-07-23 15:19:10.431989] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:15.082 [2024-07-23 15:19:10.432069] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:15.082 [2024-07-23 15:19:10.432090] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:15.082 [2024-07-23 15:19:10.432100] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:15.082 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:15.082 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:15.082 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:15.082 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:15.082 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:15.082 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:15.082 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:15.082 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:15.082 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:15.082 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:15.082 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.082 15:19:10 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.338 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:15.338 "name": "raid_bdev1", 00:25:15.338 "uuid": "44e0c574-e026-4b87-8d95-120bccfb97fc", 00:25:15.338 "strip_size_kb": 0, 00:25:15.338 "state": "online", 00:25:15.338 "raid_level": "raid1", 00:25:15.338 "superblock": false, 00:25:15.338 "num_base_bdevs": 4, 00:25:15.338 "num_base_bdevs_discovered": 3, 00:25:15.338 "num_base_bdevs_operational": 3, 00:25:15.338 "base_bdevs_list": [ 00:25:15.338 { 00:25:15.338 "name": null, 00:25:15.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.338 "is_configured": false, 00:25:15.338 "data_offset": 0, 00:25:15.338 "data_size": 65536 00:25:15.338 }, 00:25:15.338 { 00:25:15.338 "name": "BaseBdev2", 00:25:15.338 "uuid": "ec83bdf7-9954-5730-b816-e107b0d32473", 00:25:15.338 "is_configured": true, 00:25:15.338 "data_offset": 0, 00:25:15.338 "data_size": 65536 00:25:15.338 }, 00:25:15.338 { 00:25:15.338 "name": "BaseBdev3", 00:25:15.338 "uuid": "f02c44e3-7498-5ad1-97f1-614cc9b81f32", 00:25:15.338 "is_configured": true, 00:25:15.338 "data_offset": 0, 00:25:15.338 "data_size": 65536 00:25:15.338 }, 00:25:15.338 { 00:25:15.338 "name": "BaseBdev4", 00:25:15.338 "uuid": "e253e567-e010-5044-9058-39bc0ef048b7", 00:25:15.338 "is_configured": true, 00:25:15.338 "data_offset": 0, 00:25:15.338 "data_size": 65536 00:25:15.338 } 00:25:15.338 ] 00:25:15.338 }' 00:25:15.338 15:19:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:15.338 15:19:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.595 15:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:15.595 15:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:15.595 15:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:15.595 15:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:15.595 15:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:15.595 15:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.595 15:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.853 15:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:15.853 "name": "raid_bdev1", 00:25:15.853 "uuid": "44e0c574-e026-4b87-8d95-120bccfb97fc", 00:25:15.853 "strip_size_kb": 0, 00:25:15.853 "state": "online", 00:25:15.853 "raid_level": "raid1", 00:25:15.853 "superblock": false, 00:25:15.853 "num_base_bdevs": 4, 00:25:15.853 "num_base_bdevs_discovered": 3, 00:25:15.853 "num_base_bdevs_operational": 3, 00:25:15.853 "base_bdevs_list": [ 00:25:15.853 { 00:25:15.853 "name": null, 00:25:15.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.853 "is_configured": false, 00:25:15.853 "data_offset": 0, 00:25:15.853 "data_size": 65536 00:25:15.853 }, 00:25:15.853 { 00:25:15.853 "name": "BaseBdev2", 00:25:15.853 "uuid": "ec83bdf7-9954-5730-b816-e107b0d32473", 00:25:15.853 "is_configured": true, 00:25:15.853 "data_offset": 0, 00:25:15.853 "data_size": 65536 00:25:15.853 }, 00:25:15.853 { 00:25:15.853 "name": "BaseBdev3", 00:25:15.853 "uuid": "f02c44e3-7498-5ad1-97f1-614cc9b81f32", 
00:25:15.853 "is_configured": true, 00:25:15.853 "data_offset": 0, 00:25:15.853 "data_size": 65536 00:25:15.853 }, 00:25:15.853 { 00:25:15.853 "name": "BaseBdev4", 00:25:15.853 "uuid": "e253e567-e010-5044-9058-39bc0ef048b7", 00:25:15.853 "is_configured": true, 00:25:15.853 "data_offset": 0, 00:25:15.853 "data_size": 65536 00:25:15.853 } 00:25:15.853 ] 00:25:15.853 }' 00:25:15.853 15:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:15.853 15:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:15.853 15:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:15.853 15:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:15.853 15:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:16.111 [2024-07-23 15:19:11.444699] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:16.111 [2024-07-23 15:19:11.448234] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d06080 00:25:16.111 [2024-07-23 15:19:11.450402] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:16.111 15:19:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:17.042 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:17.042 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:17.042 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:17.042 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:17.042 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:17.300 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.300 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.300 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:17.300 "name": "raid_bdev1", 00:25:17.300 "uuid": "44e0c574-e026-4b87-8d95-120bccfb97fc", 00:25:17.300 "strip_size_kb": 0, 00:25:17.300 "state": "online", 00:25:17.301 "raid_level": "raid1", 00:25:17.301 "superblock": false, 00:25:17.301 "num_base_bdevs": 4, 00:25:17.301 "num_base_bdevs_discovered": 4, 00:25:17.301 "num_base_bdevs_operational": 4, 00:25:17.301 "process": { 00:25:17.301 "type": "rebuild", 00:25:17.301 "target": "spare", 00:25:17.301 "progress": { 00:25:17.301 "blocks": 24576, 00:25:17.301 "percent": 37 00:25:17.301 } 00:25:17.301 }, 00:25:17.301 "base_bdevs_list": [ 00:25:17.301 { 00:25:17.301 "name": "spare", 00:25:17.301 "uuid": "261b8ec8-52b9-5872-a3c5-069593a55ddc", 00:25:17.301 "is_configured": true, 00:25:17.301 "data_offset": 0, 00:25:17.301 "data_size": 65536 00:25:17.301 }, 00:25:17.301 { 00:25:17.301 "name": "BaseBdev2", 00:25:17.301 "uuid": "ec83bdf7-9954-5730-b816-e107b0d32473", 00:25:17.301 "is_configured": true, 00:25:17.301 "data_offset": 0, 00:25:17.301 "data_size": 65536 00:25:17.301 }, 00:25:17.301 { 00:25:17.301 "name": "BaseBdev3", 00:25:17.301 "uuid": 
"f02c44e3-7498-5ad1-97f1-614cc9b81f32", 00:25:17.301 "is_configured": true, 00:25:17.301 "data_offset": 0, 00:25:17.301 "data_size": 65536 00:25:17.301 }, 00:25:17.301 { 00:25:17.301 "name": "BaseBdev4", 00:25:17.301 "uuid": "e253e567-e010-5044-9058-39bc0ef048b7", 00:25:17.301 "is_configured": true, 00:25:17.301 "data_offset": 0, 00:25:17.301 "data_size": 65536 00:25:17.301 } 00:25:17.301 ] 00:25:17.301 }' 00:25:17.301 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:17.301 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:17.301 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:17.301 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:17.301 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:25:17.301 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:25:17.301 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:25:17.301 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:25:17.301 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:17.559 [2024-07-23 15:19:12.935460] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:17.559 [2024-07-23 15:19:12.959112] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000d06080 00:25:17.559 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:25:17.559 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:25:17.559 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:17.559 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:17.559 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:17.559 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:17.559 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:17.559 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.559 15:19:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.818 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:17.818 "name": "raid_bdev1", 00:25:17.818 "uuid": "44e0c574-e026-4b87-8d95-120bccfb97fc", 00:25:17.818 "strip_size_kb": 0, 00:25:17.818 "state": "online", 00:25:17.818 "raid_level": "raid1", 00:25:17.818 "superblock": false, 00:25:17.818 "num_base_bdevs": 4, 00:25:17.818 "num_base_bdevs_discovered": 3, 00:25:17.818 "num_base_bdevs_operational": 3, 00:25:17.818 "process": { 00:25:17.818 "type": "rebuild", 00:25:17.818 "target": "spare", 00:25:17.818 "progress": { 00:25:17.818 "blocks": 34816, 00:25:17.818 "percent": 53 00:25:17.818 } 00:25:17.818 }, 00:25:17.818 "base_bdevs_list": [ 00:25:17.818 { 00:25:17.818 "name": "spare", 00:25:17.818 "uuid": 
"261b8ec8-52b9-5872-a3c5-069593a55ddc", 00:25:17.818 "is_configured": true, 00:25:17.818 "data_offset": 0, 00:25:17.818 "data_size": 65536 00:25:17.818 }, 00:25:17.818 { 00:25:17.818 "name": null, 00:25:17.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.818 "is_configured": false, 00:25:17.818 "data_offset": 0, 00:25:17.818 "data_size": 65536 00:25:17.818 }, 00:25:17.818 { 00:25:17.818 "name": "BaseBdev3", 00:25:17.818 "uuid": "f02c44e3-7498-5ad1-97f1-614cc9b81f32", 00:25:17.818 "is_configured": true, 00:25:17.818 "data_offset": 0, 00:25:17.818 "data_size": 65536 00:25:17.818 }, 00:25:17.818 { 00:25:17.818 "name": "BaseBdev4", 00:25:17.818 "uuid": "e253e567-e010-5044-9058-39bc0ef048b7", 00:25:17.818 "is_configured": true, 00:25:17.818 "data_offset": 0, 00:25:17.818 "data_size": 65536 00:25:17.818 } 00:25:17.818 ] 00:25:17.818 }' 00:25:17.818 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:17.818 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:17.818 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:17.818 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:17.818 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=673 00:25:17.818 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:17.818 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:17.818 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:17.818 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:17.818 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:17.818 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:17.818 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.818 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.077 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:18.077 "name": "raid_bdev1", 00:25:18.077 "uuid": "44e0c574-e026-4b87-8d95-120bccfb97fc", 00:25:18.077 "strip_size_kb": 0, 00:25:18.077 "state": "online", 00:25:18.077 "raid_level": "raid1", 00:25:18.077 "superblock": false, 00:25:18.077 "num_base_bdevs": 4, 00:25:18.077 "num_base_bdevs_discovered": 3, 00:25:18.077 "num_base_bdevs_operational": 3, 00:25:18.077 "process": { 00:25:18.077 "type": "rebuild", 00:25:18.077 "target": "spare", 00:25:18.077 "progress": { 00:25:18.077 "blocks": 38912, 00:25:18.077 "percent": 59 00:25:18.077 } 00:25:18.077 }, 00:25:18.077 "base_bdevs_list": [ 00:25:18.077 { 00:25:18.077 "name": "spare", 00:25:18.077 "uuid": "261b8ec8-52b9-5872-a3c5-069593a55ddc", 00:25:18.077 "is_configured": true, 00:25:18.077 "data_offset": 0, 00:25:18.077 "data_size": 65536 00:25:18.077 }, 00:25:18.077 { 00:25:18.077 "name": null, 00:25:18.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.077 "is_configured": false, 00:25:18.077 "data_offset": 0, 00:25:18.077 "data_size": 65536 00:25:18.077 }, 00:25:18.077 { 00:25:18.077 "name": "BaseBdev3", 00:25:18.077 "uuid": 
"f02c44e3-7498-5ad1-97f1-614cc9b81f32", 00:25:18.077 "is_configured": true, 00:25:18.077 "data_offset": 0, 00:25:18.077 "data_size": 65536 00:25:18.077 }, 00:25:18.077 { 00:25:18.077 "name": "BaseBdev4", 00:25:18.077 "uuid": "e253e567-e010-5044-9058-39bc0ef048b7", 00:25:18.077 "is_configured": true, 00:25:18.077 "data_offset": 0, 00:25:18.077 "data_size": 65536 00:25:18.077 } 00:25:18.077 ] 00:25:18.077 }' 00:25:18.077 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:18.077 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:18.077 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:18.077 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:18.077 15:19:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.450 [2024-07-23 15:19:14.669487] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:19.450 [2024-07-23 15:19:14.669578] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:19.450 [2024-07-23 15:19:14.669622] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:19.450 "name": "raid_bdev1", 00:25:19.450 "uuid": "44e0c574-e026-4b87-8d95-120bccfb97fc", 00:25:19.450 "strip_size_kb": 0, 00:25:19.450 "state": "online", 00:25:19.450 "raid_level": "raid1", 00:25:19.450 "superblock": false, 00:25:19.450 "num_base_bdevs": 4, 00:25:19.450 "num_base_bdevs_discovered": 3, 00:25:19.450 "num_base_bdevs_operational": 3, 00:25:19.450 "base_bdevs_list": [ 00:25:19.450 { 00:25:19.450 "name": "spare", 00:25:19.450 "uuid": "261b8ec8-52b9-5872-a3c5-069593a55ddc", 00:25:19.450 "is_configured": true, 00:25:19.450 "data_offset": 0, 00:25:19.450 "data_size": 65536 00:25:19.450 }, 00:25:19.450 { 00:25:19.450 "name": null, 00:25:19.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.450 "is_configured": false, 00:25:19.450 "data_offset": 0, 00:25:19.450 "data_size": 65536 00:25:19.450 }, 00:25:19.450 { 00:25:19.450 "name": "BaseBdev3", 00:25:19.450 "uuid": "f02c44e3-7498-5ad1-97f1-614cc9b81f32", 00:25:19.450 "is_configured": true, 00:25:19.450 "data_offset": 0, 00:25:19.450 "data_size": 65536 00:25:19.450 }, 00:25:19.450 { 00:25:19.450 "name": "BaseBdev4", 00:25:19.450 "uuid": "e253e567-e010-5044-9058-39bc0ef048b7", 00:25:19.450 
"is_configured": true, 00:25:19.450 "data_offset": 0, 00:25:19.450 "data_size": 65536 00:25:19.450 } 00:25:19.450 ] 00:25:19.450 }' 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.450 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:19.708 "name": "raid_bdev1", 00:25:19.708 "uuid": "44e0c574-e026-4b87-8d95-120bccfb97fc", 00:25:19.708 "strip_size_kb": 0, 00:25:19.708 "state": "online", 00:25:19.708 "raid_level": "raid1", 00:25:19.708 "superblock": false, 00:25:19.708 "num_base_bdevs": 4, 00:25:19.708 "num_base_bdevs_discovered": 3, 00:25:19.708 "num_base_bdevs_operational": 3, 00:25:19.708 "base_bdevs_list": [ 00:25:19.708 { 00:25:19.708 "name": "spare", 00:25:19.708 "uuid": "261b8ec8-52b9-5872-a3c5-069593a55ddc", 00:25:19.708 "is_configured": true, 00:25:19.708 "data_offset": 0, 00:25:19.708 "data_size": 65536 00:25:19.708 }, 00:25:19.708 { 00:25:19.708 "name": null, 00:25:19.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.708 "is_configured": false, 00:25:19.708 "data_offset": 0, 00:25:19.708 "data_size": 65536 00:25:19.708 }, 00:25:19.708 { 00:25:19.708 "name": "BaseBdev3", 00:25:19.708 "uuid": "f02c44e3-7498-5ad1-97f1-614cc9b81f32", 00:25:19.708 "is_configured": true, 00:25:19.708 "data_offset": 0, 00:25:19.708 "data_size": 65536 00:25:19.708 }, 00:25:19.708 { 00:25:19.708 "name": "BaseBdev4", 00:25:19.708 "uuid": "e253e567-e010-5044-9058-39bc0ef048b7", 00:25:19.708 "is_configured": true, 00:25:19.708 "data_offset": 0, 00:25:19.708 "data_size": 65536 00:25:19.708 } 00:25:19.708 ] 00:25:19.708 }' 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.708 15:19:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.965 15:19:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:19.965 "name": "raid_bdev1", 00:25:19.965 "uuid": "44e0c574-e026-4b87-8d95-120bccfb97fc", 00:25:19.965 "strip_size_kb": 0, 00:25:19.965 "state": "online", 00:25:19.965 "raid_level": "raid1", 00:25:19.965 "superblock": false, 00:25:19.965 "num_base_bdevs": 4, 00:25:19.965 "num_base_bdevs_discovered": 3, 00:25:19.965 "num_base_bdevs_operational": 3, 00:25:19.965 "base_bdevs_list": [ 00:25:19.965 { 00:25:19.965 "name": "spare", 00:25:19.965 "uuid": "261b8ec8-52b9-5872-a3c5-069593a55ddc", 00:25:19.965 "is_configured": true, 00:25:19.965 "data_offset": 0, 00:25:19.965 "data_size": 65536 00:25:19.965 }, 00:25:19.965 { 00:25:19.965 "name": null, 00:25:19.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.965 "is_configured": false, 00:25:19.965 "data_offset": 0, 00:25:19.965 "data_size": 65536 00:25:19.965 }, 00:25:19.965 { 00:25:19.965 "name": "BaseBdev3", 00:25:19.965 "uuid": "f02c44e3-7498-5ad1-97f1-614cc9b81f32", 00:25:19.965 "is_configured": true, 00:25:19.965 "data_offset": 0, 00:25:19.965 "data_size": 65536 00:25:19.965 }, 00:25:19.965 { 00:25:19.965 "name": "BaseBdev4", 00:25:19.965 "uuid": "e253e567-e010-5044-9058-39bc0ef048b7", 00:25:19.965 "is_configured": true, 00:25:19.965 "data_offset": 0, 00:25:19.965 "data_size": 65536 00:25:19.965 } 00:25:19.965 ] 00:25:19.965 }' 00:25:19.965 15:19:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:19.965 15:19:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.222 15:19:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:20.481 [2024-07-23 15:19:15.678271] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:20.481 [2024-07-23 15:19:15.678311] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:20.481 [2024-07-23 15:19:15.678404] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:20.481 [2024-07-23 15:19:15.678481] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:20.481 [2024-07-23 15:19:15.678495] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:20.481 15:19:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:20.739 /dev/nbd0 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:20.739 1+0 records in 00:25:20.739 1+0 records out 00:25:20.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016742 s, 24.5 MB/s 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:20.739 15:19:16 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:20.739 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:21.053 /dev/nbd1 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:21.053 1+0 records in 00:25:21.053 1+0 records out 00:25:21.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299861 s, 13.7 MB/s 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:21.053 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:21.311 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:21.311 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:21.311 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:21.311 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:21.311 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:21.311 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:21.311 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:21.311 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:21.311 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:21.311 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 109879 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 109879 ']' 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 109879 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109879 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:21.568 killing process with pid 109879 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109879' 00:25:21.568 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 109879 00:25:21.568 Received shutdown signal, test time was about 60.000000 seconds 00:25:21.568 00:25:21.568 Latency(us) 00:25:21.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.568 =================================================================================================================== 00:25:21.569 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 
0.00 00:25:21.569 [2024-07-23 15:19:16.899834] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:21.569 15:19:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 109879 00:25:21.569 [2024-07-23 15:19:16.951445] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:25:21.826 00:25:21.826 real 0m19.819s 00:25:21.826 user 0m25.206s 00:25:21.826 sys 0m4.399s 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 ************************************ 00:25:21.826 END TEST raid_rebuild_test 00:25:21.826 ************************************ 00:25:21.826 15:19:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:21.826 15:19:17 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:25:21.826 15:19:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:25:21.826 15:19:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:21.826 15:19:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 ************************************ 00:25:21.826 START TEST raid_rebuild_test_sb 00:25:21.826 ************************************ 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true false true 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:21.826 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:21.827 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:25:21.827 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:21.827 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:21.827 15:19:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:21.827 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:25:21.827 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:25:21.827 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:25:22.084 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:25:22.084 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:25:22.084 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:25:22.084 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:25:22.084 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:25:22.084 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:25:22.084 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:25:22.084 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=110367 00:25:22.084 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 110367 /var/tmp/spdk-raid.sock 00:25:22.085 15:19:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:22.085 15:19:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 110367 ']' 00:25:22.085 15:19:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:22.085 15:19:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:22.085 15:19:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:22.085 15:19:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.085 15:19:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.085 [2024-07-23 15:19:17.310876] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:25:22.085 [2024-07-23 15:19:17.311033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110367 ] 00:25:22.085 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:22.085 Zero copy mechanism will not be used. 
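Note: a minimal standalone sketch of the state polling that the verify_raid_bdev_process / verify_raid_bdev_state helpers perform in this trace, assuming the same rpc.py path and RPC socket shown above; the bdev name and the echo reporting are illustrative, and only rpc.py subcommands and jq filters already visible in this log are relied on:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # script path as it appears in this trace
sock=/var/tmp/spdk-raid.sock                      # RPC socket bdevperf listens on above
bdev=raid_bdev1                                   # raid bdev under test (assumed name)
# Fetch the bdev's JSON and pull out state and rebuild progress, as the helpers do.
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$bdev\")")
echo "$info" | jq -r '.state'                     # e.g. "online"
echo "$info" | jq -r '.process.type // "none"'    # "rebuild" while a rebuild is running
echo "$info" | jq -r '.process.target // "none"'  # e.g. "spare"
echo "$info" | jq -r '.process.progress.percent // 0'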
00:25:22.085 [2024-07-23 15:19:17.451710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.085 [2024-07-23 15:19:17.497499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.343 [2024-07-23 15:19:17.542482] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:22.909 15:19:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.909 15:19:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:25:22.909 15:19:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:22.909 15:19:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:23.167 BaseBdev1_malloc 00:25:23.167 15:19:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:23.424 [2024-07-23 15:19:18.702181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:23.424 [2024-07-23 15:19:18.702281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.424 [2024-07-23 15:19:18.702316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:25:23.424 [2024-07-23 15:19:18.702337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.424 [2024-07-23 15:19:18.704867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.424 [2024-07-23 15:19:18.704912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:23.424 BaseBdev1 00:25:23.424 15:19:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:23.424 15:19:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:23.682 BaseBdev2_malloc 00:25:23.682 15:19:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:23.940 [2024-07-23 15:19:19.124016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:23.940 [2024-07-23 15:19:19.124110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.940 [2024-07-23 15:19:19.124141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:25:23.940 [2024-07-23 15:19:19.124153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.940 [2024-07-23 15:19:19.126620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.940 [2024-07-23 15:19:19.126664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:23.940 BaseBdev2 00:25:23.940 15:19:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:23.940 15:19:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:23.940 BaseBdev3_malloc 00:25:23.940 15:19:19 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:24.197 [2024-07-23 15:19:19.480863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:24.197 [2024-07-23 15:19:19.480937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.197 [2024-07-23 15:19:19.480969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:25:24.197 [2024-07-23 15:19:19.480982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.197 [2024-07-23 15:19:19.483456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.197 [2024-07-23 15:19:19.483500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:24.197 BaseBdev3 00:25:24.197 15:19:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:24.197 15:19:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:24.455 BaseBdev4_malloc 00:25:24.455 15:19:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:24.713 [2024-07-23 15:19:19.894477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:24.713 [2024-07-23 15:19:19.894565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.713 [2024-07-23 15:19:19.894598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:25:24.713 [2024-07-23 15:19:19.894611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.713 [2024-07-23 15:19:19.897335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.713 [2024-07-23 15:19:19.897387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:24.713 BaseBdev4 00:25:24.713 15:19:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:24.713 spare_malloc 00:25:24.713 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:24.970 spare_delay 00:25:24.970 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:25.228 [2024-07-23 15:19:20.420011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:25.228 [2024-07-23 15:19:20.420111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.228 [2024-07-23 15:19:20.420147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:25:25.228 [2024-07-23 15:19:20.420160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.228 [2024-07-23 15:19:20.422707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.228 [2024-07-23 
15:19:20.422752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:25.228 spare 00:25:25.228 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:25.228 [2024-07-23 15:19:20.656165] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:25.228 [2024-07-23 15:19:20.658390] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:25.228 [2024-07-23 15:19:20.658463] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:25.228 [2024-07-23 15:19:20.658507] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:25.228 [2024-07-23 15:19:20.658707] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:25:25.228 [2024-07-23 15:19:20.658719] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:25.228 [2024-07-23 15:19:20.658855] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:25:25.487 [2024-07-23 15:19:20.659206] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:25:25.487 [2024-07-23 15:19:20.659236] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:25:25.487 [2024-07-23 15:19:20.659366] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:25.487 "name": "raid_bdev1", 00:25:25.487 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:25.487 "strip_size_kb": 0, 00:25:25.487 "state": "online", 00:25:25.487 "raid_level": "raid1", 00:25:25.487 "superblock": true, 00:25:25.487 "num_base_bdevs": 4, 00:25:25.487 "num_base_bdevs_discovered": 4, 00:25:25.487 "num_base_bdevs_operational": 4, 00:25:25.487 "base_bdevs_list": [ 00:25:25.487 { 
00:25:25.487 "name": "BaseBdev1", 00:25:25.487 "uuid": "8008f802-945a-50a9-8fd4-3be023278958", 00:25:25.487 "is_configured": true, 00:25:25.487 "data_offset": 2048, 00:25:25.487 "data_size": 63488 00:25:25.487 }, 00:25:25.487 { 00:25:25.487 "name": "BaseBdev2", 00:25:25.487 "uuid": "fd0936b5-564e-5fed-96fa-1728e8a80d14", 00:25:25.487 "is_configured": true, 00:25:25.487 "data_offset": 2048, 00:25:25.487 "data_size": 63488 00:25:25.487 }, 00:25:25.487 { 00:25:25.487 "name": "BaseBdev3", 00:25:25.487 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:25.487 "is_configured": true, 00:25:25.487 "data_offset": 2048, 00:25:25.487 "data_size": 63488 00:25:25.487 }, 00:25:25.487 { 00:25:25.487 "name": "BaseBdev4", 00:25:25.487 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:25.487 "is_configured": true, 00:25:25.487 "data_offset": 2048, 00:25:25.487 "data_size": 63488 00:25:25.487 } 00:25:25.487 ] 00:25:25.487 }' 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:25.487 15:19:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.053 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:26.053 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:25:26.053 [2024-07-23 15:19:21.352578] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:26.053 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:25:26.053 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.053 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:26.312 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:25:26.312 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:25:26.312 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:25:26.312 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:25:26.312 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:26.312 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:26.312 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:26.312 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:26.312 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:26.312 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:26.312 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:26.312 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:26.312 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:26.312 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:26.312 [2024-07-23 15:19:21.720390] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002390 00:25:26.312 /dev/nbd0 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:26.570 1+0 records in 00:25:26.570 1+0 records out 00:25:26.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214598 s, 19.1 MB/s 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:25:26.570 15:19:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:25:33.133 63488+0 records in 00:25:33.133 63488+0 records out 00:25:33.133 32505856 bytes (33 MB, 31 MiB) copied, 6.25643 s, 5.2 MB/s 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:33.133 [2024-07-23 15:19:28.278029] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:33.133 [2024-07-23 15:19:28.490222] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.133 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.392 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:33.392 "name": "raid_bdev1", 00:25:33.392 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:33.392 "strip_size_kb": 0, 00:25:33.392 "state": "online", 00:25:33.392 "raid_level": "raid1", 00:25:33.392 "superblock": true, 00:25:33.392 "num_base_bdevs": 4, 00:25:33.392 "num_base_bdevs_discovered": 3, 00:25:33.392 "num_base_bdevs_operational": 3, 00:25:33.392 "base_bdevs_list": [ 00:25:33.392 { 00:25:33.392 "name": null, 00:25:33.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.392 "is_configured": false, 00:25:33.392 "data_offset": 2048, 00:25:33.392 "data_size": 63488 00:25:33.392 }, 00:25:33.392 { 00:25:33.392 "name": "BaseBdev2", 00:25:33.392 "uuid": "fd0936b5-564e-5fed-96fa-1728e8a80d14", 00:25:33.392 "is_configured": true, 00:25:33.392 "data_offset": 2048, 
00:25:33.392 "data_size": 63488 00:25:33.392 }, 00:25:33.392 { 00:25:33.392 "name": "BaseBdev3", 00:25:33.392 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:33.392 "is_configured": true, 00:25:33.392 "data_offset": 2048, 00:25:33.392 "data_size": 63488 00:25:33.392 }, 00:25:33.392 { 00:25:33.392 "name": "BaseBdev4", 00:25:33.392 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:33.393 "is_configured": true, 00:25:33.393 "data_offset": 2048, 00:25:33.393 "data_size": 63488 00:25:33.393 } 00:25:33.393 ] 00:25:33.393 }' 00:25:33.393 15:19:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:33.393 15:19:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.960 15:19:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:33.961 [2024-07-23 15:19:29.342395] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:33.961 [2024-07-23 15:19:29.345992] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000c8ff70 00:25:33.961 [2024-07-23 15:19:29.348181] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:33.961 15:19:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:25:35.337 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:35.337 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:35.337 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:35.337 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:35.337 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:35.337 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.338 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.338 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:35.338 "name": "raid_bdev1", 00:25:35.338 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:35.338 "strip_size_kb": 0, 00:25:35.338 "state": "online", 00:25:35.338 "raid_level": "raid1", 00:25:35.338 "superblock": true, 00:25:35.338 "num_base_bdevs": 4, 00:25:35.338 "num_base_bdevs_discovered": 4, 00:25:35.338 "num_base_bdevs_operational": 4, 00:25:35.338 "process": { 00:25:35.338 "type": "rebuild", 00:25:35.338 "target": "spare", 00:25:35.338 "progress": { 00:25:35.338 "blocks": 24576, 00:25:35.338 "percent": 38 00:25:35.338 } 00:25:35.338 }, 00:25:35.338 "base_bdevs_list": [ 00:25:35.338 { 00:25:35.338 "name": "spare", 00:25:35.338 "uuid": "fbf9acbe-2b16-5b8a-91b3-9c007d70362b", 00:25:35.338 "is_configured": true, 00:25:35.338 "data_offset": 2048, 00:25:35.338 "data_size": 63488 00:25:35.338 }, 00:25:35.338 { 00:25:35.338 "name": "BaseBdev2", 00:25:35.338 "uuid": "fd0936b5-564e-5fed-96fa-1728e8a80d14", 00:25:35.338 "is_configured": true, 00:25:35.338 "data_offset": 2048, 00:25:35.338 "data_size": 63488 00:25:35.338 }, 00:25:35.338 { 00:25:35.338 "name": "BaseBdev3", 00:25:35.338 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:35.338 
"is_configured": true, 00:25:35.338 "data_offset": 2048, 00:25:35.338 "data_size": 63488 00:25:35.338 }, 00:25:35.338 { 00:25:35.338 "name": "BaseBdev4", 00:25:35.338 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:35.338 "is_configured": true, 00:25:35.338 "data_offset": 2048, 00:25:35.338 "data_size": 63488 00:25:35.338 } 00:25:35.338 ] 00:25:35.338 }' 00:25:35.338 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:35.338 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:35.338 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:35.338 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:35.338 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:35.596 [2024-07-23 15:19:30.857521] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:35.596 [2024-07-23 15:19:30.858068] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:35.596 [2024-07-23 15:19:30.858128] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:35.596 [2024-07-23 15:19:30.858150] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:35.596 [2024-07-23 15:19:30.858159] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:35.596 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:35.596 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:35.596 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:35.596 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:35.596 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:35.596 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:35.596 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:35.596 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:35.596 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:35.596 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:35.596 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.596 15:19:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.855 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:35.855 "name": "raid_bdev1", 00:25:35.855 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:35.855 "strip_size_kb": 0, 00:25:35.855 "state": "online", 00:25:35.855 "raid_level": "raid1", 00:25:35.855 "superblock": true, 00:25:35.855 "num_base_bdevs": 4, 00:25:35.856 "num_base_bdevs_discovered": 3, 00:25:35.856 "num_base_bdevs_operational": 3, 00:25:35.856 "base_bdevs_list": [ 00:25:35.856 { 
00:25:35.856 "name": null, 00:25:35.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.856 "is_configured": false, 00:25:35.856 "data_offset": 2048, 00:25:35.856 "data_size": 63488 00:25:35.856 }, 00:25:35.856 { 00:25:35.856 "name": "BaseBdev2", 00:25:35.856 "uuid": "fd0936b5-564e-5fed-96fa-1728e8a80d14", 00:25:35.856 "is_configured": true, 00:25:35.856 "data_offset": 2048, 00:25:35.856 "data_size": 63488 00:25:35.856 }, 00:25:35.856 { 00:25:35.856 "name": "BaseBdev3", 00:25:35.856 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:35.856 "is_configured": true, 00:25:35.856 "data_offset": 2048, 00:25:35.856 "data_size": 63488 00:25:35.856 }, 00:25:35.856 { 00:25:35.856 "name": "BaseBdev4", 00:25:35.856 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:35.856 "is_configured": true, 00:25:35.856 "data_offset": 2048, 00:25:35.856 "data_size": 63488 00:25:35.856 } 00:25:35.856 ] 00:25:35.856 }' 00:25:35.856 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:35.856 15:19:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.115 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:36.115 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:36.115 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:36.115 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:36.115 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:36.115 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.115 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.374 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:36.374 "name": "raid_bdev1", 00:25:36.374 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:36.374 "strip_size_kb": 0, 00:25:36.374 "state": "online", 00:25:36.374 "raid_level": "raid1", 00:25:36.374 "superblock": true, 00:25:36.374 "num_base_bdevs": 4, 00:25:36.374 "num_base_bdevs_discovered": 3, 00:25:36.374 "num_base_bdevs_operational": 3, 00:25:36.374 "base_bdevs_list": [ 00:25:36.374 { 00:25:36.374 "name": null, 00:25:36.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.374 "is_configured": false, 00:25:36.374 "data_offset": 2048, 00:25:36.374 "data_size": 63488 00:25:36.374 }, 00:25:36.374 { 00:25:36.374 "name": "BaseBdev2", 00:25:36.374 "uuid": "fd0936b5-564e-5fed-96fa-1728e8a80d14", 00:25:36.374 "is_configured": true, 00:25:36.374 "data_offset": 2048, 00:25:36.374 "data_size": 63488 00:25:36.374 }, 00:25:36.374 { 00:25:36.374 "name": "BaseBdev3", 00:25:36.374 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:36.374 "is_configured": true, 00:25:36.374 "data_offset": 2048, 00:25:36.374 "data_size": 63488 00:25:36.374 }, 00:25:36.374 { 00:25:36.374 "name": "BaseBdev4", 00:25:36.374 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:36.374 "is_configured": true, 00:25:36.374 "data_offset": 2048, 00:25:36.374 "data_size": 63488 00:25:36.374 } 00:25:36.374 ] 00:25:36.374 }' 00:25:36.374 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:36.374 15:19:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:36.374 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:36.374 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:36.374 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:36.632 [2024-07-23 15:19:31.818710] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:36.632 [2024-07-23 15:19:31.822237] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000c3e0e0 00:25:36.632 [2024-07-23 15:19:31.824409] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:36.632 15:19:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:37.569 15:19:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:37.569 15:19:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:37.569 15:19:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:37.569 15:19:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:37.569 15:19:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:37.569 15:19:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.569 15:19:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.829 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:37.829 "name": "raid_bdev1", 00:25:37.829 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:37.829 "strip_size_kb": 0, 00:25:37.829 "state": "online", 00:25:37.829 "raid_level": "raid1", 00:25:37.829 "superblock": true, 00:25:37.829 "num_base_bdevs": 4, 00:25:37.829 "num_base_bdevs_discovered": 4, 00:25:37.829 "num_base_bdevs_operational": 4, 00:25:37.829 "process": { 00:25:37.829 "type": "rebuild", 00:25:37.829 "target": "spare", 00:25:37.829 "progress": { 00:25:37.829 "blocks": 24576, 00:25:37.829 "percent": 38 00:25:37.829 } 00:25:37.829 }, 00:25:37.829 "base_bdevs_list": [ 00:25:37.829 { 00:25:37.829 "name": "spare", 00:25:37.829 "uuid": "fbf9acbe-2b16-5b8a-91b3-9c007d70362b", 00:25:37.829 "is_configured": true, 00:25:37.829 "data_offset": 2048, 00:25:37.829 "data_size": 63488 00:25:37.829 }, 00:25:37.829 { 00:25:37.829 "name": "BaseBdev2", 00:25:37.829 "uuid": "fd0936b5-564e-5fed-96fa-1728e8a80d14", 00:25:37.829 "is_configured": true, 00:25:37.829 "data_offset": 2048, 00:25:37.829 "data_size": 63488 00:25:37.829 }, 00:25:37.829 { 00:25:37.829 "name": "BaseBdev3", 00:25:37.829 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:37.829 "is_configured": true, 00:25:37.829 "data_offset": 2048, 00:25:37.829 "data_size": 63488 00:25:37.829 }, 00:25:37.829 { 00:25:37.829 "name": "BaseBdev4", 00:25:37.829 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:37.829 "is_configured": true, 00:25:37.829 "data_offset": 2048, 00:25:37.829 "data_size": 63488 00:25:37.829 } 00:25:37.829 ] 00:25:37.829 }' 00:25:37.829 15:19:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:37.829 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:37.829 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:37.829 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:37.829 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:25:37.829 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:25:37.829 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:25:37.829 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:25:37.829 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:25:37.829 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:25:37.829 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:38.088 [2024-07-23 15:19:33.297460] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:38.088 [2024-07-23 15:19:33.433134] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000c3e0e0 00:25:38.088 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:25:38.088 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:25:38.088 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:38.088 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:38.088 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:38.088 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:38.088 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:38.088 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.088 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.347 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:38.347 "name": "raid_bdev1", 00:25:38.347 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:38.347 "strip_size_kb": 0, 00:25:38.347 "state": "online", 00:25:38.347 "raid_level": "raid1", 00:25:38.347 "superblock": true, 00:25:38.347 "num_base_bdevs": 4, 00:25:38.347 "num_base_bdevs_discovered": 3, 00:25:38.347 "num_base_bdevs_operational": 3, 00:25:38.347 "process": { 00:25:38.347 "type": "rebuild", 00:25:38.347 "target": "spare", 00:25:38.347 "progress": { 00:25:38.347 "blocks": 32768, 00:25:38.347 "percent": 51 00:25:38.347 } 00:25:38.347 }, 00:25:38.347 "base_bdevs_list": [ 00:25:38.347 { 00:25:38.347 "name": "spare", 00:25:38.347 "uuid": "fbf9acbe-2b16-5b8a-91b3-9c007d70362b", 00:25:38.347 "is_configured": true, 00:25:38.347 "data_offset": 2048, 00:25:38.347 "data_size": 63488 00:25:38.347 }, 00:25:38.347 { 00:25:38.347 "name": null, 00:25:38.347 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:38.347 "is_configured": false, 00:25:38.347 "data_offset": 2048, 00:25:38.347 "data_size": 63488 00:25:38.347 }, 00:25:38.347 { 00:25:38.347 "name": "BaseBdev3", 00:25:38.347 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:38.347 "is_configured": true, 00:25:38.347 "data_offset": 2048, 00:25:38.347 "data_size": 63488 00:25:38.347 }, 00:25:38.347 { 00:25:38.347 "name": "BaseBdev4", 00:25:38.347 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:38.347 "is_configured": true, 00:25:38.347 "data_offset": 2048, 00:25:38.347 "data_size": 63488 00:25:38.347 } 00:25:38.347 ] 00:25:38.347 }' 00:25:38.347 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:38.347 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:38.347 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:38.347 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:38.347 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=693 00:25:38.347 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:38.347 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:38.347 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:38.347 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:38.347 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:38.347 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:38.347 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.347 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.605 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:38.605 "name": "raid_bdev1", 00:25:38.605 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:38.605 "strip_size_kb": 0, 00:25:38.605 "state": "online", 00:25:38.605 "raid_level": "raid1", 00:25:38.605 "superblock": true, 00:25:38.605 "num_base_bdevs": 4, 00:25:38.605 "num_base_bdevs_discovered": 3, 00:25:38.605 "num_base_bdevs_operational": 3, 00:25:38.605 "process": { 00:25:38.605 "type": "rebuild", 00:25:38.606 "target": "spare", 00:25:38.606 "progress": { 00:25:38.606 "blocks": 38912, 00:25:38.606 "percent": 61 00:25:38.606 } 00:25:38.606 }, 00:25:38.606 "base_bdevs_list": [ 00:25:38.606 { 00:25:38.606 "name": "spare", 00:25:38.606 "uuid": "fbf9acbe-2b16-5b8a-91b3-9c007d70362b", 00:25:38.606 "is_configured": true, 00:25:38.606 "data_offset": 2048, 00:25:38.606 "data_size": 63488 00:25:38.606 }, 00:25:38.606 { 00:25:38.606 "name": null, 00:25:38.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.606 "is_configured": false, 00:25:38.606 "data_offset": 2048, 00:25:38.606 "data_size": 63488 00:25:38.606 }, 00:25:38.606 { 00:25:38.606 "name": "BaseBdev3", 00:25:38.606 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:38.606 "is_configured": true, 00:25:38.606 "data_offset": 2048, 00:25:38.606 "data_size": 63488 00:25:38.606 }, 
00:25:38.606 { 00:25:38.606 "name": "BaseBdev4", 00:25:38.606 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:38.606 "is_configured": true, 00:25:38.606 "data_offset": 2048, 00:25:38.606 "data_size": 63488 00:25:38.606 } 00:25:38.606 ] 00:25:38.606 }' 00:25:38.606 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:38.606 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:38.606 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:38.606 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:38.606 15:19:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:25:39.540 15:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:25:39.540 15:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.540 15:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:39.540 15:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:39.540 15:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:39.540 15:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:39.540 15:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.540 15:19:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.799 [2024-07-23 15:19:35.043548] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:39.799 [2024-07-23 15:19:35.043651] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:39.799 [2024-07-23 15:19:35.043798] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:39.799 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:39.799 "name": "raid_bdev1", 00:25:39.799 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:39.799 "strip_size_kb": 0, 00:25:39.799 "state": "online", 00:25:39.799 "raid_level": "raid1", 00:25:39.799 "superblock": true, 00:25:39.799 "num_base_bdevs": 4, 00:25:39.799 "num_base_bdevs_discovered": 3, 00:25:39.799 "num_base_bdevs_operational": 3, 00:25:39.799 "base_bdevs_list": [ 00:25:39.799 { 00:25:39.799 "name": "spare", 00:25:39.799 "uuid": "fbf9acbe-2b16-5b8a-91b3-9c007d70362b", 00:25:39.799 "is_configured": true, 00:25:39.799 "data_offset": 2048, 00:25:39.799 "data_size": 63488 00:25:39.799 }, 00:25:39.799 { 00:25:39.799 "name": null, 00:25:39.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.799 "is_configured": false, 00:25:39.799 "data_offset": 2048, 00:25:39.799 "data_size": 63488 00:25:39.799 }, 00:25:39.799 { 00:25:39.799 "name": "BaseBdev3", 00:25:39.799 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:39.799 "is_configured": true, 00:25:39.799 "data_offset": 2048, 00:25:39.799 "data_size": 63488 00:25:39.799 }, 00:25:39.799 { 00:25:39.799 "name": "BaseBdev4", 00:25:39.799 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:39.799 "is_configured": true, 00:25:39.799 "data_offset": 2048, 00:25:39.799 "data_size": 63488 00:25:39.799 } 
00:25:39.799 ] 00:25:39.799 }' 00:25:39.799 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:39.799 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:39.799 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:39.799 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:39.799 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:25:39.799 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:39.799 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:39.799 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:39.799 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:39.799 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:39.799 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.799 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:40.058 "name": "raid_bdev1", 00:25:40.058 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:40.058 "strip_size_kb": 0, 00:25:40.058 "state": "online", 00:25:40.058 "raid_level": "raid1", 00:25:40.058 "superblock": true, 00:25:40.058 "num_base_bdevs": 4, 00:25:40.058 "num_base_bdevs_discovered": 3, 00:25:40.058 "num_base_bdevs_operational": 3, 00:25:40.058 "base_bdevs_list": [ 00:25:40.058 { 00:25:40.058 "name": "spare", 00:25:40.058 "uuid": "fbf9acbe-2b16-5b8a-91b3-9c007d70362b", 00:25:40.058 "is_configured": true, 00:25:40.058 "data_offset": 2048, 00:25:40.058 "data_size": 63488 00:25:40.058 }, 00:25:40.058 { 00:25:40.058 "name": null, 00:25:40.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.058 "is_configured": false, 00:25:40.058 "data_offset": 2048, 00:25:40.058 "data_size": 63488 00:25:40.058 }, 00:25:40.058 { 00:25:40.058 "name": "BaseBdev3", 00:25:40.058 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:40.058 "is_configured": true, 00:25:40.058 "data_offset": 2048, 00:25:40.058 "data_size": 63488 00:25:40.058 }, 00:25:40.058 { 00:25:40.058 "name": "BaseBdev4", 00:25:40.058 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:40.058 "is_configured": true, 00:25:40.058 "data_offset": 2048, 00:25:40.058 "data_size": 63488 00:25:40.058 } 00:25:40.058 ] 00:25:40.058 }' 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.058 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.317 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:40.317 "name": "raid_bdev1", 00:25:40.317 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:40.317 "strip_size_kb": 0, 00:25:40.317 "state": "online", 00:25:40.317 "raid_level": "raid1", 00:25:40.317 "superblock": true, 00:25:40.317 "num_base_bdevs": 4, 00:25:40.317 "num_base_bdevs_discovered": 3, 00:25:40.317 "num_base_bdevs_operational": 3, 00:25:40.317 "base_bdevs_list": [ 00:25:40.317 { 00:25:40.317 "name": "spare", 00:25:40.317 "uuid": "fbf9acbe-2b16-5b8a-91b3-9c007d70362b", 00:25:40.317 "is_configured": true, 00:25:40.317 "data_offset": 2048, 00:25:40.317 "data_size": 63488 00:25:40.317 }, 00:25:40.317 { 00:25:40.317 "name": null, 00:25:40.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.317 "is_configured": false, 00:25:40.317 "data_offset": 2048, 00:25:40.317 "data_size": 63488 00:25:40.317 }, 00:25:40.317 { 00:25:40.317 "name": "BaseBdev3", 00:25:40.317 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:40.317 "is_configured": true, 00:25:40.317 "data_offset": 2048, 00:25:40.317 "data_size": 63488 00:25:40.317 }, 00:25:40.317 { 00:25:40.317 "name": "BaseBdev4", 00:25:40.317 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:40.317 "is_configured": true, 00:25:40.317 "data_offset": 2048, 00:25:40.317 "data_size": 63488 00:25:40.317 } 00:25:40.317 ] 00:25:40.317 }' 00:25:40.317 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:40.317 15:19:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.576 15:19:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:40.835 [2024-07-23 15:19:36.192549] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:40.835 [2024-07-23 15:19:36.192598] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:40.835 [2024-07-23 15:19:36.192705] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:40.835 [2024-07-23 15:19:36.192801] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:40.835 [2024-07-23 15:19:36.192818] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:25:40.835 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.835 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:25:41.094 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:25:41.094 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:25:41.094 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:25:41.094 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:41.094 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:41.094 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:41.094 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:41.094 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:41.094 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:41.094 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:41.094 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:41.094 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:41.094 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:41.353 /dev/nbd0 00:25:41.353 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:41.353 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:41.353 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:41.353 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:25:41.353 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:41.353 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:41.353 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:41.353 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:25:41.353 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:41.353 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:41.353 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:41.353 1+0 records in 00:25:41.353 1+0 records out 00:25:41.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582206 s, 7.0 MB/s 00:25:41.354 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.354 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:25:41.354 15:19:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.354 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:41.354 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:25:41.354 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:41.354 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:41.354 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:41.613 /dev/nbd1 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:41.613 1+0 records in 00:25:41.613 1+0 records out 00:25:41.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347667 s, 11.8 MB/s 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:41.613 15:19:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:41.613 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:41.613 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:41.613 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:41.613 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:41.613 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 
00:25:41.613 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:41.613 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:41.871 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:41.871 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:41.871 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:41.871 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:41.871 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:41.871 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:41.871 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:41.871 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:41.871 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:41.871 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:42.129 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:42.130 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:42.130 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:42.130 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:42.130 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:42.130 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:42.130 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:42.130 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:42.130 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:25:42.130 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:42.387 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:42.646 [2024-07-23 15:19:37.847847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:42.646 [2024-07-23 15:19:37.847929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:42.646 [2024-07-23 15:19:37.847960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:25:42.646 [2024-07-23 15:19:37.847975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:42.646 [2024-07-23 15:19:37.850532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:42.646 [2024-07-23 15:19:37.850583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:42.646 [2024-07-23 15:19:37.850669] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:42.646 [2024-07-23 15:19:37.850730] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:42.646 [2024-07-23 15:19:37.851066] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:42.646 [2024-07-23 15:19:37.851296] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:42.646 spare 00:25:42.646 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:42.646 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:42.646 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:42.646 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:42.646 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:42.646 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:42.646 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:42.646 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:42.646 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:42.646 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:42.646 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.646 15:19:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.646 [2024-07-23 15:19:37.951483] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:25:42.646 [2024-07-23 15:19:37.951724] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:42.646 [2024-07-23 15:19:37.951920] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cae6f0 00:25:42.646 [2024-07-23 15:19:37.952399] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:25:42.646 [2024-07-23 15:19:37.952519] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:25:42.646 [2024-07-23 15:19:37.952773] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:42.646 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:42.646 "name": "raid_bdev1", 00:25:42.646 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:42.646 "strip_size_kb": 0, 00:25:42.646 "state": "online", 00:25:42.646 "raid_level": "raid1", 00:25:42.646 "superblock": true, 00:25:42.646 "num_base_bdevs": 4, 00:25:42.646 "num_base_bdevs_discovered": 3, 00:25:42.646 "num_base_bdevs_operational": 3, 00:25:42.646 "base_bdevs_list": [ 00:25:42.646 { 00:25:42.646 "name": "spare", 00:25:42.646 "uuid": "fbf9acbe-2b16-5b8a-91b3-9c007d70362b", 00:25:42.646 "is_configured": true, 00:25:42.646 "data_offset": 2048, 00:25:42.646 "data_size": 63488 00:25:42.646 }, 00:25:42.646 { 00:25:42.646 "name": null, 00:25:42.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.646 "is_configured": false, 00:25:42.646 "data_offset": 2048, 00:25:42.646 "data_size": 63488 00:25:42.646 }, 00:25:42.646 { 00:25:42.646 "name": "BaseBdev3", 00:25:42.646 "uuid": 
"dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:42.646 "is_configured": true, 00:25:42.646 "data_offset": 2048, 00:25:42.646 "data_size": 63488 00:25:42.646 }, 00:25:42.646 { 00:25:42.646 "name": "BaseBdev4", 00:25:42.646 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:42.646 "is_configured": true, 00:25:42.646 "data_offset": 2048, 00:25:42.646 "data_size": 63488 00:25:42.646 } 00:25:42.646 ] 00:25:42.646 }' 00:25:42.646 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:42.646 15:19:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:42.905 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:42.905 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:42.905 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:42.905 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:42.905 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:42.905 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.905 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.172 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:43.172 "name": "raid_bdev1", 00:25:43.172 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:43.172 "strip_size_kb": 0, 00:25:43.172 "state": "online", 00:25:43.172 "raid_level": "raid1", 00:25:43.172 "superblock": true, 00:25:43.172 "num_base_bdevs": 4, 00:25:43.172 "num_base_bdevs_discovered": 3, 00:25:43.172 "num_base_bdevs_operational": 3, 00:25:43.172 "base_bdevs_list": [ 00:25:43.172 { 00:25:43.172 "name": "spare", 00:25:43.172 "uuid": "fbf9acbe-2b16-5b8a-91b3-9c007d70362b", 00:25:43.172 "is_configured": true, 00:25:43.172 "data_offset": 2048, 00:25:43.172 "data_size": 63488 00:25:43.172 }, 00:25:43.172 { 00:25:43.172 "name": null, 00:25:43.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.172 "is_configured": false, 00:25:43.172 "data_offset": 2048, 00:25:43.172 "data_size": 63488 00:25:43.172 }, 00:25:43.172 { 00:25:43.172 "name": "BaseBdev3", 00:25:43.172 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:43.172 "is_configured": true, 00:25:43.172 "data_offset": 2048, 00:25:43.172 "data_size": 63488 00:25:43.172 }, 00:25:43.172 { 00:25:43.172 "name": "BaseBdev4", 00:25:43.172 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:43.172 "is_configured": true, 00:25:43.172 "data_offset": 2048, 00:25:43.172 "data_size": 63488 00:25:43.172 } 00:25:43.172 ] 00:25:43.172 }' 00:25:43.172 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:43.172 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:43.172 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:43.172 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:43.172 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.172 15:19:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:43.433 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:25:43.433 15:19:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:43.695 [2024-07-23 15:19:39.001102] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:43.695 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:43.695 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:43.696 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:43.696 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:43.696 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:43.696 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:43.696 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:43.696 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:43.696 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:43.696 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:43.696 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.696 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.986 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:43.986 "name": "raid_bdev1", 00:25:43.986 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:43.986 "strip_size_kb": 0, 00:25:43.986 "state": "online", 00:25:43.986 "raid_level": "raid1", 00:25:43.986 "superblock": true, 00:25:43.986 "num_base_bdevs": 4, 00:25:43.986 "num_base_bdevs_discovered": 2, 00:25:43.986 "num_base_bdevs_operational": 2, 00:25:43.986 "base_bdevs_list": [ 00:25:43.986 { 00:25:43.986 "name": null, 00:25:43.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.986 "is_configured": false, 00:25:43.986 "data_offset": 2048, 00:25:43.986 "data_size": 63488 00:25:43.986 }, 00:25:43.986 { 00:25:43.986 "name": null, 00:25:43.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.986 "is_configured": false, 00:25:43.986 "data_offset": 2048, 00:25:43.986 "data_size": 63488 00:25:43.986 }, 00:25:43.986 { 00:25:43.986 "name": "BaseBdev3", 00:25:43.986 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:43.986 "is_configured": true, 00:25:43.986 "data_offset": 2048, 00:25:43.986 "data_size": 63488 00:25:43.986 }, 00:25:43.986 { 00:25:43.986 "name": "BaseBdev4", 00:25:43.986 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:43.986 "is_configured": true, 00:25:43.986 "data_offset": 2048, 00:25:43.986 "data_size": 63488 00:25:43.986 } 00:25:43.986 ] 00:25:43.986 }' 00:25:43.986 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:43.986 15:19:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:44.245 15:19:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:44.245 [2024-07-23 15:19:39.657241] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:44.245 [2024-07-23 15:19:39.657595] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:25:44.245 [2024-07-23 15:19:39.657626] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:44.245 [2024-07-23 15:19:39.657668] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:44.245 [2024-07-23 15:19:39.661207] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cae7c0 00:25:44.245 [2024-07-23 15:19:39.663497] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:44.504 15:19:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:25:45.440 15:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:45.440 15:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:45.440 15:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:45.440 15:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:45.440 15:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:45.440 15:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.440 15:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.699 15:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:45.699 "name": "raid_bdev1", 00:25:45.699 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:45.699 "strip_size_kb": 0, 00:25:45.699 "state": "online", 00:25:45.699 "raid_level": "raid1", 00:25:45.699 "superblock": true, 00:25:45.699 "num_base_bdevs": 4, 00:25:45.699 "num_base_bdevs_discovered": 3, 00:25:45.699 "num_base_bdevs_operational": 3, 00:25:45.699 "process": { 00:25:45.699 "type": "rebuild", 00:25:45.699 "target": "spare", 00:25:45.699 "progress": { 00:25:45.699 "blocks": 24576, 00:25:45.699 "percent": 38 00:25:45.699 } 00:25:45.699 }, 00:25:45.699 "base_bdevs_list": [ 00:25:45.699 { 00:25:45.699 "name": "spare", 00:25:45.699 "uuid": "fbf9acbe-2b16-5b8a-91b3-9c007d70362b", 00:25:45.699 "is_configured": true, 00:25:45.699 "data_offset": 2048, 00:25:45.699 "data_size": 63488 00:25:45.699 }, 00:25:45.699 { 00:25:45.699 "name": null, 00:25:45.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.699 "is_configured": false, 00:25:45.699 "data_offset": 2048, 00:25:45.699 "data_size": 63488 00:25:45.699 }, 00:25:45.699 { 00:25:45.699 "name": "BaseBdev3", 00:25:45.699 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:45.699 "is_configured": true, 00:25:45.699 "data_offset": 2048, 00:25:45.699 "data_size": 63488 00:25:45.699 }, 00:25:45.699 { 00:25:45.699 "name": "BaseBdev4", 00:25:45.699 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:45.699 "is_configured": true, 00:25:45.699 "data_offset": 2048, 00:25:45.699 "data_size": 63488 00:25:45.699 } 
00:25:45.699 ] 00:25:45.699 }' 00:25:45.699 15:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:45.699 15:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:45.699 15:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:45.699 15:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:45.699 15:19:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:45.958 [2024-07-23 15:19:41.202770] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:45.958 [2024-07-23 15:19:41.273006] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:45.958 [2024-07-23 15:19:41.273355] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:45.958 [2024-07-23 15:19:41.273381] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:45.958 [2024-07-23 15:19:41.273395] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:45.958 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:45.958 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:45.958 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:45.958 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:45.958 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:45.958 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:45.958 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:45.958 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:45.958 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:45.958 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:45.958 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.958 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.217 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.217 "name": "raid_bdev1", 00:25:46.217 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:46.217 "strip_size_kb": 0, 00:25:46.217 "state": "online", 00:25:46.217 "raid_level": "raid1", 00:25:46.217 "superblock": true, 00:25:46.217 "num_base_bdevs": 4, 00:25:46.217 "num_base_bdevs_discovered": 2, 00:25:46.217 "num_base_bdevs_operational": 2, 00:25:46.217 "base_bdevs_list": [ 00:25:46.217 { 00:25:46.217 "name": null, 00:25:46.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.217 "is_configured": false, 00:25:46.217 "data_offset": 2048, 00:25:46.217 "data_size": 63488 00:25:46.217 }, 00:25:46.217 { 00:25:46.217 "name": null, 00:25:46.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.217 "is_configured": 
false, 00:25:46.217 "data_offset": 2048, 00:25:46.217 "data_size": 63488 00:25:46.217 }, 00:25:46.217 { 00:25:46.217 "name": "BaseBdev3", 00:25:46.217 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:46.217 "is_configured": true, 00:25:46.217 "data_offset": 2048, 00:25:46.217 "data_size": 63488 00:25:46.217 }, 00:25:46.217 { 00:25:46.217 "name": "BaseBdev4", 00:25:46.217 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:46.217 "is_configured": true, 00:25:46.217 "data_offset": 2048, 00:25:46.217 "data_size": 63488 00:25:46.217 } 00:25:46.217 ] 00:25:46.217 }' 00:25:46.217 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:46.217 15:19:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.476 15:19:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:46.735 [2024-07-23 15:19:41.990092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:46.735 [2024-07-23 15:19:41.990181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.735 [2024-07-23 15:19:41.990212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:25:46.735 [2024-07-23 15:19:41.990228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.735 [2024-07-23 15:19:41.990675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.735 [2024-07-23 15:19:41.990699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:46.735 [2024-07-23 15:19:41.990783] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:46.735 [2024-07-23 15:19:41.990817] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:25:46.735 [2024-07-23 15:19:41.990830] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:46.735 [2024-07-23 15:19:41.990860] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:46.735 spare 00:25:46.735 [2024-07-23 15:19:41.994295] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cae890 00:25:46.735 [2024-07-23 15:19:41.996837] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:46.735 15:19:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:25:47.672 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:47.672 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:47.672 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:47.672 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:47.672 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:47.672 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.672 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.930 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:47.930 "name": "raid_bdev1", 00:25:47.930 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:47.930 "strip_size_kb": 0, 00:25:47.930 "state": "online", 00:25:47.930 "raid_level": "raid1", 00:25:47.930 "superblock": true, 00:25:47.930 "num_base_bdevs": 4, 00:25:47.930 "num_base_bdevs_discovered": 3, 00:25:47.930 "num_base_bdevs_operational": 3, 00:25:47.930 "process": { 00:25:47.930 "type": "rebuild", 00:25:47.930 "target": "spare", 00:25:47.930 "progress": { 00:25:47.930 "blocks": 24576, 00:25:47.930 "percent": 38 00:25:47.930 } 00:25:47.930 }, 00:25:47.930 "base_bdevs_list": [ 00:25:47.930 { 00:25:47.930 "name": "spare", 00:25:47.930 "uuid": "fbf9acbe-2b16-5b8a-91b3-9c007d70362b", 00:25:47.930 "is_configured": true, 00:25:47.930 "data_offset": 2048, 00:25:47.930 "data_size": 63488 00:25:47.930 }, 00:25:47.930 { 00:25:47.930 "name": null, 00:25:47.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.930 "is_configured": false, 00:25:47.930 "data_offset": 2048, 00:25:47.930 "data_size": 63488 00:25:47.930 }, 00:25:47.930 { 00:25:47.930 "name": "BaseBdev3", 00:25:47.930 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:47.930 "is_configured": true, 00:25:47.930 "data_offset": 2048, 00:25:47.930 "data_size": 63488 00:25:47.930 }, 00:25:47.930 { 00:25:47.930 "name": "BaseBdev4", 00:25:47.930 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:47.930 "is_configured": true, 00:25:47.930 "data_offset": 2048, 00:25:47.930 "data_size": 63488 00:25:47.930 } 00:25:47.930 ] 00:25:47.930 }' 00:25:47.930 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:47.930 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:47.930 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:47.930 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:47.931 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:48.188 [2024-07-23 15:19:43.451666] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:48.188 [2024-07-23 15:19:43.505934] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:48.188 [2024-07-23 15:19:43.506024] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:48.188 [2024-07-23 15:19:43.506045] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:48.188 [2024-07-23 15:19:43.506054] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:48.188 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:48.188 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:48.188 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:48.188 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:48.188 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:48.188 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:48.188 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:48.188 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:48.188 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:48.188 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:48.188 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.188 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.446 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:48.446 "name": "raid_bdev1", 00:25:48.446 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:48.446 "strip_size_kb": 0, 00:25:48.446 "state": "online", 00:25:48.446 "raid_level": "raid1", 00:25:48.446 "superblock": true, 00:25:48.446 "num_base_bdevs": 4, 00:25:48.446 "num_base_bdevs_discovered": 2, 00:25:48.446 "num_base_bdevs_operational": 2, 00:25:48.446 "base_bdevs_list": [ 00:25:48.446 { 00:25:48.446 "name": null, 00:25:48.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.446 "is_configured": false, 00:25:48.446 "data_offset": 2048, 00:25:48.446 "data_size": 63488 00:25:48.446 }, 00:25:48.446 { 00:25:48.446 "name": null, 00:25:48.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.446 "is_configured": false, 00:25:48.446 "data_offset": 2048, 00:25:48.446 "data_size": 63488 00:25:48.446 }, 00:25:48.446 { 00:25:48.446 "name": "BaseBdev3", 00:25:48.446 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:48.446 "is_configured": true, 00:25:48.446 "data_offset": 2048, 00:25:48.446 "data_size": 63488 00:25:48.446 }, 00:25:48.446 { 00:25:48.446 "name": "BaseBdev4", 00:25:48.446 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:48.446 "is_configured": true, 00:25:48.446 "data_offset": 2048, 00:25:48.446 "data_size": 63488 00:25:48.446 } 00:25:48.446 ] 00:25:48.446 }' 
00:25:48.446 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:48.446 15:19:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.705 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:48.705 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:48.705 15:19:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:48.705 15:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:48.705 15:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:48.705 15:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.705 15:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.963 15:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:48.963 "name": "raid_bdev1", 00:25:48.963 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:48.963 "strip_size_kb": 0, 00:25:48.963 "state": "online", 00:25:48.963 "raid_level": "raid1", 00:25:48.963 "superblock": true, 00:25:48.963 "num_base_bdevs": 4, 00:25:48.963 "num_base_bdevs_discovered": 2, 00:25:48.963 "num_base_bdevs_operational": 2, 00:25:48.963 "base_bdevs_list": [ 00:25:48.963 { 00:25:48.963 "name": null, 00:25:48.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.963 "is_configured": false, 00:25:48.963 "data_offset": 2048, 00:25:48.963 "data_size": 63488 00:25:48.963 }, 00:25:48.963 { 00:25:48.963 "name": null, 00:25:48.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.963 "is_configured": false, 00:25:48.963 "data_offset": 2048, 00:25:48.963 "data_size": 63488 00:25:48.963 }, 00:25:48.963 { 00:25:48.963 "name": "BaseBdev3", 00:25:48.963 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:48.963 "is_configured": true, 00:25:48.963 "data_offset": 2048, 00:25:48.963 "data_size": 63488 00:25:48.963 }, 00:25:48.963 { 00:25:48.963 "name": "BaseBdev4", 00:25:48.963 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:48.963 "is_configured": true, 00:25:48.963 "data_offset": 2048, 00:25:48.963 "data_size": 63488 00:25:48.963 } 00:25:48.963 ] 00:25:48.963 }' 00:25:48.963 15:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:48.963 15:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:48.963 15:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:48.963 15:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:48.963 15:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:49.222 15:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:49.480 [2024-07-23 15:19:44.762847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:49.480 [2024-07-23 15:19:44.762928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:25:49.480 [2024-07-23 15:19:44.762962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:25:49.480 [2024-07-23 15:19:44.762974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:49.480 [2024-07-23 15:19:44.763408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:49.480 [2024-07-23 15:19:44.763429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:49.480 [2024-07-23 15:19:44.763505] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:49.480 [2024-07-23 15:19:44.763532] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:49.480 [2024-07-23 15:19:44.763549] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:49.480 BaseBdev1 00:25:49.480 15:19:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:25:50.418 15:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:50.418 15:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:50.418 15:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:50.418 15:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:50.418 15:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:50.418 15:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:50.418 15:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:50.418 15:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:50.418 15:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:50.418 15:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:50.418 15:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.418 15:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.677 15:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:50.677 "name": "raid_bdev1", 00:25:50.677 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:50.677 "strip_size_kb": 0, 00:25:50.677 "state": "online", 00:25:50.677 "raid_level": "raid1", 00:25:50.677 "superblock": true, 00:25:50.677 "num_base_bdevs": 4, 00:25:50.677 "num_base_bdevs_discovered": 2, 00:25:50.677 "num_base_bdevs_operational": 2, 00:25:50.677 "base_bdevs_list": [ 00:25:50.677 { 00:25:50.677 "name": null, 00:25:50.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.677 "is_configured": false, 00:25:50.677 "data_offset": 2048, 00:25:50.677 "data_size": 63488 00:25:50.677 }, 00:25:50.677 { 00:25:50.677 "name": null, 00:25:50.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.677 "is_configured": false, 00:25:50.677 "data_offset": 2048, 00:25:50.677 "data_size": 63488 00:25:50.677 }, 00:25:50.677 { 00:25:50.677 "name": "BaseBdev3", 00:25:50.677 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:50.677 "is_configured": 
true, 00:25:50.677 "data_offset": 2048, 00:25:50.677 "data_size": 63488 00:25:50.677 }, 00:25:50.677 { 00:25:50.677 "name": "BaseBdev4", 00:25:50.677 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:50.677 "is_configured": true, 00:25:50.677 "data_offset": 2048, 00:25:50.677 "data_size": 63488 00:25:50.677 } 00:25:50.677 ] 00:25:50.677 }' 00:25:50.677 15:19:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:50.677 15:19:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.935 15:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:50.935 15:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:50.935 15:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:50.935 15:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:50.935 15:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:50.935 15:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.935 15:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:51.194 "name": "raid_bdev1", 00:25:51.194 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:51.194 "strip_size_kb": 0, 00:25:51.194 "state": "online", 00:25:51.194 "raid_level": "raid1", 00:25:51.194 "superblock": true, 00:25:51.194 "num_base_bdevs": 4, 00:25:51.194 "num_base_bdevs_discovered": 2, 00:25:51.194 "num_base_bdevs_operational": 2, 00:25:51.194 "base_bdevs_list": [ 00:25:51.194 { 00:25:51.194 "name": null, 00:25:51.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.194 "is_configured": false, 00:25:51.194 "data_offset": 2048, 00:25:51.194 "data_size": 63488 00:25:51.194 }, 00:25:51.194 { 00:25:51.194 "name": null, 00:25:51.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.194 "is_configured": false, 00:25:51.194 "data_offset": 2048, 00:25:51.194 "data_size": 63488 00:25:51.194 }, 00:25:51.194 { 00:25:51.194 "name": "BaseBdev3", 00:25:51.194 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:51.194 "is_configured": true, 00:25:51.194 "data_offset": 2048, 00:25:51.194 "data_size": 63488 00:25:51.194 }, 00:25:51.194 { 00:25:51.194 "name": "BaseBdev4", 00:25:51.194 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:51.194 "is_configured": true, 00:25:51.194 "data_offset": 2048, 00:25:51.194 "data_size": 63488 00:25:51.194 } 00:25:51.194 ] 00:25:51.194 }' 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 
-- # local es=0 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:51.194 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:51.453 [2024-07-23 15:19:46.735354] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:51.453 [2024-07-23 15:19:46.735524] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:51.453 [2024-07-23 15:19:46.735540] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:51.453 request: 00:25:51.453 { 00:25:51.453 "base_bdev": "BaseBdev1", 00:25:51.453 "raid_bdev": "raid_bdev1", 00:25:51.453 "method": "bdev_raid_add_base_bdev", 00:25:51.453 "req_id": 1 00:25:51.453 } 00:25:51.453 Got JSON-RPC error response 00:25:51.453 response: 00:25:51.453 { 00:25:51.453 "code": -22, 00:25:51.453 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:51.453 } 00:25:51.453 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:25:51.453 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:51.453 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:51.453 15:19:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:51.453 15:19:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:25:52.387 15:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:52.387 15:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:52.387 15:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:52.387 15:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:52.387 15:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:52.387 15:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:25:52.387 15:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:52.387 15:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:52.387 15:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:52.387 15:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:52.387 15:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.387 15:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.646 15:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:52.646 "name": "raid_bdev1", 00:25:52.646 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:52.646 "strip_size_kb": 0, 00:25:52.646 "state": "online", 00:25:52.646 "raid_level": "raid1", 00:25:52.646 "superblock": true, 00:25:52.646 "num_base_bdevs": 4, 00:25:52.646 "num_base_bdevs_discovered": 2, 00:25:52.646 "num_base_bdevs_operational": 2, 00:25:52.646 "base_bdevs_list": [ 00:25:52.646 { 00:25:52.646 "name": null, 00:25:52.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.646 "is_configured": false, 00:25:52.646 "data_offset": 2048, 00:25:52.646 "data_size": 63488 00:25:52.646 }, 00:25:52.646 { 00:25:52.646 "name": null, 00:25:52.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.646 "is_configured": false, 00:25:52.646 "data_offset": 2048, 00:25:52.646 "data_size": 63488 00:25:52.646 }, 00:25:52.646 { 00:25:52.646 "name": "BaseBdev3", 00:25:52.646 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:52.646 "is_configured": true, 00:25:52.646 "data_offset": 2048, 00:25:52.646 "data_size": 63488 00:25:52.646 }, 00:25:52.646 { 00:25:52.646 "name": "BaseBdev4", 00:25:52.646 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:52.646 "is_configured": true, 00:25:52.646 "data_offset": 2048, 00:25:52.646 "data_size": 63488 00:25:52.646 } 00:25:52.646 ] 00:25:52.646 }' 00:25:52.646 15:19:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:52.646 15:19:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.213 15:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:53.213 15:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:53.213 15:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:53.213 15:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:53.213 15:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:53.213 15:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.213 15:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.213 15:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:53.213 "name": "raid_bdev1", 00:25:53.213 "uuid": "a5ca969d-ec43-4752-ac8d-a880f1a97e5f", 00:25:53.213 "strip_size_kb": 0, 00:25:53.213 "state": "online", 00:25:53.213 "raid_level": "raid1", 00:25:53.213 "superblock": 
true, 00:25:53.213 "num_base_bdevs": 4, 00:25:53.213 "num_base_bdevs_discovered": 2, 00:25:53.213 "num_base_bdevs_operational": 2, 00:25:53.213 "base_bdevs_list": [ 00:25:53.213 { 00:25:53.213 "name": null, 00:25:53.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.213 "is_configured": false, 00:25:53.213 "data_offset": 2048, 00:25:53.213 "data_size": 63488 00:25:53.213 }, 00:25:53.213 { 00:25:53.213 "name": null, 00:25:53.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.213 "is_configured": false, 00:25:53.213 "data_offset": 2048, 00:25:53.213 "data_size": 63488 00:25:53.213 }, 00:25:53.213 { 00:25:53.213 "name": "BaseBdev3", 00:25:53.213 "uuid": "dfd622f4-a05e-5c14-80bc-5edc302893d8", 00:25:53.213 "is_configured": true, 00:25:53.213 "data_offset": 2048, 00:25:53.213 "data_size": 63488 00:25:53.213 }, 00:25:53.213 { 00:25:53.213 "name": "BaseBdev4", 00:25:53.213 "uuid": "95b50345-5ad3-5dbf-b160-a7ecf88068c0", 00:25:53.213 "is_configured": true, 00:25:53.213 "data_offset": 2048, 00:25:53.213 "data_size": 63488 00:25:53.213 } 00:25:53.213 ] 00:25:53.213 }' 00:25:53.213 15:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:53.214 15:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:53.214 15:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:53.214 15:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:53.214 15:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 110367 00:25:53.214 15:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 110367 ']' 00:25:53.214 15:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 110367 00:25:53.472 15:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:25:53.472 15:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:53.472 15:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 110367 00:25:53.472 15:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:53.472 15:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:53.472 killing process with pid 110367 00:25:53.472 Received shutdown signal, test time was about 60.000000 seconds 00:25:53.472 00:25:53.472 Latency(us) 00:25:53.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.472 =================================================================================================================== 00:25:53.472 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:53.472 15:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 110367' 00:25:53.472 15:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 110367 00:25:53.472 15:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 110367 00:25:53.472 [2024-07-23 15:19:48.685681] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:53.472 [2024-07-23 15:19:48.685822] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:53.472 [2024-07-23 15:19:48.685903] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:25:53.472 [2024-07-23 15:19:48.685917] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:25:53.472 [2024-07-23 15:19:48.739222] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:53.731 15:19:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:25:53.731 00:25:53.731 real 0m31.733s 00:25:53.731 user 0m43.199s 00:25:53.731 sys 0m6.347s 00:25:53.731 15:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:53.731 15:19:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.731 ************************************ 00:25:53.731 END TEST raid_rebuild_test_sb 00:25:53.731 ************************************ 00:25:53.731 15:19:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:53.731 15:19:49 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:25:53.731 15:19:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:25:53.731 15:19:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:53.731 15:19:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:53.731 ************************************ 00:25:53.731 START TEST raid_rebuild_test_io 00:25:53.731 ************************************ 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false true true 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=111218 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 111218 /var/tmp/spdk-raid.sock 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 111218 ']' 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:53.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:53.731 15:19:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:53.731 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:53.731 Zero copy mechanism will not be used. 00:25:53.731 [2024-07-23 15:19:49.118364] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:25:53.731 [2024-07-23 15:19:49.118522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111218 ] 00:25:53.989 [2024-07-23 15:19:49.259917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.989 [2024-07-23 15:19:49.309582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.989 [2024-07-23 15:19:49.354307] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:54.922 15:19:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:54.922 15:19:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:25:54.922 15:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:54.922 15:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:54.922 BaseBdev1_malloc 00:25:54.922 15:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:55.180 [2024-07-23 15:19:50.425874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:55.180 [2024-07-23 15:19:50.425953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:55.180 [2024-07-23 15:19:50.425988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:25:55.180 [2024-07-23 15:19:50.426025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:55.180 [2024-07-23 15:19:50.428755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:55.180 [2024-07-23 15:19:50.428817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:55.180 BaseBdev1 00:25:55.180 15:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:55.180 15:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:55.438 BaseBdev2_malloc 00:25:55.438 15:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:55.711 [2024-07-23 15:19:50.931428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:55.711 [2024-07-23 15:19:50.931511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:55.711 [2024-07-23 15:19:50.931543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:25:55.711 [2024-07-23 15:19:50.931556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:55.711 [2024-07-23 15:19:50.934047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:55.711 [2024-07-23 15:19:50.934086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:55.711 BaseBdev2 00:25:55.711 15:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # 
for bdev in "${base_bdevs[@]}" 00:25:55.711 15:19:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:55.979 BaseBdev3_malloc 00:25:55.979 15:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:55.980 [2024-07-23 15:19:51.309422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:55.980 [2024-07-23 15:19:51.309504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:55.980 [2024-07-23 15:19:51.309535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:25:55.980 [2024-07-23 15:19:51.309555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:55.980 [2024-07-23 15:19:51.312031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:55.980 [2024-07-23 15:19:51.312071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:55.980 BaseBdev3 00:25:55.980 15:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:25:55.980 15:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:56.238 BaseBdev4_malloc 00:25:56.238 15:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:56.496 [2024-07-23 15:19:51.679281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:56.496 [2024-07-23 15:19:51.679363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:56.496 [2024-07-23 15:19:51.679397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:25:56.496 [2024-07-23 15:19:51.679409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:56.496 [2024-07-23 15:19:51.681906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:56.496 [2024-07-23 15:19:51.681945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:56.496 BaseBdev4 00:25:56.496 15:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:56.496 spare_malloc 00:25:56.496 15:19:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:56.754 spare_delay 00:25:56.754 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:57.012 [2024-07-23 15:19:52.224847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:57.012 [2024-07-23 15:19:52.224939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:57.012 [2024-07-23 15:19:52.224973] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x516000009080 00:25:57.012 [2024-07-23 15:19:52.224986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:57.012 [2024-07-23 15:19:52.227571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:57.012 [2024-07-23 15:19:52.227610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:57.012 spare 00:25:57.012 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:57.012 [2024-07-23 15:19:52.400974] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:57.012 [2024-07-23 15:19:52.403374] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:57.012 [2024-07-23 15:19:52.403448] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:57.012 [2024-07-23 15:19:52.403490] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:57.012 [2024-07-23 15:19:52.403590] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:25:57.012 [2024-07-23 15:19:52.403601] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:57.012 [2024-07-23 15:19:52.403740] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:25:57.012 [2024-07-23 15:19:52.404095] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:25:57.013 [2024-07-23 15:19:52.404125] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:25:57.013 [2024-07-23 15:19:52.404284] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:57.013 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:57.013 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:57.013 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:57.013 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:57.013 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:57.013 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:57.013 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:57.013 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:57.013 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:57.013 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:57.013 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.013 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.271 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.271 "name": "raid_bdev1", 00:25:57.271 "uuid": "c89bbcde-2154-4315-8d7d-e230b9a916db", 
00:25:57.271 "strip_size_kb": 0, 00:25:57.271 "state": "online", 00:25:57.271 "raid_level": "raid1", 00:25:57.271 "superblock": false, 00:25:57.271 "num_base_bdevs": 4, 00:25:57.271 "num_base_bdevs_discovered": 4, 00:25:57.271 "num_base_bdevs_operational": 4, 00:25:57.271 "base_bdevs_list": [ 00:25:57.271 { 00:25:57.271 "name": "BaseBdev1", 00:25:57.271 "uuid": "feedd4c2-f793-59bf-8b6c-f0a2795bb0c7", 00:25:57.271 "is_configured": true, 00:25:57.271 "data_offset": 0, 00:25:57.271 "data_size": 65536 00:25:57.271 }, 00:25:57.271 { 00:25:57.271 "name": "BaseBdev2", 00:25:57.271 "uuid": "404065ea-9a80-5945-967a-30f6ea372633", 00:25:57.271 "is_configured": true, 00:25:57.271 "data_offset": 0, 00:25:57.271 "data_size": 65536 00:25:57.271 }, 00:25:57.271 { 00:25:57.271 "name": "BaseBdev3", 00:25:57.271 "uuid": "a9715dbd-e51c-57f1-816d-4585e7f662b6", 00:25:57.271 "is_configured": true, 00:25:57.271 "data_offset": 0, 00:25:57.271 "data_size": 65536 00:25:57.271 }, 00:25:57.271 { 00:25:57.271 "name": "BaseBdev4", 00:25:57.271 "uuid": "0c0b347a-beb5-5d59-909d-75bb82721bbb", 00:25:57.271 "is_configured": true, 00:25:57.271 "data_offset": 0, 00:25:57.271 "data_size": 65536 00:25:57.271 } 00:25:57.271 ] 00:25:57.271 }' 00:25:57.271 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.271 15:19:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:57.530 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:25:57.530 15:19:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:57.788 [2024-07-23 15:19:53.037336] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:57.788 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:25:57.788 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.788 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:58.050 [2024-07-23 15:19:53.351251] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002460 00:25:58.050 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:58.050 Zero copy mechanism will not be used. 00:25:58.050 Running I/O for 60 seconds... 
00:25:58.050 [2024-07-23 15:19:53.388978] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:58.050 [2024-07-23 15:19:53.400523] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000002460 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.050 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.311 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:58.311 "name": "raid_bdev1", 00:25:58.311 "uuid": "c89bbcde-2154-4315-8d7d-e230b9a916db", 00:25:58.311 "strip_size_kb": 0, 00:25:58.311 "state": "online", 00:25:58.311 "raid_level": "raid1", 00:25:58.311 "superblock": false, 00:25:58.311 "num_base_bdevs": 4, 00:25:58.311 "num_base_bdevs_discovered": 3, 00:25:58.311 "num_base_bdevs_operational": 3, 00:25:58.311 "base_bdevs_list": [ 00:25:58.311 { 00:25:58.311 "name": null, 00:25:58.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.311 "is_configured": false, 00:25:58.311 "data_offset": 0, 00:25:58.311 "data_size": 65536 00:25:58.311 }, 00:25:58.311 { 00:25:58.311 "name": "BaseBdev2", 00:25:58.311 "uuid": "404065ea-9a80-5945-967a-30f6ea372633", 00:25:58.311 "is_configured": true, 00:25:58.311 "data_offset": 0, 00:25:58.311 "data_size": 65536 00:25:58.311 }, 00:25:58.311 { 00:25:58.311 "name": "BaseBdev3", 00:25:58.311 "uuid": "a9715dbd-e51c-57f1-816d-4585e7f662b6", 00:25:58.311 "is_configured": true, 00:25:58.311 "data_offset": 0, 00:25:58.311 "data_size": 65536 00:25:58.311 }, 00:25:58.311 { 00:25:58.311 "name": "BaseBdev4", 00:25:58.311 "uuid": "0c0b347a-beb5-5d59-909d-75bb82721bbb", 00:25:58.311 "is_configured": true, 00:25:58.311 "data_offset": 0, 00:25:58.311 "data_size": 65536 00:25:58.311 } 00:25:58.311 ] 00:25:58.311 }' 00:25:58.311 15:19:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:58.311 15:19:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:58.876 15:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:58.876 [2024-07-23 15:19:54.293204] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:25:59.134 [2024-07-23 15:19:54.336764] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002530 00:25:59.134 [2024-07-23 15:19:54.339067] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:59.134 15:19:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:25:59.134 [2024-07-23 15:19:54.447645] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:59.134 [2024-07-23 15:19:54.448101] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:59.392 [2024-07-23 15:19:54.665915] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:59.392 [2024-07-23 15:19:54.666535] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:59.650 [2024-07-23 15:19:55.035577] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:59.908 [2024-07-23 15:19:55.164853] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:00.165 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:00.165 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:00.165 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:00.165 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:00.165 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:00.165 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.165 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.165 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:00.165 "name": "raid_bdev1", 00:26:00.165 "uuid": "c89bbcde-2154-4315-8d7d-e230b9a916db", 00:26:00.165 "strip_size_kb": 0, 00:26:00.165 "state": "online", 00:26:00.165 "raid_level": "raid1", 00:26:00.165 "superblock": false, 00:26:00.165 "num_base_bdevs": 4, 00:26:00.165 "num_base_bdevs_discovered": 4, 00:26:00.165 "num_base_bdevs_operational": 4, 00:26:00.165 "process": { 00:26:00.165 "type": "rebuild", 00:26:00.165 "target": "spare", 00:26:00.165 "progress": { 00:26:00.165 "blocks": 14336, 00:26:00.165 "percent": 21 00:26:00.165 } 00:26:00.165 }, 00:26:00.165 "base_bdevs_list": [ 00:26:00.165 { 00:26:00.165 "name": "spare", 00:26:00.165 "uuid": "4fdc53b9-7840-5539-a4fe-3e64e424f203", 00:26:00.165 "is_configured": true, 00:26:00.165 "data_offset": 0, 00:26:00.165 "data_size": 65536 00:26:00.165 }, 00:26:00.165 { 00:26:00.165 "name": "BaseBdev2", 00:26:00.165 "uuid": "404065ea-9a80-5945-967a-30f6ea372633", 00:26:00.165 "is_configured": true, 00:26:00.165 "data_offset": 0, 00:26:00.165 "data_size": 65536 00:26:00.165 }, 00:26:00.165 { 00:26:00.165 "name": "BaseBdev3", 00:26:00.165 "uuid": "a9715dbd-e51c-57f1-816d-4585e7f662b6", 00:26:00.165 "is_configured": true, 00:26:00.165 "data_offset": 0, 00:26:00.165 "data_size": 65536 
00:26:00.165 }, 00:26:00.165 { 00:26:00.165 "name": "BaseBdev4", 00:26:00.165 "uuid": "0c0b347a-beb5-5d59-909d-75bb82721bbb", 00:26:00.165 "is_configured": true, 00:26:00.165 "data_offset": 0, 00:26:00.165 "data_size": 65536 00:26:00.165 } 00:26:00.165 ] 00:26:00.165 }' 00:26:00.165 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:00.165 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:00.165 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:00.423 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:00.423 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:00.423 [2024-07-23 15:19:55.640015] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:26:00.423 [2024-07-23 15:19:55.766489] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:00.423 [2024-07-23 15:19:55.767098] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:26:00.423 [2024-07-23 15:19:55.768091] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:00.423 [2024-07-23 15:19:55.783519] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:00.423 [2024-07-23 15:19:55.783568] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:00.424 [2024-07-23 15:19:55.783594] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:00.424 [2024-07-23 15:19:55.814862] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000002460 00:26:00.424 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:00.424 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:00.424 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:00.424 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:00.424 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:00.424 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:00.424 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:00.424 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:00.424 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:00.424 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:00.682 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.682 15:19:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.939 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:00.939 "name": "raid_bdev1", 
00:26:00.939 "uuid": "c89bbcde-2154-4315-8d7d-e230b9a916db", 00:26:00.939 "strip_size_kb": 0, 00:26:00.939 "state": "online", 00:26:00.939 "raid_level": "raid1", 00:26:00.939 "superblock": false, 00:26:00.939 "num_base_bdevs": 4, 00:26:00.939 "num_base_bdevs_discovered": 3, 00:26:00.939 "num_base_bdevs_operational": 3, 00:26:00.939 "base_bdevs_list": [ 00:26:00.939 { 00:26:00.939 "name": null, 00:26:00.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.939 "is_configured": false, 00:26:00.939 "data_offset": 0, 00:26:00.939 "data_size": 65536 00:26:00.939 }, 00:26:00.939 { 00:26:00.939 "name": "BaseBdev2", 00:26:00.939 "uuid": "404065ea-9a80-5945-967a-30f6ea372633", 00:26:00.939 "is_configured": true, 00:26:00.939 "data_offset": 0, 00:26:00.939 "data_size": 65536 00:26:00.939 }, 00:26:00.939 { 00:26:00.939 "name": "BaseBdev3", 00:26:00.939 "uuid": "a9715dbd-e51c-57f1-816d-4585e7f662b6", 00:26:00.939 "is_configured": true, 00:26:00.939 "data_offset": 0, 00:26:00.939 "data_size": 65536 00:26:00.939 }, 00:26:00.939 { 00:26:00.939 "name": "BaseBdev4", 00:26:00.939 "uuid": "0c0b347a-beb5-5d59-909d-75bb82721bbb", 00:26:00.939 "is_configured": true, 00:26:00.939 "data_offset": 0, 00:26:00.939 "data_size": 65536 00:26:00.939 } 00:26:00.939 ] 00:26:00.939 }' 00:26:00.939 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:00.939 15:19:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:01.197 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:01.197 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:01.197 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:01.197 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:01.197 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:01.197 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.197 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.197 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:01.197 "name": "raid_bdev1", 00:26:01.197 "uuid": "c89bbcde-2154-4315-8d7d-e230b9a916db", 00:26:01.197 "strip_size_kb": 0, 00:26:01.197 "state": "online", 00:26:01.197 "raid_level": "raid1", 00:26:01.197 "superblock": false, 00:26:01.197 "num_base_bdevs": 4, 00:26:01.197 "num_base_bdevs_discovered": 3, 00:26:01.197 "num_base_bdevs_operational": 3, 00:26:01.197 "base_bdevs_list": [ 00:26:01.197 { 00:26:01.197 "name": null, 00:26:01.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.197 "is_configured": false, 00:26:01.197 "data_offset": 0, 00:26:01.197 "data_size": 65536 00:26:01.197 }, 00:26:01.197 { 00:26:01.197 "name": "BaseBdev2", 00:26:01.197 "uuid": "404065ea-9a80-5945-967a-30f6ea372633", 00:26:01.197 "is_configured": true, 00:26:01.197 "data_offset": 0, 00:26:01.197 "data_size": 65536 00:26:01.197 }, 00:26:01.197 { 00:26:01.197 "name": "BaseBdev3", 00:26:01.197 "uuid": "a9715dbd-e51c-57f1-816d-4585e7f662b6", 00:26:01.197 "is_configured": true, 00:26:01.197 "data_offset": 0, 00:26:01.197 "data_size": 65536 00:26:01.197 }, 00:26:01.197 { 00:26:01.197 "name": "BaseBdev4", 
00:26:01.197 "uuid": "0c0b347a-beb5-5d59-909d-75bb82721bbb", 00:26:01.197 "is_configured": true, 00:26:01.197 "data_offset": 0, 00:26:01.197 "data_size": 65536 00:26:01.197 } 00:26:01.197 ] 00:26:01.197 }' 00:26:01.197 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:01.197 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:01.197 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:01.197 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:01.197 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:01.454 [2024-07-23 15:19:56.872915] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:01.712 15:19:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:01.712 [2024-07-23 15:19:56.943405] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002600 00:26:01.712 [2024-07-23 15:19:56.945854] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:01.712 [2024-07-23 15:19:57.047414] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:01.712 [2024-07-23 15:19:57.047873] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:01.969 [2024-07-23 15:19:57.170952] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:01.969 [2024-07-23 15:19:57.171630] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:02.226 [2024-07-23 15:19:57.492768] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:26:02.226 [2024-07-23 15:19:57.624617] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:02.504 [2024-07-23 15:19:57.864388] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:26:02.769 15:19:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:02.769 15:19:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:02.769 15:19:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:02.769 15:19:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:02.769 15:19:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:02.769 15:19:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.769 15:19:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.769 [2024-07-23 15:19:58.090021] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:26:02.769 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 
-- # raid_bdev_info='{ 00:26:02.769 "name": "raid_bdev1", 00:26:02.769 "uuid": "c89bbcde-2154-4315-8d7d-e230b9a916db", 00:26:02.769 "strip_size_kb": 0, 00:26:02.769 "state": "online", 00:26:02.769 "raid_level": "raid1", 00:26:02.769 "superblock": false, 00:26:02.769 "num_base_bdevs": 4, 00:26:02.769 "num_base_bdevs_discovered": 4, 00:26:02.769 "num_base_bdevs_operational": 4, 00:26:02.769 "process": { 00:26:02.769 "type": "rebuild", 00:26:02.769 "target": "spare", 00:26:02.769 "progress": { 00:26:02.769 "blocks": 16384, 00:26:02.769 "percent": 25 00:26:02.769 } 00:26:02.769 }, 00:26:02.769 "base_bdevs_list": [ 00:26:02.769 { 00:26:02.769 "name": "spare", 00:26:02.769 "uuid": "4fdc53b9-7840-5539-a4fe-3e64e424f203", 00:26:02.769 "is_configured": true, 00:26:02.769 "data_offset": 0, 00:26:02.769 "data_size": 65536 00:26:02.769 }, 00:26:02.769 { 00:26:02.769 "name": "BaseBdev2", 00:26:02.769 "uuid": "404065ea-9a80-5945-967a-30f6ea372633", 00:26:02.769 "is_configured": true, 00:26:02.769 "data_offset": 0, 00:26:02.769 "data_size": 65536 00:26:02.769 }, 00:26:02.769 { 00:26:02.769 "name": "BaseBdev3", 00:26:02.769 "uuid": "a9715dbd-e51c-57f1-816d-4585e7f662b6", 00:26:02.769 "is_configured": true, 00:26:02.769 "data_offset": 0, 00:26:02.769 "data_size": 65536 00:26:02.769 }, 00:26:02.769 { 00:26:02.769 "name": "BaseBdev4", 00:26:02.769 "uuid": "0c0b347a-beb5-5d59-909d-75bb82721bbb", 00:26:02.769 "is_configured": true, 00:26:02.769 "data_offset": 0, 00:26:02.769 "data_size": 65536 00:26:02.769 } 00:26:02.769 ] 00:26:02.769 }' 00:26:02.769 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:02.769 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:02.769 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:02.769 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:02.769 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:26:02.769 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:26:02.769 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:02.769 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:26:02.769 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:03.026 [2024-07-23 15:19:58.414358] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:03.026 [2024-07-23 15:19:58.434546] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000002460 00:26:03.026 [2024-07-23 15:19:58.434598] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000002600 00:26:03.026 [2024-07-23 15:19:58.434978] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:26:03.283 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:26:03.283 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:26:03.283 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:03.283 15:19:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:03.283 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:03.283 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:03.283 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:03.283 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.283 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.283 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:03.283 "name": "raid_bdev1", 00:26:03.283 "uuid": "c89bbcde-2154-4315-8d7d-e230b9a916db", 00:26:03.283 "strip_size_kb": 0, 00:26:03.283 "state": "online", 00:26:03.283 "raid_level": "raid1", 00:26:03.283 "superblock": false, 00:26:03.283 "num_base_bdevs": 4, 00:26:03.283 "num_base_bdevs_discovered": 3, 00:26:03.283 "num_base_bdevs_operational": 3, 00:26:03.283 "process": { 00:26:03.283 "type": "rebuild", 00:26:03.283 "target": "spare", 00:26:03.283 "progress": { 00:26:03.283 "blocks": 20480, 00:26:03.284 "percent": 31 00:26:03.284 } 00:26:03.284 }, 00:26:03.284 "base_bdevs_list": [ 00:26:03.284 { 00:26:03.284 "name": "spare", 00:26:03.284 "uuid": "4fdc53b9-7840-5539-a4fe-3e64e424f203", 00:26:03.284 "is_configured": true, 00:26:03.284 "data_offset": 0, 00:26:03.284 "data_size": 65536 00:26:03.284 }, 00:26:03.284 { 00:26:03.284 "name": null, 00:26:03.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.284 "is_configured": false, 00:26:03.284 "data_offset": 0, 00:26:03.284 "data_size": 65536 00:26:03.284 }, 00:26:03.284 { 00:26:03.284 "name": "BaseBdev3", 00:26:03.284 "uuid": "a9715dbd-e51c-57f1-816d-4585e7f662b6", 00:26:03.284 "is_configured": true, 00:26:03.284 "data_offset": 0, 00:26:03.284 "data_size": 65536 00:26:03.284 }, 00:26:03.284 { 00:26:03.284 "name": "BaseBdev4", 00:26:03.284 "uuid": "0c0b347a-beb5-5d59-909d-75bb82721bbb", 00:26:03.284 "is_configured": true, 00:26:03.284 "data_offset": 0, 00:26:03.284 "data_size": 65536 00:26:03.284 } 00:26:03.284 ] 00:26:03.284 }' 00:26:03.284 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:03.284 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:03.284 [2024-07-23 15:19:58.657316] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:26:03.284 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:03.284 [2024-07-23 15:19:58.657878] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:26:03.284 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:03.284 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=718 00:26:03.284 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:03.284 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:03.284 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:26:03.284 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:03.284 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:03.284 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:03.284 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.284 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.541 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:03.541 "name": "raid_bdev1", 00:26:03.541 "uuid": "c89bbcde-2154-4315-8d7d-e230b9a916db", 00:26:03.541 "strip_size_kb": 0, 00:26:03.541 "state": "online", 00:26:03.541 "raid_level": "raid1", 00:26:03.541 "superblock": false, 00:26:03.541 "num_base_bdevs": 4, 00:26:03.541 "num_base_bdevs_discovered": 3, 00:26:03.541 "num_base_bdevs_operational": 3, 00:26:03.541 "process": { 00:26:03.541 "type": "rebuild", 00:26:03.541 "target": "spare", 00:26:03.541 "progress": { 00:26:03.541 "blocks": 22528, 00:26:03.541 "percent": 34 00:26:03.541 } 00:26:03.541 }, 00:26:03.541 "base_bdevs_list": [ 00:26:03.541 { 00:26:03.541 "name": "spare", 00:26:03.541 "uuid": "4fdc53b9-7840-5539-a4fe-3e64e424f203", 00:26:03.541 "is_configured": true, 00:26:03.541 "data_offset": 0, 00:26:03.541 "data_size": 65536 00:26:03.541 }, 00:26:03.541 { 00:26:03.541 "name": null, 00:26:03.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.541 "is_configured": false, 00:26:03.541 "data_offset": 0, 00:26:03.541 "data_size": 65536 00:26:03.541 }, 00:26:03.541 { 00:26:03.541 "name": "BaseBdev3", 00:26:03.541 "uuid": "a9715dbd-e51c-57f1-816d-4585e7f662b6", 00:26:03.541 "is_configured": true, 00:26:03.541 "data_offset": 0, 00:26:03.541 "data_size": 65536 00:26:03.541 }, 00:26:03.541 { 00:26:03.541 "name": "BaseBdev4", 00:26:03.541 "uuid": "0c0b347a-beb5-5d59-909d-75bb82721bbb", 00:26:03.541 "is_configured": true, 00:26:03.541 "data_offset": 0, 00:26:03.541 "data_size": 65536 00:26:03.541 } 00:26:03.541 ] 00:26:03.541 }' 00:26:03.541 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:03.541 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:03.541 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:03.541 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:03.541 15:19:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:04.474 [2024-07-23 15:19:59.702864] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:26:04.731 [2024-07-23 15:19:59.911781] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:26:04.731 15:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:04.731 15:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:04.731 15:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:04.731 15:19:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:04.731 15:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:04.731 15:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:04.731 15:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.731 15:19:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.989 [2024-07-23 15:20:00.216371] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:26:04.989 [2024-07-23 15:20:00.216876] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:26:04.989 15:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:04.989 "name": "raid_bdev1", 00:26:04.989 "uuid": "c89bbcde-2154-4315-8d7d-e230b9a916db", 00:26:04.989 "strip_size_kb": 0, 00:26:04.989 "state": "online", 00:26:04.989 "raid_level": "raid1", 00:26:04.989 "superblock": false, 00:26:04.989 "num_base_bdevs": 4, 00:26:04.989 "num_base_bdevs_discovered": 3, 00:26:04.989 "num_base_bdevs_operational": 3, 00:26:04.989 "process": { 00:26:04.989 "type": "rebuild", 00:26:04.989 "target": "spare", 00:26:04.989 "progress": { 00:26:04.989 "blocks": 45056, 00:26:04.989 "percent": 68 00:26:04.989 } 00:26:04.989 }, 00:26:04.989 "base_bdevs_list": [ 00:26:04.989 { 00:26:04.989 "name": "spare", 00:26:04.989 "uuid": "4fdc53b9-7840-5539-a4fe-3e64e424f203", 00:26:04.989 "is_configured": true, 00:26:04.989 "data_offset": 0, 00:26:04.989 "data_size": 65536 00:26:04.989 }, 00:26:04.989 { 00:26:04.989 "name": null, 00:26:04.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.989 "is_configured": false, 00:26:04.990 "data_offset": 0, 00:26:04.990 "data_size": 65536 00:26:04.990 }, 00:26:04.990 { 00:26:04.990 "name": "BaseBdev3", 00:26:04.990 "uuid": "a9715dbd-e51c-57f1-816d-4585e7f662b6", 00:26:04.990 "is_configured": true, 00:26:04.990 "data_offset": 0, 00:26:04.990 "data_size": 65536 00:26:04.990 }, 00:26:04.990 { 00:26:04.990 "name": "BaseBdev4", 00:26:04.990 "uuid": "0c0b347a-beb5-5d59-909d-75bb82721bbb", 00:26:04.990 "is_configured": true, 00:26:04.990 "data_offset": 0, 00:26:04.990 "data_size": 65536 00:26:04.990 } 00:26:04.990 ] 00:26:04.990 }' 00:26:04.990 15:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:04.990 15:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:04.990 15:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:04.990 15:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:04.990 15:20:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:05.247 [2024-07-23 15:20:00.529724] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:26:05.247 [2024-07-23 15:20:00.637137] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.179 [2024-07-23 15:20:01.352084] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:06.179 [2024-07-23 15:20:01.412883] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:06.179 [2024-07-23 15:20:01.416746] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:06.179 "name": "raid_bdev1", 00:26:06.179 "uuid": "c89bbcde-2154-4315-8d7d-e230b9a916db", 00:26:06.179 "strip_size_kb": 0, 00:26:06.179 "state": "online", 00:26:06.179 "raid_level": "raid1", 00:26:06.179 "superblock": false, 00:26:06.179 "num_base_bdevs": 4, 00:26:06.179 "num_base_bdevs_discovered": 3, 00:26:06.179 "num_base_bdevs_operational": 3, 00:26:06.179 "base_bdevs_list": [ 00:26:06.179 { 00:26:06.179 "name": "spare", 00:26:06.179 "uuid": "4fdc53b9-7840-5539-a4fe-3e64e424f203", 00:26:06.179 "is_configured": true, 00:26:06.179 "data_offset": 0, 00:26:06.179 "data_size": 65536 00:26:06.179 }, 00:26:06.179 { 00:26:06.179 "name": null, 00:26:06.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.179 "is_configured": false, 00:26:06.179 "data_offset": 0, 00:26:06.179 "data_size": 65536 00:26:06.179 }, 00:26:06.179 { 00:26:06.179 "name": "BaseBdev3", 00:26:06.179 "uuid": "a9715dbd-e51c-57f1-816d-4585e7f662b6", 00:26:06.179 "is_configured": true, 00:26:06.179 "data_offset": 0, 00:26:06.179 "data_size": 65536 00:26:06.179 }, 00:26:06.179 { 00:26:06.179 "name": "BaseBdev4", 00:26:06.179 "uuid": "0c0b347a-beb5-5d59-909d-75bb82721bbb", 00:26:06.179 "is_configured": true, 00:26:06.179 "data_offset": 0, 00:26:06.179 "data_size": 65536 00:26:06.179 } 00:26:06.179 ] 00:26:06.179 }' 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:06.179 15:20:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@184 -- # local target=none 00:26:06.180 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:06.180 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.180 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.437 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:06.437 "name": "raid_bdev1", 00:26:06.437 "uuid": "c89bbcde-2154-4315-8d7d-e230b9a916db", 00:26:06.437 "strip_size_kb": 0, 00:26:06.437 "state": "online", 00:26:06.437 "raid_level": "raid1", 00:26:06.437 "superblock": false, 00:26:06.437 "num_base_bdevs": 4, 00:26:06.437 "num_base_bdevs_discovered": 3, 00:26:06.437 "num_base_bdevs_operational": 3, 00:26:06.437 "base_bdevs_list": [ 00:26:06.437 { 00:26:06.437 "name": "spare", 00:26:06.437 "uuid": "4fdc53b9-7840-5539-a4fe-3e64e424f203", 00:26:06.437 "is_configured": true, 00:26:06.437 "data_offset": 0, 00:26:06.437 "data_size": 65536 00:26:06.437 }, 00:26:06.437 { 00:26:06.437 "name": null, 00:26:06.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.437 "is_configured": false, 00:26:06.437 "data_offset": 0, 00:26:06.437 "data_size": 65536 00:26:06.437 }, 00:26:06.437 { 00:26:06.437 "name": "BaseBdev3", 00:26:06.437 "uuid": "a9715dbd-e51c-57f1-816d-4585e7f662b6", 00:26:06.437 "is_configured": true, 00:26:06.437 "data_offset": 0, 00:26:06.437 "data_size": 65536 00:26:06.437 }, 00:26:06.437 { 00:26:06.437 "name": "BaseBdev4", 00:26:06.437 "uuid": "0c0b347a-beb5-5d59-909d-75bb82721bbb", 00:26:06.437 "is_configured": true, 00:26:06.437 "data_offset": 0, 00:26:06.437 "data_size": 65536 00:26:06.437 } 00:26:06.437 ] 00:26:06.437 }' 00:26:06.437 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:06.437 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:06.437 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:06.437 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:06.438 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:06.438 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:06.438 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:06.438 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:06.438 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:06.438 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:06.438 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:06.438 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:06.438 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:06.438 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:06.438 15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.438 
15:20:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.695 15:20:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:06.695 "name": "raid_bdev1", 00:26:06.695 "uuid": "c89bbcde-2154-4315-8d7d-e230b9a916db", 00:26:06.695 "strip_size_kb": 0, 00:26:06.695 "state": "online", 00:26:06.695 "raid_level": "raid1", 00:26:06.695 "superblock": false, 00:26:06.695 "num_base_bdevs": 4, 00:26:06.695 "num_base_bdevs_discovered": 3, 00:26:06.695 "num_base_bdevs_operational": 3, 00:26:06.695 "base_bdevs_list": [ 00:26:06.695 { 00:26:06.695 "name": "spare", 00:26:06.695 "uuid": "4fdc53b9-7840-5539-a4fe-3e64e424f203", 00:26:06.695 "is_configured": true, 00:26:06.695 "data_offset": 0, 00:26:06.695 "data_size": 65536 00:26:06.695 }, 00:26:06.695 { 00:26:06.695 "name": null, 00:26:06.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.695 "is_configured": false, 00:26:06.695 "data_offset": 0, 00:26:06.695 "data_size": 65536 00:26:06.695 }, 00:26:06.695 { 00:26:06.695 "name": "BaseBdev3", 00:26:06.695 "uuid": "a9715dbd-e51c-57f1-816d-4585e7f662b6", 00:26:06.695 "is_configured": true, 00:26:06.695 "data_offset": 0, 00:26:06.695 "data_size": 65536 00:26:06.695 }, 00:26:06.695 { 00:26:06.695 "name": "BaseBdev4", 00:26:06.695 "uuid": "0c0b347a-beb5-5d59-909d-75bb82721bbb", 00:26:06.695 "is_configured": true, 00:26:06.695 "data_offset": 0, 00:26:06.695 "data_size": 65536 00:26:06.695 } 00:26:06.695 ] 00:26:06.695 }' 00:26:06.695 15:20:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:06.695 15:20:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:07.261 15:20:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:07.261 [2024-07-23 15:20:02.655593] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:07.261 [2024-07-23 15:20:02.655643] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:07.518 00:26:07.518 Latency(us) 00:26:07.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.518 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:26:07.518 raid_bdev1 : 9.36 102.01 306.03 0.00 0.00 13502.20 284.77 118838.61 00:26:07.518 =================================================================================================================== 00:26:07.518 Total : 102.01 306.03 0.00 0.00 13502.20 284.77 118838.61 00:26:07.518 [2024-07-23 15:20:02.719219] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:07.518 [2024-07-23 15:20:02.719276] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:07.518 [2024-07-23 15:20:02.719380] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:07.518 [2024-07-23 15:20:02.719399] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:26:07.518 0 00:26:07.518 15:20:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.518 15:20:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:26:07.776 
15:20:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:26:07.776 15:20:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:26:07.776 15:20:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:07.776 15:20:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:26:07.776 15:20:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:07.776 15:20:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:26:07.776 15:20:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:07.776 15:20:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:07.776 15:20:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:07.776 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:26:07.776 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:07.776 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:07.776 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:26:08.033 /dev/nbd0 00:26:08.033 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:08.033 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:08.033 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:08.033 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:08.034 1+0 records in 00:26:08.034 1+0 records out 00:26:08.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291883 s, 14.0 MB/s 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:08.034 15:20:03 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:08.034 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:26:08.034 /dev/nbd1 00:26:08.291 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:08.291 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:08.291 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:08.291 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:26:08.291 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:08.291 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:08.291 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:08.291 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:26:08.291 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:08.291 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:08.291 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:08.291 1+0 records in 00:26:08.291 1+0 records out 00:26:08.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002329 s, 17.6 MB/s 00:26:08.291 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:08.291 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:26:08.292 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:08.292 15:20:03 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:08.292 15:20:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:26:08.292 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:08.292 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:08.292 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:08.292 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:26:08.292 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:08.292 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:08.292 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:08.292 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:26:08.292 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:08.292 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:08.549 15:20:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:26:08.549 /dev/nbd1 00:26:08.807 15:20:03 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:08.807 1+0 records in 00:26:08.807 1+0 records out 00:26:08.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250899 s, 16.3 MB/s 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:08.807 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 111218 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 111218 ']' 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 111218 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:09.074 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111218 00:26:09.349 killing process with pid 111218 00:26:09.349 Received shutdown signal, test time was about 11.169622 seconds 00:26:09.349 00:26:09.349 Latency(us) 00:26:09.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.349 =================================================================================================================== 00:26:09.349 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.349 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:09.349 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:09.349 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111218' 00:26:09.349 15:20:04 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 111218 00:26:09.349 [2024-07-23 15:20:04.523412] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:09.349 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 111218 00:26:09.349 [2024-07-23 15:20:04.569739] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:26:09.608 00:26:09.608 real 0m15.748s 00:26:09.608 user 0m23.356s 00:26:09.608 sys 0m2.904s 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:26:09.608 ************************************ 00:26:09.608 END TEST raid_rebuild_test_io 00:26:09.608 ************************************ 00:26:09.608 15:20:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:09.608 15:20:04 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:26:09.608 15:20:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:26:09.608 15:20:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.608 15:20:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:09.608 ************************************ 00:26:09.608 START TEST raid_rebuild_test_sb_io 00:26:09.608 ************************************ 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true true true 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 
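The verify_raid_bdev_process / verify_raid_bdev_state checks traced throughout the run above reduce to a single RPC query plus a couple of jq filters. The following is a minimal sketch of that pattern, reusing the socket path, rpc.py path, and bdev names recorded in this log; it is illustrative only and not output captured by the job:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_sock=/var/tmp/spdk-raid.sock

    # Fetch the descriptor for the RAID bdev under test.
    raid_bdev_info=$("$rpc" -s "$rpc_sock" bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1")')

    # While the spare is being rebuilt, .process.type is "rebuild" and
    # .process.target is "spare"; once the rebuild finishes (or no process
    # is active on the raid bdev) both fields fall back to "none".
    process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
    process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")

    if [[ $process_type == rebuild && $process_target == spare ]]; then
            echo "rebuild of spare still in progress"
    fi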
00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=111676 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 111676 /var/tmp/spdk-raid.sock 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 111676 ']' 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:09.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:09.608 15:20:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:09.608 [2024-07-23 15:20:04.933933] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:26:09.609 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:09.609 Zero copy mechanism will not be used. 
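The bdevperf invocation recorded just above is what drives the background I/O for raid_rebuild_test_sb_io; the 3145728-byte I/O size it reports corresponds to the -o 3M argument. Below is a hedged sketch of that launch sequence, with the command line copied from the log entry. The flag notes are best-effort readings of this particular invocation rather than authoritative option documentation, and waitforlisten is assumed to be the SPDK test helper sourced by the script (its pid-and-socket usage matches the trace above):

    cd /home/vagrant/spdk_repo/spdk

    # -r: RPC socket the bdev_raid.sh helpers talk to
    # -T raid_bdev1: run the workload against the RAID bdev under test
    # -t 60 -w randrw -M 50: 60 seconds of mixed random read/write
    # -o 3M -q 2: 3 MiB I/Os at queue depth 2
    # -z: start idle and wait for RPC configuration; -L bdev_raid: enable that debug log component
    ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
            -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Block until the bdevperf RPC server is listening before configuring bdevs.
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock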
00:26:09.609 [2024-07-23 15:20:04.934082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111676 ] 00:26:09.867 [2024-07-23 15:20:05.075761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.867 [2024-07-23 15:20:05.120337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.867 [2024-07-23 15:20:05.165163] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:10.802 15:20:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:10.802 15:20:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:26:10.802 15:20:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:10.802 15:20:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:10.802 BaseBdev1_malloc 00:26:10.802 15:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:10.802 [2024-07-23 15:20:06.200323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:10.802 [2024-07-23 15:20:06.200412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:10.802 [2024-07-23 15:20:06.200447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:26:10.802 [2024-07-23 15:20:06.200467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:10.802 [2024-07-23 15:20:06.203047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:10.802 [2024-07-23 15:20:06.203100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:10.802 BaseBdev1 00:26:10.802 15:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:10.802 15:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:11.060 BaseBdev2_malloc 00:26:11.060 15:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:11.317 [2024-07-23 15:20:06.617935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:11.317 [2024-07-23 15:20:06.618008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.317 [2024-07-23 15:20:06.618039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:26:11.317 [2024-07-23 15:20:06.618051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.317 [2024-07-23 15:20:06.620492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.317 [2024-07-23 15:20:06.620533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:11.317 BaseBdev2 00:26:11.317 15:20:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:11.318 15:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:11.575 BaseBdev3_malloc 00:26:11.575 15:20:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:11.575 [2024-07-23 15:20:06.995624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:11.575 [2024-07-23 15:20:06.995698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.575 [2024-07-23 15:20:06.995730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:26:11.575 [2024-07-23 15:20:06.995742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.575 [2024-07-23 15:20:06.998215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.575 [2024-07-23 15:20:06.998258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:11.575 BaseBdev3 00:26:11.833 15:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:11.833 15:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:11.833 BaseBdev4_malloc 00:26:11.833 15:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:12.091 [2024-07-23 15:20:07.353104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:12.091 [2024-07-23 15:20:07.353182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.092 [2024-07-23 15:20:07.353213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:26:12.092 [2024-07-23 15:20:07.353226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.092 [2024-07-23 15:20:07.355751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.092 [2024-07-23 15:20:07.355807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:12.092 BaseBdev4 00:26:12.092 15:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:12.350 spare_malloc 00:26:12.350 15:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:12.350 spare_delay 00:26:12.350 15:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:12.608 [2024-07-23 15:20:07.874694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:12.608 [2024-07-23 15:20:07.874784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.608 [2024-07-23 15:20:07.874838] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:26:12.608 [2024-07-23 15:20:07.874851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.608 [2024-07-23 15:20:07.877486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.608 [2024-07-23 15:20:07.877525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:12.608 spare 00:26:12.608 15:20:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:12.866 [2024-07-23 15:20:08.046830] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:12.866 [2024-07-23 15:20:08.048967] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:12.866 [2024-07-23 15:20:08.049037] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:12.866 [2024-07-23 15:20:08.049080] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:12.866 [2024-07-23 15:20:08.049267] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:26:12.866 [2024-07-23 15:20:08.049278] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:12.866 [2024-07-23 15:20:08.049415] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:26:12.866 [2024-07-23 15:20:08.049826] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:26:12.866 [2024-07-23 15:20:08.049856] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:26:12.866 [2024-07-23 15:20:08.049986] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:12.866 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:12.866 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:12.866 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:12.866 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:12.866 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:12.866 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:12.866 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:12.866 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:12.866 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:12.866 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:12.866 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.866 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.125 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:13.125 
"name": "raid_bdev1", 00:26:13.125 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:13.125 "strip_size_kb": 0, 00:26:13.125 "state": "online", 00:26:13.125 "raid_level": "raid1", 00:26:13.125 "superblock": true, 00:26:13.125 "num_base_bdevs": 4, 00:26:13.125 "num_base_bdevs_discovered": 4, 00:26:13.125 "num_base_bdevs_operational": 4, 00:26:13.125 "base_bdevs_list": [ 00:26:13.125 { 00:26:13.125 "name": "BaseBdev1", 00:26:13.125 "uuid": "ecc92bd2-87a1-5964-b3ff-ee32f87d3f6c", 00:26:13.125 "is_configured": true, 00:26:13.125 "data_offset": 2048, 00:26:13.125 "data_size": 63488 00:26:13.125 }, 00:26:13.125 { 00:26:13.125 "name": "BaseBdev2", 00:26:13.125 "uuid": "787aed7c-e5a1-5310-b693-45a3031524a3", 00:26:13.125 "is_configured": true, 00:26:13.125 "data_offset": 2048, 00:26:13.125 "data_size": 63488 00:26:13.125 }, 00:26:13.125 { 00:26:13.125 "name": "BaseBdev3", 00:26:13.125 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:13.125 "is_configured": true, 00:26:13.125 "data_offset": 2048, 00:26:13.125 "data_size": 63488 00:26:13.125 }, 00:26:13.125 { 00:26:13.125 "name": "BaseBdev4", 00:26:13.125 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:13.125 "is_configured": true, 00:26:13.125 "data_offset": 2048, 00:26:13.125 "data_size": 63488 00:26:13.125 } 00:26:13.125 ] 00:26:13.125 }' 00:26:13.125 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:13.125 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.383 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:26:13.383 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:13.641 [2024-07-23 15:20:08.815201] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:13.641 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:26:13.641 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:13.641 15:20:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.899 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:26:13.899 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:26:13.899 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:13.899 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:13.899 [2024-07-23 15:20:09.177149] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002460 00:26:13.899 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:13.899 Zero copy mechanism will not be used. 00:26:13.899 Running I/O for 60 seconds... 
00:26:14.157 [2024-07-23 15:20:09.343624] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:14.157 [2024-07-23 15:20:09.349576] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000002460 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.157 "name": "raid_bdev1", 00:26:14.157 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:14.157 "strip_size_kb": 0, 00:26:14.157 "state": "online", 00:26:14.157 "raid_level": "raid1", 00:26:14.157 "superblock": true, 00:26:14.157 "num_base_bdevs": 4, 00:26:14.157 "num_base_bdevs_discovered": 3, 00:26:14.157 "num_base_bdevs_operational": 3, 00:26:14.157 "base_bdevs_list": [ 00:26:14.157 { 00:26:14.157 "name": null, 00:26:14.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.157 "is_configured": false, 00:26:14.157 "data_offset": 2048, 00:26:14.157 "data_size": 63488 00:26:14.157 }, 00:26:14.157 { 00:26:14.157 "name": "BaseBdev2", 00:26:14.157 "uuid": "787aed7c-e5a1-5310-b693-45a3031524a3", 00:26:14.157 "is_configured": true, 00:26:14.157 "data_offset": 2048, 00:26:14.157 "data_size": 63488 00:26:14.157 }, 00:26:14.157 { 00:26:14.157 "name": "BaseBdev3", 00:26:14.157 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:14.157 "is_configured": true, 00:26:14.157 "data_offset": 2048, 00:26:14.157 "data_size": 63488 00:26:14.157 }, 00:26:14.157 { 00:26:14.157 "name": "BaseBdev4", 00:26:14.157 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:14.157 "is_configured": true, 00:26:14.157 "data_offset": 2048, 00:26:14.157 "data_size": 63488 00:26:14.157 } 00:26:14.157 ] 00:26:14.157 }' 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.157 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:14.723 15:20:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:14.723 [2024-07-23 
15:20:10.103133] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:14.723 [2024-07-23 15:20:10.147643] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002530 00:26:14.723 15:20:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:26:14.723 [2024-07-23 15:20:10.149940] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:14.982 [2024-07-23 15:20:10.260066] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:14.982 [2024-07-23 15:20:10.261344] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:15.241 [2024-07-23 15:20:10.479608] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:15.241 [2024-07-23 15:20:10.479957] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:15.500 [2024-07-23 15:20:10.881325] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:15.758 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:15.758 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:15.758 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:15.758 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:15.758 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:15.758 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.758 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.031 [2024-07-23 15:20:11.216677] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:26:16.031 [2024-07-23 15:20:11.217246] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:26:16.031 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:16.031 "name": "raid_bdev1", 00:26:16.031 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:16.031 "strip_size_kb": 0, 00:26:16.031 "state": "online", 00:26:16.031 "raid_level": "raid1", 00:26:16.031 "superblock": true, 00:26:16.031 "num_base_bdevs": 4, 00:26:16.031 "num_base_bdevs_discovered": 4, 00:26:16.031 "num_base_bdevs_operational": 4, 00:26:16.031 "process": { 00:26:16.031 "type": "rebuild", 00:26:16.031 "target": "spare", 00:26:16.031 "progress": { 00:26:16.031 "blocks": 16384, 00:26:16.031 "percent": 25 00:26:16.031 } 00:26:16.031 }, 00:26:16.031 "base_bdevs_list": [ 00:26:16.031 { 00:26:16.031 "name": "spare", 00:26:16.031 "uuid": "0176d8f0-e8a1-5576-84a9-0d29b3d9a17c", 00:26:16.031 "is_configured": true, 00:26:16.031 "data_offset": 2048, 00:26:16.031 "data_size": 63488 00:26:16.031 }, 00:26:16.031 { 00:26:16.031 "name": "BaseBdev2", 00:26:16.031 "uuid": "787aed7c-e5a1-5310-b693-45a3031524a3", 00:26:16.031 "is_configured": true, 00:26:16.031 
"data_offset": 2048, 00:26:16.031 "data_size": 63488 00:26:16.031 }, 00:26:16.031 { 00:26:16.031 "name": "BaseBdev3", 00:26:16.031 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:16.031 "is_configured": true, 00:26:16.031 "data_offset": 2048, 00:26:16.031 "data_size": 63488 00:26:16.031 }, 00:26:16.031 { 00:26:16.031 "name": "BaseBdev4", 00:26:16.031 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:16.031 "is_configured": true, 00:26:16.031 "data_offset": 2048, 00:26:16.031 "data_size": 63488 00:26:16.031 } 00:26:16.031 ] 00:26:16.031 }' 00:26:16.031 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:16.031 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:16.031 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:16.031 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:16.031 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:16.289 [2024-07-23 15:20:11.640686] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:26:16.289 [2024-07-23 15:20:11.657941] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:16.546 [2024-07-23 15:20:11.856930] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:16.546 [2024-07-23 15:20:11.866843] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.546 [2024-07-23 15:20:11.866891] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:16.546 [2024-07-23 15:20:11.866917] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:16.546 [2024-07-23 15:20:11.880374] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000002460 00:26:16.546 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:16.546 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:16.546 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:16.546 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:16.546 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:16.546 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:16.546 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:16.546 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:16.546 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:16.546 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:16.546 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.546 15:20:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:26:16.805 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:16.805 "name": "raid_bdev1", 00:26:16.805 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:16.805 "strip_size_kb": 0, 00:26:16.805 "state": "online", 00:26:16.805 "raid_level": "raid1", 00:26:16.805 "superblock": true, 00:26:16.805 "num_base_bdevs": 4, 00:26:16.805 "num_base_bdevs_discovered": 3, 00:26:16.805 "num_base_bdevs_operational": 3, 00:26:16.805 "base_bdevs_list": [ 00:26:16.805 { 00:26:16.805 "name": null, 00:26:16.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.805 "is_configured": false, 00:26:16.805 "data_offset": 2048, 00:26:16.805 "data_size": 63488 00:26:16.805 }, 00:26:16.805 { 00:26:16.805 "name": "BaseBdev2", 00:26:16.805 "uuid": "787aed7c-e5a1-5310-b693-45a3031524a3", 00:26:16.805 "is_configured": true, 00:26:16.805 "data_offset": 2048, 00:26:16.805 "data_size": 63488 00:26:16.805 }, 00:26:16.805 { 00:26:16.805 "name": "BaseBdev3", 00:26:16.805 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:16.805 "is_configured": true, 00:26:16.805 "data_offset": 2048, 00:26:16.805 "data_size": 63488 00:26:16.805 }, 00:26:16.805 { 00:26:16.805 "name": "BaseBdev4", 00:26:16.805 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:16.805 "is_configured": true, 00:26:16.805 "data_offset": 2048, 00:26:16.805 "data_size": 63488 00:26:16.805 } 00:26:16.805 ] 00:26:16.805 }' 00:26:16.805 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:16.805 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:17.063 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:17.063 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:17.063 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:17.063 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:17.063 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:17.063 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.063 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.323 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:17.323 "name": "raid_bdev1", 00:26:17.323 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:17.323 "strip_size_kb": 0, 00:26:17.323 "state": "online", 00:26:17.323 "raid_level": "raid1", 00:26:17.323 "superblock": true, 00:26:17.323 "num_base_bdevs": 4, 00:26:17.323 "num_base_bdevs_discovered": 3, 00:26:17.323 "num_base_bdevs_operational": 3, 00:26:17.323 "base_bdevs_list": [ 00:26:17.323 { 00:26:17.323 "name": null, 00:26:17.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.323 "is_configured": false, 00:26:17.323 "data_offset": 2048, 00:26:17.323 "data_size": 63488 00:26:17.323 }, 00:26:17.323 { 00:26:17.323 "name": "BaseBdev2", 00:26:17.323 "uuid": "787aed7c-e5a1-5310-b693-45a3031524a3", 00:26:17.323 "is_configured": true, 00:26:17.323 "data_offset": 2048, 00:26:17.323 "data_size": 63488 00:26:17.323 }, 00:26:17.323 { 00:26:17.323 "name": "BaseBdev3", 00:26:17.323 
"uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:17.323 "is_configured": true, 00:26:17.323 "data_offset": 2048, 00:26:17.323 "data_size": 63488 00:26:17.323 }, 00:26:17.323 { 00:26:17.323 "name": "BaseBdev4", 00:26:17.323 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:17.323 "is_configured": true, 00:26:17.323 "data_offset": 2048, 00:26:17.323 "data_size": 63488 00:26:17.323 } 00:26:17.323 ] 00:26:17.323 }' 00:26:17.323 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:17.581 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:17.582 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:17.582 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:17.582 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:17.582 [2024-07-23 15:20:12.943271] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:17.582 [2024-07-23 15:20:12.980683] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002600 00:26:17.582 15:20:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:17.582 [2024-07-23 15:20:12.983423] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:17.840 [2024-07-23 15:20:13.084966] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:17.840 [2024-07-23 15:20:13.085497] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:26:18.099 [2024-07-23 15:20:13.294804] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:18.099 [2024-07-23 15:20:13.295361] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:26:18.356 [2024-07-23 15:20:13.778034] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:26:18.613 15:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:18.613 15:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:18.613 15:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:18.614 15:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:18.614 15:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:18.614 15:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.614 15:20:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.871 [2024-07-23 15:20:14.115893] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:26:18.871 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:18.871 "name": "raid_bdev1", 
00:26:18.871 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:18.871 "strip_size_kb": 0, 00:26:18.871 "state": "online", 00:26:18.871 "raid_level": "raid1", 00:26:18.871 "superblock": true, 00:26:18.871 "num_base_bdevs": 4, 00:26:18.871 "num_base_bdevs_discovered": 4, 00:26:18.871 "num_base_bdevs_operational": 4, 00:26:18.871 "process": { 00:26:18.871 "type": "rebuild", 00:26:18.871 "target": "spare", 00:26:18.871 "progress": { 00:26:18.871 "blocks": 14336, 00:26:18.871 "percent": 22 00:26:18.871 } 00:26:18.871 }, 00:26:18.871 "base_bdevs_list": [ 00:26:18.871 { 00:26:18.871 "name": "spare", 00:26:18.871 "uuid": "0176d8f0-e8a1-5576-84a9-0d29b3d9a17c", 00:26:18.871 "is_configured": true, 00:26:18.871 "data_offset": 2048, 00:26:18.871 "data_size": 63488 00:26:18.871 }, 00:26:18.871 { 00:26:18.871 "name": "BaseBdev2", 00:26:18.871 "uuid": "787aed7c-e5a1-5310-b693-45a3031524a3", 00:26:18.871 "is_configured": true, 00:26:18.871 "data_offset": 2048, 00:26:18.871 "data_size": 63488 00:26:18.871 }, 00:26:18.871 { 00:26:18.871 "name": "BaseBdev3", 00:26:18.871 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:18.871 "is_configured": true, 00:26:18.871 "data_offset": 2048, 00:26:18.871 "data_size": 63488 00:26:18.871 }, 00:26:18.871 { 00:26:18.871 "name": "BaseBdev4", 00:26:18.871 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:18.871 "is_configured": true, 00:26:18.871 "data_offset": 2048, 00:26:18.871 "data_size": 63488 00:26:18.871 } 00:26:18.871 ] 00:26:18.871 }' 00:26:18.871 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:18.871 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:18.871 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:18.871 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:18.871 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:26:18.871 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:26:18.871 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:26:18.872 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:26:18.872 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:18.872 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:26:18.872 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:19.130 [2024-07-23 15:20:14.340708] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:26:19.130 [2024-07-23 15:20:14.341612] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:26:19.130 [2024-07-23 15:20:14.481353] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:19.389 [2024-07-23 15:20:14.766261] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000002460 00:26:19.389 [2024-07-23 15:20:14.766305] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000002600 00:26:19.389 15:20:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:26:19.389 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:26:19.389 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:19.389 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:19.389 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:19.389 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:19.389 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:19.389 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.389 15:20:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:19.647 [2024-07-23 15:20:14.897206] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:26:19.647 [2024-07-23 15:20:14.898366] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:26:19.647 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:19.647 "name": "raid_bdev1", 00:26:19.647 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:19.647 "strip_size_kb": 0, 00:26:19.647 "state": "online", 00:26:19.647 "raid_level": "raid1", 00:26:19.647 "superblock": true, 00:26:19.647 "num_base_bdevs": 4, 00:26:19.647 "num_base_bdevs_discovered": 3, 00:26:19.647 "num_base_bdevs_operational": 3, 00:26:19.647 "process": { 00:26:19.647 "type": "rebuild", 00:26:19.647 "target": "spare", 00:26:19.647 "progress": { 00:26:19.647 "blocks": 20480, 00:26:19.647 "percent": 32 00:26:19.648 } 00:26:19.648 }, 00:26:19.648 "base_bdevs_list": [ 00:26:19.648 { 00:26:19.648 "name": "spare", 00:26:19.648 "uuid": "0176d8f0-e8a1-5576-84a9-0d29b3d9a17c", 00:26:19.648 "is_configured": true, 00:26:19.648 "data_offset": 2048, 00:26:19.648 "data_size": 63488 00:26:19.648 }, 00:26:19.648 { 00:26:19.648 "name": null, 00:26:19.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.648 "is_configured": false, 00:26:19.648 "data_offset": 2048, 00:26:19.648 "data_size": 63488 00:26:19.648 }, 00:26:19.648 { 00:26:19.648 "name": "BaseBdev3", 00:26:19.648 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:19.648 "is_configured": true, 00:26:19.648 "data_offset": 2048, 00:26:19.648 "data_size": 63488 00:26:19.648 }, 00:26:19.648 { 00:26:19.648 "name": "BaseBdev4", 00:26:19.648 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:19.648 "is_configured": true, 00:26:19.648 "data_offset": 2048, 00:26:19.648 "data_size": 63488 00:26:19.648 } 00:26:19.648 ] 00:26:19.648 }' 00:26:19.648 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:19.648 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:19.648 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:19.648 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:19.648 15:20:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=735 00:26:19.648 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:19.648 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:19.648 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:19.648 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:19.648 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:19.648 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:19.907 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.907 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:19.907 [2024-07-23 15:20:15.128586] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:26:19.907 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:19.907 "name": "raid_bdev1", 00:26:19.907 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:19.907 "strip_size_kb": 0, 00:26:19.907 "state": "online", 00:26:19.907 "raid_level": "raid1", 00:26:19.907 "superblock": true, 00:26:19.907 "num_base_bdevs": 4, 00:26:19.907 "num_base_bdevs_discovered": 3, 00:26:19.907 "num_base_bdevs_operational": 3, 00:26:19.907 "process": { 00:26:19.907 "type": "rebuild", 00:26:19.907 "target": "spare", 00:26:19.907 "progress": { 00:26:19.907 "blocks": 22528, 00:26:19.907 "percent": 35 00:26:19.907 } 00:26:19.907 }, 00:26:19.907 "base_bdevs_list": [ 00:26:19.907 { 00:26:19.907 "name": "spare", 00:26:19.907 "uuid": "0176d8f0-e8a1-5576-84a9-0d29b3d9a17c", 00:26:19.907 "is_configured": true, 00:26:19.907 "data_offset": 2048, 00:26:19.907 "data_size": 63488 00:26:19.907 }, 00:26:19.907 { 00:26:19.907 "name": null, 00:26:19.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.907 "is_configured": false, 00:26:19.907 "data_offset": 2048, 00:26:19.907 "data_size": 63488 00:26:19.907 }, 00:26:19.907 { 00:26:19.907 "name": "BaseBdev3", 00:26:19.907 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:19.907 "is_configured": true, 00:26:19.907 "data_offset": 2048, 00:26:19.907 "data_size": 63488 00:26:19.907 }, 00:26:19.907 { 00:26:19.907 "name": "BaseBdev4", 00:26:19.907 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:19.907 "is_configured": true, 00:26:19.907 "data_offset": 2048, 00:26:19.907 "data_size": 63488 00:26:19.907 } 00:26:19.907 ] 00:26:19.907 }' 00:26:19.907 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:19.907 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:19.907 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:19.907 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:19.907 15:20:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:20.165 [2024-07-23 15:20:15.464208] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:26:20.165 [2024-07-23 15:20:15.579515] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:26:20.165 [2024-07-23 15:20:15.580170] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:26:20.733 [2024-07-23 15:20:15.912168] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:26:20.992 15:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:20.992 15:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:20.992 15:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:20.992 15:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:20.992 15:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:20.992 15:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:20.992 15:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.992 15:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.251 15:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:21.251 "name": "raid_bdev1", 00:26:21.251 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:21.251 "strip_size_kb": 0, 00:26:21.251 "state": "online", 00:26:21.251 "raid_level": "raid1", 00:26:21.251 "superblock": true, 00:26:21.251 "num_base_bdevs": 4, 00:26:21.251 "num_base_bdevs_discovered": 3, 00:26:21.251 "num_base_bdevs_operational": 3, 00:26:21.251 "process": { 00:26:21.251 "type": "rebuild", 00:26:21.251 "target": "spare", 00:26:21.251 "progress": { 00:26:21.251 "blocks": 43008, 00:26:21.251 "percent": 67 00:26:21.251 } 00:26:21.251 }, 00:26:21.251 "base_bdevs_list": [ 00:26:21.251 { 00:26:21.251 "name": "spare", 00:26:21.251 "uuid": "0176d8f0-e8a1-5576-84a9-0d29b3d9a17c", 00:26:21.251 "is_configured": true, 00:26:21.251 "data_offset": 2048, 00:26:21.251 "data_size": 63488 00:26:21.251 }, 00:26:21.251 { 00:26:21.251 "name": null, 00:26:21.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.251 "is_configured": false, 00:26:21.251 "data_offset": 2048, 00:26:21.251 "data_size": 63488 00:26:21.251 }, 00:26:21.251 { 00:26:21.251 "name": "BaseBdev3", 00:26:21.251 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:21.251 "is_configured": true, 00:26:21.251 "data_offset": 2048, 00:26:21.251 "data_size": 63488 00:26:21.251 }, 00:26:21.251 { 00:26:21.251 "name": "BaseBdev4", 00:26:21.251 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:21.251 "is_configured": true, 00:26:21.251 "data_offset": 2048, 00:26:21.251 "data_size": 63488 00:26:21.251 } 00:26:21.251 ] 00:26:21.251 }' 00:26:21.251 15:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:21.251 15:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:21.251 15:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:21.251 
15:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:21.251 15:20:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:21.510 [2024-07-23 15:20:16.903736] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:26:21.768 [2024-07-23 15:20:17.132580] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:26:22.026 [2024-07-23 15:20:17.350886] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:26:22.285 [2024-07-23 15:20:17.568095] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:26:22.285 15:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:22.285 15:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:22.285 15:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:22.285 15:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:22.285 15:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:22.285 15:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:22.285 15:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.285 15:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.544 [2024-07-23 15:20:17.794490] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:22.544 15:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:22.544 "name": "raid_bdev1", 00:26:22.544 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:22.544 "strip_size_kb": 0, 00:26:22.544 "state": "online", 00:26:22.544 "raid_level": "raid1", 00:26:22.544 "superblock": true, 00:26:22.544 "num_base_bdevs": 4, 00:26:22.544 "num_base_bdevs_discovered": 3, 00:26:22.544 "num_base_bdevs_operational": 3, 00:26:22.544 "process": { 00:26:22.544 "type": "rebuild", 00:26:22.544 "target": "spare", 00:26:22.544 "progress": { 00:26:22.544 "blocks": 63488, 00:26:22.544 "percent": 100 00:26:22.544 } 00:26:22.544 }, 00:26:22.544 "base_bdevs_list": [ 00:26:22.544 { 00:26:22.544 "name": "spare", 00:26:22.544 "uuid": "0176d8f0-e8a1-5576-84a9-0d29b3d9a17c", 00:26:22.544 "is_configured": true, 00:26:22.544 "data_offset": 2048, 00:26:22.544 "data_size": 63488 00:26:22.544 }, 00:26:22.544 { 00:26:22.544 "name": null, 00:26:22.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:22.544 "is_configured": false, 00:26:22.544 "data_offset": 2048, 00:26:22.544 "data_size": 63488 00:26:22.544 }, 00:26:22.544 { 00:26:22.544 "name": "BaseBdev3", 00:26:22.544 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:22.544 "is_configured": true, 00:26:22.544 "data_offset": 2048, 00:26:22.544 "data_size": 63488 00:26:22.544 }, 00:26:22.544 { 00:26:22.544 "name": "BaseBdev4", 00:26:22.544 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:22.544 "is_configured": true, 00:26:22.544 "data_offset": 2048, 00:26:22.544 
"data_size": 63488 00:26:22.544 } 00:26:22.544 ] 00:26:22.544 }' 00:26:22.544 15:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:22.544 15:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:22.544 15:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:22.544 [2024-07-23 15:20:17.900631] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:22.544 [2024-07-23 15:20:17.904405] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:22.544 15:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:22.544 15:20:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:23.919 15:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:23.919 15:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:23.919 15:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:23.920 15:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:23.920 15:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:23.920 15:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:23.920 15:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.920 15:20:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.920 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:23.920 "name": "raid_bdev1", 00:26:23.920 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:23.920 "strip_size_kb": 0, 00:26:23.920 "state": "online", 00:26:23.920 "raid_level": "raid1", 00:26:23.920 "superblock": true, 00:26:23.920 "num_base_bdevs": 4, 00:26:23.920 "num_base_bdevs_discovered": 3, 00:26:23.920 "num_base_bdevs_operational": 3, 00:26:23.920 "base_bdevs_list": [ 00:26:23.920 { 00:26:23.920 "name": "spare", 00:26:23.920 "uuid": "0176d8f0-e8a1-5576-84a9-0d29b3d9a17c", 00:26:23.920 "is_configured": true, 00:26:23.920 "data_offset": 2048, 00:26:23.920 "data_size": 63488 00:26:23.920 }, 00:26:23.920 { 00:26:23.920 "name": null, 00:26:23.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.920 "is_configured": false, 00:26:23.920 "data_offset": 2048, 00:26:23.920 "data_size": 63488 00:26:23.920 }, 00:26:23.920 { 00:26:23.920 "name": "BaseBdev3", 00:26:23.920 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:23.920 "is_configured": true, 00:26:23.920 "data_offset": 2048, 00:26:23.920 "data_size": 63488 00:26:23.920 }, 00:26:23.920 { 00:26:23.920 "name": "BaseBdev4", 00:26:23.920 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:23.920 "is_configured": true, 00:26:23.920 "data_offset": 2048, 00:26:23.920 "data_size": 63488 00:26:23.920 } 00:26:23.920 ] 00:26:23.920 }' 00:26:23.920 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:23.920 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:23.920 
15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:23.920 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:23.920 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:26:23.920 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:23.920 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:23.920 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:23.920 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:23.920 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:23.920 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.920 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:24.178 "name": "raid_bdev1", 00:26:24.178 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:24.178 "strip_size_kb": 0, 00:26:24.178 "state": "online", 00:26:24.178 "raid_level": "raid1", 00:26:24.178 "superblock": true, 00:26:24.178 "num_base_bdevs": 4, 00:26:24.178 "num_base_bdevs_discovered": 3, 00:26:24.178 "num_base_bdevs_operational": 3, 00:26:24.178 "base_bdevs_list": [ 00:26:24.178 { 00:26:24.178 "name": "spare", 00:26:24.178 "uuid": "0176d8f0-e8a1-5576-84a9-0d29b3d9a17c", 00:26:24.178 "is_configured": true, 00:26:24.178 "data_offset": 2048, 00:26:24.178 "data_size": 63488 00:26:24.178 }, 00:26:24.178 { 00:26:24.178 "name": null, 00:26:24.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.178 "is_configured": false, 00:26:24.178 "data_offset": 2048, 00:26:24.178 "data_size": 63488 00:26:24.178 }, 00:26:24.178 { 00:26:24.178 "name": "BaseBdev3", 00:26:24.178 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:24.178 "is_configured": true, 00:26:24.178 "data_offset": 2048, 00:26:24.178 "data_size": 63488 00:26:24.178 }, 00:26:24.178 { 00:26:24.178 "name": "BaseBdev4", 00:26:24.178 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:24.178 "is_configured": true, 00:26:24.178 "data_offset": 2048, 00:26:24.178 "data_size": 63488 00:26:24.178 } 00:26:24.178 ] 00:26:24.178 }' 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
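The repeated verify_raid_bdev_process/sleep pairs above form a polling loop: the harness re-reads raid_bdev1 once a second and keeps waiting while the JSON still reports an active rebuild, stopping either when the shell's SECONDS counter reaches the timeout (735 here) or, as in this run, when the process block disappears after the rebuild completes. Reusing the RPC shorthand from the earlier sketches and the jq expressions shown in the log, the loop is roughly:

    timeout=735
    while (( SECONDS < timeout )); do
        info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        ptype=$(jq -r '.process.type // "none"' <<< "$info")
        ptarget=$(jq -r '.process.target // "none"' <<< "$info")
        # While the rebuild is in flight these report "rebuild" and "spare"; once it
        # finishes the process object is gone and both default to "none".
        [[ $ptype == rebuild && $ptarget == spare ]] || break
        sleep 1
    done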
00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.178 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.437 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:24.437 "name": "raid_bdev1", 00:26:24.437 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:24.437 "strip_size_kb": 0, 00:26:24.437 "state": "online", 00:26:24.437 "raid_level": "raid1", 00:26:24.437 "superblock": true, 00:26:24.437 "num_base_bdevs": 4, 00:26:24.437 "num_base_bdevs_discovered": 3, 00:26:24.437 "num_base_bdevs_operational": 3, 00:26:24.437 "base_bdevs_list": [ 00:26:24.437 { 00:26:24.437 "name": "spare", 00:26:24.437 "uuid": "0176d8f0-e8a1-5576-84a9-0d29b3d9a17c", 00:26:24.437 "is_configured": true, 00:26:24.437 "data_offset": 2048, 00:26:24.437 "data_size": 63488 00:26:24.437 }, 00:26:24.437 { 00:26:24.437 "name": null, 00:26:24.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.437 "is_configured": false, 00:26:24.437 "data_offset": 2048, 00:26:24.437 "data_size": 63488 00:26:24.437 }, 00:26:24.437 { 00:26:24.437 "name": "BaseBdev3", 00:26:24.437 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:24.437 "is_configured": true, 00:26:24.437 "data_offset": 2048, 00:26:24.437 "data_size": 63488 00:26:24.437 }, 00:26:24.437 { 00:26:24.437 "name": "BaseBdev4", 00:26:24.437 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:24.437 "is_configured": true, 00:26:24.437 "data_offset": 2048, 00:26:24.437 "data_size": 63488 00:26:24.437 } 00:26:24.437 ] 00:26:24.437 }' 00:26:24.437 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:24.437 15:20:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:24.695 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:24.953 [2024-07-23 15:20:20.336776] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:24.953 [2024-07-23 15:20:20.336984] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:25.212 00:26:25.212 Latency(us) 00:26:25.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.212 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:26:25.212 raid_bdev1 : 11.22 92.54 277.62 0.00 0.00 15227.66 276.97 115343.36 00:26:25.212 =================================================================================================================== 00:26:25.212 Total : 92.54 277.62 0.00 0.00 15227.66 276.97 115343.36 00:26:25.212 
[2024-07-23 15:20:20.400801] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:25.212 [2024-07-23 15:20:20.400958] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:25.212 [2024-07-23 15:20:20.401112] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:25.212 [2024-07-23 15:20:20.401418] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:26:25.212 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.212 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:26:25.471 /dev/nbd0 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:26:25.471 1+0 records in 00:26:25.471 1+0 records out 00:26:25.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272271 s, 15.0 MB/s 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:25.471 15:20:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:26:25.730 /dev/nbd1 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@871 -- # break 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:25.730 1+0 records in 00:26:25.730 1+0 records out 00:26:25.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369685 s, 11.1 MB/s 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:25.730 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:25.988 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:26:25.988 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:25.988 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:25.988 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:25.988 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:26:25.988 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:25.988 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 
/dev/nbd1 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:26.247 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:26:26.505 /dev/nbd1 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:26.505 1+0 records in 00:26:26.505 1+0 records out 00:26:26.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549247 s, 7.5 MB/s 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:26.505 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:26.506 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:26.506 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:26:26.506 15:20:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:26.506 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:26.506 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:26.506 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:26:26.506 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:26.506 15:20:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:26.764 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:27.023 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:27.023 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:27.023 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:27.023 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:27.023 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:27.023 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:27.023 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:26:27.023 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:27.023 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:26:27.023 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:27.023 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:27.280 [2024-07-23 15:20:22.543901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:27.280 [2024-07-23 15:20:22.543980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:27.280 [2024-07-23 15:20:22.544016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:26:27.280 [2024-07-23 15:20:22.544029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:27.281 [2024-07-23 15:20:22.546915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:27.281 [2024-07-23 15:20:22.547047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:27.281 [2024-07-23 15:20:22.547218] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:27.281 [2024-07-23 15:20:22.547307] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:27.281 [2024-07-23 15:20:22.547686] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:27.281 [2024-07-23 15:20:22.547891] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:27.281 spare 00:26:27.281 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:27.281 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:27.281 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:27.281 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:27.281 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:27.281 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:27.281 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:27.281 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:27.281 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:27.281 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:27.281 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.281 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.281 [2024-07-23 15:20:22.648138] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ae80 00:26:27.281 [2024-07-23 15:20:22.648312] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:27.281 [2024-07-23 15:20:22.648570] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000333a0 00:26:27.281 [2024-07-23 15:20:22.649030] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ae80 00:26:27.281 [2024-07-23 15:20:22.649147] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ae80 00:26:27.281 [2024-07-23 15:20:22.649370] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:27.539 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:27.539 "name": "raid_bdev1", 00:26:27.539 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:27.539 "strip_size_kb": 0, 00:26:27.539 "state": "online", 00:26:27.539 "raid_level": "raid1", 00:26:27.539 "superblock": true, 00:26:27.539 "num_base_bdevs": 4, 00:26:27.539 "num_base_bdevs_discovered": 3, 00:26:27.539 "num_base_bdevs_operational": 3, 00:26:27.539 "base_bdevs_list": [ 00:26:27.539 { 00:26:27.539 "name": "spare", 00:26:27.539 "uuid": "0176d8f0-e8a1-5576-84a9-0d29b3d9a17c", 00:26:27.539 "is_configured": true, 00:26:27.539 "data_offset": 2048, 00:26:27.539 "data_size": 63488 00:26:27.539 }, 00:26:27.539 { 00:26:27.539 "name": null, 00:26:27.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.539 "is_configured": false, 00:26:27.539 "data_offset": 2048, 00:26:27.539 "data_size": 63488 00:26:27.539 }, 00:26:27.539 { 00:26:27.539 "name": "BaseBdev3", 00:26:27.539 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:27.539 "is_configured": true, 00:26:27.539 "data_offset": 2048, 00:26:27.539 "data_size": 63488 00:26:27.539 }, 00:26:27.539 { 00:26:27.539 "name": "BaseBdev4", 00:26:27.539 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:27.539 "is_configured": true, 00:26:27.539 "data_offset": 2048, 00:26:27.539 "data_size": 63488 00:26:27.539 } 00:26:27.539 ] 00:26:27.539 }' 00:26:27.539 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:27.539 15:20:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:27.796 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:27.796 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:27.796 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:27.796 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:27.796 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:27.796 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.796 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.056 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:28.056 "name": "raid_bdev1", 00:26:28.056 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:28.056 "strip_size_kb": 0, 00:26:28.056 "state": "online", 00:26:28.056 "raid_level": "raid1", 00:26:28.056 "superblock": true, 00:26:28.056 "num_base_bdevs": 4, 00:26:28.056 "num_base_bdevs_discovered": 3, 00:26:28.056 "num_base_bdevs_operational": 3, 00:26:28.056 "base_bdevs_list": [ 00:26:28.056 { 00:26:28.056 "name": "spare", 00:26:28.056 "uuid": "0176d8f0-e8a1-5576-84a9-0d29b3d9a17c", 00:26:28.056 "is_configured": true, 00:26:28.056 "data_offset": 2048, 00:26:28.056 "data_size": 63488 00:26:28.056 }, 00:26:28.056 { 00:26:28.056 "name": null, 00:26:28.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.056 "is_configured": false, 00:26:28.056 "data_offset": 2048, 00:26:28.056 "data_size": 63488 00:26:28.056 }, 00:26:28.056 { 00:26:28.056 "name": "BaseBdev3", 
00:26:28.056 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:28.056 "is_configured": true, 00:26:28.056 "data_offset": 2048, 00:26:28.056 "data_size": 63488 00:26:28.056 }, 00:26:28.056 { 00:26:28.056 "name": "BaseBdev4", 00:26:28.056 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:28.056 "is_configured": true, 00:26:28.056 "data_offset": 2048, 00:26:28.056 "data_size": 63488 00:26:28.056 } 00:26:28.056 ] 00:26:28.056 }' 00:26:28.056 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:28.056 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:28.056 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:28.056 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:28.056 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:28.056 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.314 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:26:28.314 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:28.314 [2024-07-23 15:20:23.732468] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:28.572 "name": "raid_bdev1", 00:26:28.572 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:28.572 "strip_size_kb": 0, 00:26:28.572 "state": "online", 00:26:28.572 "raid_level": "raid1", 00:26:28.572 "superblock": true, 00:26:28.572 "num_base_bdevs": 4, 00:26:28.572 "num_base_bdevs_discovered": 2, 00:26:28.572 "num_base_bdevs_operational": 2, 00:26:28.572 "base_bdevs_list": [ 00:26:28.572 { 
00:26:28.572 "name": null, 00:26:28.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.572 "is_configured": false, 00:26:28.572 "data_offset": 2048, 00:26:28.572 "data_size": 63488 00:26:28.572 }, 00:26:28.572 { 00:26:28.572 "name": null, 00:26:28.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.572 "is_configured": false, 00:26:28.572 "data_offset": 2048, 00:26:28.572 "data_size": 63488 00:26:28.572 }, 00:26:28.572 { 00:26:28.572 "name": "BaseBdev3", 00:26:28.572 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:28.572 "is_configured": true, 00:26:28.572 "data_offset": 2048, 00:26:28.572 "data_size": 63488 00:26:28.572 }, 00:26:28.572 { 00:26:28.572 "name": "BaseBdev4", 00:26:28.572 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:28.572 "is_configured": true, 00:26:28.572 "data_offset": 2048, 00:26:28.572 "data_size": 63488 00:26:28.572 } 00:26:28.572 ] 00:26:28.572 }' 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:28.572 15:20:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:29.138 15:20:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:29.138 [2024-07-23 15:20:24.512755] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:29.138 [2024-07-23 15:20:24.513137] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:26:29.138 [2024-07-23 15:20:24.513286] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:29.138 [2024-07-23 15:20:24.513439] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:29.138 [2024-07-23 15:20:24.517516] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000033470 00:26:29.138 [2024-07-23 15:20:24.519856] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:29.138 15:20:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:30.540 "name": "raid_bdev1", 00:26:30.540 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:30.540 "strip_size_kb": 0, 00:26:30.540 "state": "online", 00:26:30.540 "raid_level": "raid1", 00:26:30.540 "superblock": true, 00:26:30.540 "num_base_bdevs": 4, 00:26:30.540 "num_base_bdevs_discovered": 3, 00:26:30.540 
"num_base_bdevs_operational": 3, 00:26:30.540 "process": { 00:26:30.540 "type": "rebuild", 00:26:30.540 "target": "spare", 00:26:30.540 "progress": { 00:26:30.540 "blocks": 22528, 00:26:30.540 "percent": 35 00:26:30.540 } 00:26:30.540 }, 00:26:30.540 "base_bdevs_list": [ 00:26:30.540 { 00:26:30.540 "name": "spare", 00:26:30.540 "uuid": "0176d8f0-e8a1-5576-84a9-0d29b3d9a17c", 00:26:30.540 "is_configured": true, 00:26:30.540 "data_offset": 2048, 00:26:30.540 "data_size": 63488 00:26:30.540 }, 00:26:30.540 { 00:26:30.540 "name": null, 00:26:30.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.540 "is_configured": false, 00:26:30.540 "data_offset": 2048, 00:26:30.540 "data_size": 63488 00:26:30.540 }, 00:26:30.540 { 00:26:30.540 "name": "BaseBdev3", 00:26:30.540 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:30.540 "is_configured": true, 00:26:30.540 "data_offset": 2048, 00:26:30.540 "data_size": 63488 00:26:30.540 }, 00:26:30.540 { 00:26:30.540 "name": "BaseBdev4", 00:26:30.540 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:30.540 "is_configured": true, 00:26:30.540 "data_offset": 2048, 00:26:30.540 "data_size": 63488 00:26:30.540 } 00:26:30.540 ] 00:26:30.540 }' 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:30.540 [2024-07-23 15:20:25.906296] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:30.540 [2024-07-23 15:20:25.927905] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:30.540 [2024-07-23 15:20:25.927975] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:30.540 [2024-07-23 15:20:25.927996] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:30.540 [2024-07-23 15:20:25.928005] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:30.540 15:20:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.540 15:20:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.799 15:20:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:30.799 "name": "raid_bdev1", 00:26:30.799 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:30.799 "strip_size_kb": 0, 00:26:30.799 "state": "online", 00:26:30.799 "raid_level": "raid1", 00:26:30.799 "superblock": true, 00:26:30.799 "num_base_bdevs": 4, 00:26:30.799 "num_base_bdevs_discovered": 2, 00:26:30.799 "num_base_bdevs_operational": 2, 00:26:30.799 "base_bdevs_list": [ 00:26:30.799 { 00:26:30.799 "name": null, 00:26:30.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.799 "is_configured": false, 00:26:30.799 "data_offset": 2048, 00:26:30.799 "data_size": 63488 00:26:30.799 }, 00:26:30.799 { 00:26:30.799 "name": null, 00:26:30.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.799 "is_configured": false, 00:26:30.799 "data_offset": 2048, 00:26:30.799 "data_size": 63488 00:26:30.799 }, 00:26:30.799 { 00:26:30.799 "name": "BaseBdev3", 00:26:30.799 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:30.799 "is_configured": true, 00:26:30.799 "data_offset": 2048, 00:26:30.799 "data_size": 63488 00:26:30.799 }, 00:26:30.799 { 00:26:30.799 "name": "BaseBdev4", 00:26:30.799 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:30.799 "is_configured": true, 00:26:30.799 "data_offset": 2048, 00:26:30.799 "data_size": 63488 00:26:30.799 } 00:26:30.799 ] 00:26:30.799 }' 00:26:30.799 15:20:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:30.799 15:20:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:31.365 15:20:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:31.365 [2024-07-23 15:20:26.733157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:31.365 [2024-07-23 15:20:26.733422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.365 [2024-07-23 15:20:26.733468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:26:31.365 [2024-07-23 15:20:26.733481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.365 [2024-07-23 15:20:26.733988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.365 [2024-07-23 15:20:26.734013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:31.365 [2024-07-23 15:20:26.734107] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:31.365 [2024-07-23 15:20:26.734122] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:26:31.365 [2024-07-23 15:20:26.734138] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:26:31.365 [2024-07-23 15:20:26.734180] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:31.365 spare 00:26:31.365 [2024-07-23 15:20:26.738065] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000033540 00:26:31.365 [2024-07-23 15:20:26.740325] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:31.365 15:20:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:26:32.738 15:20:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:32.738 15:20:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:32.738 15:20:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:32.738 15:20:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:32.738 15:20:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:32.738 15:20:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.738 15:20:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.738 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:32.738 "name": "raid_bdev1", 00:26:32.738 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:32.738 "strip_size_kb": 0, 00:26:32.738 "state": "online", 00:26:32.738 "raid_level": "raid1", 00:26:32.738 "superblock": true, 00:26:32.738 "num_base_bdevs": 4, 00:26:32.738 "num_base_bdevs_discovered": 3, 00:26:32.738 "num_base_bdevs_operational": 3, 00:26:32.738 "process": { 00:26:32.738 "type": "rebuild", 00:26:32.738 "target": "spare", 00:26:32.738 "progress": { 00:26:32.738 "blocks": 24576, 00:26:32.738 "percent": 38 00:26:32.738 } 00:26:32.738 }, 00:26:32.738 "base_bdevs_list": [ 00:26:32.738 { 00:26:32.738 "name": "spare", 00:26:32.738 "uuid": "0176d8f0-e8a1-5576-84a9-0d29b3d9a17c", 00:26:32.738 "is_configured": true, 00:26:32.738 "data_offset": 2048, 00:26:32.738 "data_size": 63488 00:26:32.738 }, 00:26:32.738 { 00:26:32.738 "name": null, 00:26:32.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.738 "is_configured": false, 00:26:32.738 "data_offset": 2048, 00:26:32.738 "data_size": 63488 00:26:32.738 }, 00:26:32.738 { 00:26:32.738 "name": "BaseBdev3", 00:26:32.738 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:32.738 "is_configured": true, 00:26:32.738 "data_offset": 2048, 00:26:32.738 "data_size": 63488 00:26:32.738 }, 00:26:32.738 { 00:26:32.738 "name": "BaseBdev4", 00:26:32.738 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:32.738 "is_configured": true, 00:26:32.738 "data_offset": 2048, 00:26:32.738 "data_size": 63488 00:26:32.738 } 00:26:32.738 ] 00:26:32.738 }' 00:26:32.738 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:32.738 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:32.738 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:32.738 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:32.738 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:32.996 [2024-07-23 15:20:28.206617] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:32.996 [2024-07-23 15:20:28.249210] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:32.996 [2024-07-23 15:20:28.249283] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:32.996 [2024-07-23 15:20:28.249302] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:32.996 [2024-07-23 15:20:28.249313] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:32.996 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:32.996 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:32.996 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:32.996 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:32.996 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:32.996 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:32.996 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:32.996 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:32.996 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:32.996 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:32.996 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.996 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.253 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:33.253 "name": "raid_bdev1", 00:26:33.253 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:33.253 "strip_size_kb": 0, 00:26:33.253 "state": "online", 00:26:33.253 "raid_level": "raid1", 00:26:33.253 "superblock": true, 00:26:33.253 "num_base_bdevs": 4, 00:26:33.253 "num_base_bdevs_discovered": 2, 00:26:33.253 "num_base_bdevs_operational": 2, 00:26:33.253 "base_bdevs_list": [ 00:26:33.253 { 00:26:33.253 "name": null, 00:26:33.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.253 "is_configured": false, 00:26:33.253 "data_offset": 2048, 00:26:33.253 "data_size": 63488 00:26:33.253 }, 00:26:33.253 { 00:26:33.253 "name": null, 00:26:33.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.253 "is_configured": false, 00:26:33.253 "data_offset": 2048, 00:26:33.253 "data_size": 63488 00:26:33.253 }, 00:26:33.253 { 00:26:33.253 "name": "BaseBdev3", 00:26:33.253 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:33.253 "is_configured": true, 00:26:33.253 "data_offset": 2048, 00:26:33.253 "data_size": 63488 00:26:33.253 }, 00:26:33.253 { 00:26:33.253 "name": "BaseBdev4", 00:26:33.253 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:33.253 "is_configured": true, 00:26:33.253 "data_offset": 2048, 00:26:33.253 
"data_size": 63488 00:26:33.253 } 00:26:33.253 ] 00:26:33.253 }' 00:26:33.253 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:33.253 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:33.511 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:33.511 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:33.511 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:33.511 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:33.511 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:33.511 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.511 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.769 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:33.769 "name": "raid_bdev1", 00:26:33.769 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:33.769 "strip_size_kb": 0, 00:26:33.769 "state": "online", 00:26:33.769 "raid_level": "raid1", 00:26:33.769 "superblock": true, 00:26:33.769 "num_base_bdevs": 4, 00:26:33.769 "num_base_bdevs_discovered": 2, 00:26:33.769 "num_base_bdevs_operational": 2, 00:26:33.769 "base_bdevs_list": [ 00:26:33.769 { 00:26:33.769 "name": null, 00:26:33.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.769 "is_configured": false, 00:26:33.769 "data_offset": 2048, 00:26:33.769 "data_size": 63488 00:26:33.769 }, 00:26:33.769 { 00:26:33.769 "name": null, 00:26:33.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.769 "is_configured": false, 00:26:33.769 "data_offset": 2048, 00:26:33.769 "data_size": 63488 00:26:33.769 }, 00:26:33.769 { 00:26:33.769 "name": "BaseBdev3", 00:26:33.769 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:33.769 "is_configured": true, 00:26:33.769 "data_offset": 2048, 00:26:33.769 "data_size": 63488 00:26:33.769 }, 00:26:33.769 { 00:26:33.769 "name": "BaseBdev4", 00:26:33.769 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:33.769 "is_configured": true, 00:26:33.769 "data_offset": 2048, 00:26:33.769 "data_size": 63488 00:26:33.769 } 00:26:33.769 ] 00:26:33.769 }' 00:26:33.769 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:33.769 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:33.769 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:33.769 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:33.769 15:20:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:34.026 15:20:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:34.282 [2024-07-23 15:20:29.458359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 
00:26:34.282 [2024-07-23 15:20:29.458447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.282 [2024-07-23 15:20:29.458476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:26:34.282 [2024-07-23 15:20:29.458492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.282 [2024-07-23 15:20:29.458994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.282 [2024-07-23 15:20:29.459025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:34.282 [2024-07-23 15:20:29.459100] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:34.282 [2024-07-23 15:20:29.459123] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:26:34.283 [2024-07-23 15:20:29.459133] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:34.283 BaseBdev1 00:26:34.283 15:20:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:26:35.213 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:35.213 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:35.213 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:35.213 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:35.213 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:35.213 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:35.213 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:35.213 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:35.213 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:35.213 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:35.213 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.213 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.469 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:35.469 "name": "raid_bdev1", 00:26:35.469 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:35.469 "strip_size_kb": 0, 00:26:35.469 "state": "online", 00:26:35.469 "raid_level": "raid1", 00:26:35.469 "superblock": true, 00:26:35.469 "num_base_bdevs": 4, 00:26:35.469 "num_base_bdevs_discovered": 2, 00:26:35.469 "num_base_bdevs_operational": 2, 00:26:35.469 "base_bdevs_list": [ 00:26:35.469 { 00:26:35.469 "name": null, 00:26:35.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.469 "is_configured": false, 00:26:35.469 "data_offset": 2048, 00:26:35.469 "data_size": 63488 00:26:35.469 }, 00:26:35.469 { 00:26:35.469 "name": null, 00:26:35.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.469 "is_configured": false, 00:26:35.469 "data_offset": 2048, 00:26:35.469 "data_size": 63488 
00:26:35.469 }, 00:26:35.469 { 00:26:35.469 "name": "BaseBdev3", 00:26:35.469 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:35.469 "is_configured": true, 00:26:35.469 "data_offset": 2048, 00:26:35.469 "data_size": 63488 00:26:35.469 }, 00:26:35.469 { 00:26:35.469 "name": "BaseBdev4", 00:26:35.469 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:35.469 "is_configured": true, 00:26:35.469 "data_offset": 2048, 00:26:35.469 "data_size": 63488 00:26:35.469 } 00:26:35.469 ] 00:26:35.469 }' 00:26:35.469 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:35.469 15:20:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:35.725 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:35.725 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:35.725 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:35.725 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:35.725 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:35.725 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.725 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.982 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:35.982 "name": "raid_bdev1", 00:26:35.982 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:35.982 "strip_size_kb": 0, 00:26:35.982 "state": "online", 00:26:35.982 "raid_level": "raid1", 00:26:35.982 "superblock": true, 00:26:35.982 "num_base_bdevs": 4, 00:26:35.982 "num_base_bdevs_discovered": 2, 00:26:35.982 "num_base_bdevs_operational": 2, 00:26:35.982 "base_bdevs_list": [ 00:26:35.982 { 00:26:35.982 "name": null, 00:26:35.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.982 "is_configured": false, 00:26:35.982 "data_offset": 2048, 00:26:35.982 "data_size": 63488 00:26:35.982 }, 00:26:35.983 { 00:26:35.983 "name": null, 00:26:35.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.983 "is_configured": false, 00:26:35.983 "data_offset": 2048, 00:26:35.983 "data_size": 63488 00:26:35.983 }, 00:26:35.983 { 00:26:35.983 "name": "BaseBdev3", 00:26:35.983 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:35.983 "is_configured": true, 00:26:35.983 "data_offset": 2048, 00:26:35.983 "data_size": 63488 00:26:35.983 }, 00:26:35.983 { 00:26:35.983 "name": "BaseBdev4", 00:26:35.983 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:35.983 "is_configured": true, 00:26:35.983 "data_offset": 2048, 00:26:35.983 "data_size": 63488 00:26:35.983 } 00:26:35.983 ] 00:26:35.983 }' 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:35.983 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:36.240 [2024-07-23 15:20:31.583101] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:36.240 [2024-07-23 15:20:31.583486] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:26:36.240 [2024-07-23 15:20:31.583638] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:36.240 request: 00:26:36.240 { 00:26:36.240 "base_bdev": "BaseBdev1", 00:26:36.240 "raid_bdev": "raid_bdev1", 00:26:36.240 "method": "bdev_raid_add_base_bdev", 00:26:36.240 "req_id": 1 00:26:36.240 } 00:26:36.240 Got JSON-RPC error response 00:26:36.240 response: 00:26:36.240 { 00:26:36.240 "code": -22, 00:26:36.240 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:36.240 } 00:26:36.240 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:26:36.240 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:36.240 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:36.240 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:36.240 15:20:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:26:37.203 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:37.203 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:37.203 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:37.203 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:37.203 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:37.203 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:37.203 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:37.203 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:37.203 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:37.203 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:37.203 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.203 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.461 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:37.461 "name": "raid_bdev1", 00:26:37.461 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:37.461 "strip_size_kb": 0, 00:26:37.461 "state": "online", 00:26:37.461 "raid_level": "raid1", 00:26:37.461 "superblock": true, 00:26:37.461 "num_base_bdevs": 4, 00:26:37.461 "num_base_bdevs_discovered": 2, 00:26:37.461 "num_base_bdevs_operational": 2, 00:26:37.461 "base_bdevs_list": [ 00:26:37.461 { 00:26:37.461 "name": null, 00:26:37.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.461 "is_configured": false, 00:26:37.461 "data_offset": 2048, 00:26:37.461 "data_size": 63488 00:26:37.461 }, 00:26:37.461 { 00:26:37.461 "name": null, 00:26:37.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.461 "is_configured": false, 00:26:37.461 "data_offset": 2048, 00:26:37.461 "data_size": 63488 00:26:37.461 }, 00:26:37.461 { 00:26:37.461 "name": "BaseBdev3", 00:26:37.461 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:37.461 "is_configured": true, 00:26:37.461 "data_offset": 2048, 00:26:37.461 "data_size": 63488 00:26:37.461 }, 00:26:37.461 { 00:26:37.462 "name": "BaseBdev4", 00:26:37.462 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:37.462 "is_configured": true, 00:26:37.462 "data_offset": 2048, 00:26:37.462 "data_size": 63488 00:26:37.462 } 00:26:37.462 ] 00:26:37.462 }' 00:26:37.462 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:37.462 15:20:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:38.028 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:38.028 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:38.028 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:38.028 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:38.028 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:38.028 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.028 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.028 15:20:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:38.028 "name": "raid_bdev1", 00:26:38.028 "uuid": "6a7505a9-16f8-4646-b9d2-8dfd52ae9a91", 00:26:38.028 "strip_size_kb": 0, 00:26:38.028 "state": "online", 00:26:38.028 "raid_level": "raid1", 00:26:38.028 "superblock": true, 00:26:38.028 "num_base_bdevs": 4, 00:26:38.028 "num_base_bdevs_discovered": 2, 00:26:38.028 "num_base_bdevs_operational": 2, 00:26:38.028 "base_bdevs_list": [ 00:26:38.028 { 00:26:38.028 "name": null, 00:26:38.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.028 "is_configured": false, 00:26:38.028 "data_offset": 2048, 00:26:38.028 "data_size": 63488 00:26:38.028 }, 00:26:38.028 { 00:26:38.028 "name": null, 00:26:38.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.028 "is_configured": false, 00:26:38.028 "data_offset": 2048, 00:26:38.028 "data_size": 63488 00:26:38.028 }, 00:26:38.028 { 00:26:38.028 "name": "BaseBdev3", 00:26:38.028 "uuid": "b5a2f9cb-145f-554b-ad7a-5e0d63beaec7", 00:26:38.028 "is_configured": true, 00:26:38.028 "data_offset": 2048, 00:26:38.028 "data_size": 63488 00:26:38.028 }, 00:26:38.028 { 00:26:38.028 "name": "BaseBdev4", 00:26:38.028 "uuid": "757a3086-380a-5412-b15e-90d745686d1d", 00:26:38.028 "is_configured": true, 00:26:38.028 "data_offset": 2048, 00:26:38.028 "data_size": 63488 00:26:38.028 } 00:26:38.028 ] 00:26:38.028 }' 00:26:38.028 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:38.028 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:38.028 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:38.029 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:38.029 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 111676 00:26:38.029 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 111676 ']' 00:26:38.029 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 111676 00:26:38.029 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:26:38.029 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:38.029 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111676 00:26:38.029 killing process with pid 111676 00:26:38.029 Received shutdown signal, test time was about 24.238521 seconds 00:26:38.029 00:26:38.029 Latency(us) 00:26:38.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.029 =================================================================================================================== 00:26:38.029 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.029 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:38.029 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:38.029 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111676' 00:26:38.029 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 111676 00:26:38.029 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 111676 00:26:38.029 
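Stripped of the shell tracing, the failure path above is a single JSON-RPC call that must be rejected: BaseBdev1 carries a stale superblock (sequence number 1 versus 6 on the existing array), so bdev_raid_add_base_bdev comes back with error -22, and the test then re-checks that raid_bdev1 still reports only two of its four base bdevs. A minimal sketch of that check, reusing the socket, script path, and bdev names from this run:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
# Adding the stale base bdev must fail with JSON-RPC error -22 (Invalid argument).
if "$RPC" -s "$SOCK" bdev_raid_add_base_bdev raid_bdev1 BaseBdev1; then
    echo "unexpected success: stale BaseBdev1 was accepted" >&2
    exit 1
fi
# The array should still be online with 2 of 4 base bdevs configured.
"$RPC" -s "$SOCK" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'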
[2024-07-23 15:20:33.418471] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:38.029 [2024-07-23 15:20:33.418617] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:38.029 [2024-07-23 15:20:33.418701] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:38.029 [2024-07-23 15:20:33.418729] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ae80 name raid_bdev1, state offline 00:26:38.286 [2024-07-23 15:20:33.465165] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:38.286 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:26:38.286 00:26:38.286 real 0m28.827s 00:26:38.286 user 0m42.263s 00:26:38.286 sys 0m4.687s 00:26:38.286 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:38.286 ************************************ 00:26:38.286 END TEST raid_rebuild_test_sb_io 00:26:38.286 ************************************ 00:26:38.286 15:20:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:38.544 15:20:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:38.544 15:20:33 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' y == y ']' 00:26:38.544 15:20:33 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:26:38.544 15:20:33 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:26:38.544 15:20:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:38.544 15:20:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.544 15:20:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:38.544 ************************************ 00:26:38.544 START TEST raid5f_state_function_test 00:26:38.544 ************************************ 00:26:38.544 15:20:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 3 false 00:26:38.544 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:26:38.544 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:26:38.544 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:26:38.544 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=112518 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:38.545 Process raid pid: 112518 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 112518' 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 112518 /var/tmp/spdk-raid.sock 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 112518 ']' 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:38.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:38.545 15:20:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.545 [2024-07-23 15:20:33.853249] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
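The raid5f_state_function_test run that starts here follows the usual harness pattern: launch the bdev_svc test application on a private RPC socket with bdev_raid debug logging enabled, then block until the socket is accepting RPCs before configuring anything. Condensed from the trace, and assuming the harness's autotest_common.sh helpers are sourced, the startup is roughly:

# Start the SPDK bdev_svc app for this test on its own RPC socket;
# -L bdev_raid turns on the *DEBUG* log lines that appear throughout this log.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
echo "Process raid pid: $raid_pid"
# The waitforlisten helper seen above polls until the app answers on the socket.
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock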
00:26:38.545 [2024-07-23 15:20:33.853430] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.803 [2024-07-23 15:20:34.007725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.803 [2024-07-23 15:20:34.056264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.803 [2024-07-23 15:20:34.100706] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:39.370 15:20:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:39.370 15:20:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:26:39.370 15:20:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:39.628 [2024-07-23 15:20:35.022319] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:39.628 [2024-07-23 15:20:35.022383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:39.628 [2024-07-23 15:20:35.022394] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:39.628 [2024-07-23 15:20:35.022408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:39.628 [2024-07-23 15:20:35.022419] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:39.628 [2024-07-23 15:20:35.022432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:39.628 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:39.628 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:39.628 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:39.628 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:39.628 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:39.628 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:39.628 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:39.628 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:39.628 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:39.628 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:39.628 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.628 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:39.887 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:39.887 "name": "Existed_Raid", 00:26:39.887 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:39.887 "strip_size_kb": 64, 00:26:39.887 "state": "configuring", 00:26:39.887 "raid_level": "raid5f", 00:26:39.887 "superblock": false, 00:26:39.887 "num_base_bdevs": 3, 00:26:39.887 "num_base_bdevs_discovered": 0, 00:26:39.887 "num_base_bdevs_operational": 3, 00:26:39.887 "base_bdevs_list": [ 00:26:39.887 { 00:26:39.887 "name": "BaseBdev1", 00:26:39.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:39.887 "is_configured": false, 00:26:39.887 "data_offset": 0, 00:26:39.887 "data_size": 0 00:26:39.887 }, 00:26:39.887 { 00:26:39.887 "name": "BaseBdev2", 00:26:39.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:39.887 "is_configured": false, 00:26:39.887 "data_offset": 0, 00:26:39.887 "data_size": 0 00:26:39.887 }, 00:26:39.887 { 00:26:39.887 "name": "BaseBdev3", 00:26:39.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:39.887 "is_configured": false, 00:26:39.887 "data_offset": 0, 00:26:39.887 "data_size": 0 00:26:39.887 } 00:26:39.887 ] 00:26:39.887 }' 00:26:39.887 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:39.887 15:20:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.146 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:40.404 [2024-07-23 15:20:35.630336] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:40.404 [2024-07-23 15:20:35.630396] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:26:40.404 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:40.663 [2024-07-23 15:20:35.866426] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:40.663 [2024-07-23 15:20:35.866507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:40.663 [2024-07-23 15:20:35.866519] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:40.663 [2024-07-23 15:20:35.866534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:40.663 [2024-07-23 15:20:35.866542] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:40.663 [2024-07-23 15:20:35.866556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:40.663 15:20:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:40.663 [2024-07-23 15:20:36.048310] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:40.663 BaseBdev1 00:26:40.663 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:40.663 15:20:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:40.663 15:20:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:40.663 15:20:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:40.663 
15:20:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:40.663 15:20:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:40.663 15:20:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:40.921 15:20:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:41.180 [ 00:26:41.180 { 00:26:41.180 "name": "BaseBdev1", 00:26:41.180 "aliases": [ 00:26:41.180 "164fe747-12a9-42a2-9f3e-168fc1a552da" 00:26:41.180 ], 00:26:41.180 "product_name": "Malloc disk", 00:26:41.180 "block_size": 512, 00:26:41.180 "num_blocks": 65536, 00:26:41.180 "uuid": "164fe747-12a9-42a2-9f3e-168fc1a552da", 00:26:41.180 "assigned_rate_limits": { 00:26:41.180 "rw_ios_per_sec": 0, 00:26:41.180 "rw_mbytes_per_sec": 0, 00:26:41.180 "r_mbytes_per_sec": 0, 00:26:41.180 "w_mbytes_per_sec": 0 00:26:41.180 }, 00:26:41.180 "claimed": true, 00:26:41.180 "claim_type": "exclusive_write", 00:26:41.180 "zoned": false, 00:26:41.180 "supported_io_types": { 00:26:41.180 "read": true, 00:26:41.180 "write": true, 00:26:41.180 "unmap": true, 00:26:41.180 "flush": true, 00:26:41.180 "reset": true, 00:26:41.180 "nvme_admin": false, 00:26:41.180 "nvme_io": false, 00:26:41.180 "nvme_io_md": false, 00:26:41.180 "write_zeroes": true, 00:26:41.180 "zcopy": true, 00:26:41.180 "get_zone_info": false, 00:26:41.180 "zone_management": false, 00:26:41.180 "zone_append": false, 00:26:41.180 "compare": false, 00:26:41.180 "compare_and_write": false, 00:26:41.180 "abort": true, 00:26:41.180 "seek_hole": false, 00:26:41.180 "seek_data": false, 00:26:41.180 "copy": true, 00:26:41.180 "nvme_iov_md": false 00:26:41.180 }, 00:26:41.180 "memory_domains": [ 00:26:41.180 { 00:26:41.180 "dma_device_id": "system", 00:26:41.180 "dma_device_type": 1 00:26:41.180 }, 00:26:41.180 { 00:26:41.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.180 "dma_device_type": 2 00:26:41.180 } 00:26:41.180 ], 00:26:41.180 "driver_specific": {} 00:26:41.180 } 00:26:41.180 ] 00:26:41.180 15:20:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:41.180 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:41.180 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:41.180 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:41.180 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:41.180 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:41.180 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:41.180 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:41.180 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:41.180 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:41.180 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
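At this point the test has walked through the early configuring-state checks: declare the raid5f array over three base bdevs that mostly do not exist yet, create the first malloc base bdev, and confirm through bdev_raid_get_bdevs that Existed_Raid stays in the "configuring" state with one of three bases discovered. A condensed sketch of that sequence, using the same names, sizes, and socket as this run:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Declare the raid5f volume over three named base bdevs (64 KiB strip, no superblock);
# the bases may be created later, so the array starts out "configuring".
$RPC bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# Create and wait for the first base bdev: a 32 MiB malloc disk with 512-byte blocks.
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_get_bdevs -b BaseBdev1 -t 2000
# The array must still report state "configuring" with 1 of 3 base bdevs discovered.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'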
00:26:41.180 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:41.180 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.441 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:41.441 "name": "Existed_Raid", 00:26:41.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.441 "strip_size_kb": 64, 00:26:41.441 "state": "configuring", 00:26:41.441 "raid_level": "raid5f", 00:26:41.441 "superblock": false, 00:26:41.441 "num_base_bdevs": 3, 00:26:41.441 "num_base_bdevs_discovered": 1, 00:26:41.441 "num_base_bdevs_operational": 3, 00:26:41.441 "base_bdevs_list": [ 00:26:41.441 { 00:26:41.441 "name": "BaseBdev1", 00:26:41.441 "uuid": "164fe747-12a9-42a2-9f3e-168fc1a552da", 00:26:41.441 "is_configured": true, 00:26:41.441 "data_offset": 0, 00:26:41.441 "data_size": 65536 00:26:41.441 }, 00:26:41.441 { 00:26:41.441 "name": "BaseBdev2", 00:26:41.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.441 "is_configured": false, 00:26:41.441 "data_offset": 0, 00:26:41.441 "data_size": 0 00:26:41.441 }, 00:26:41.441 { 00:26:41.441 "name": "BaseBdev3", 00:26:41.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.441 "is_configured": false, 00:26:41.441 "data_offset": 0, 00:26:41.441 "data_size": 0 00:26:41.441 } 00:26:41.441 ] 00:26:41.441 }' 00:26:41.441 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:41.441 15:20:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.699 15:20:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:41.699 [2024-07-23 15:20:37.088606] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:41.699 [2024-07-23 15:20:37.088682] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:26:41.699 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:41.957 [2024-07-23 15:20:37.268720] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:41.957 [2024-07-23 15:20:37.270993] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:41.957 [2024-07-23 15:20:37.271043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:41.957 [2024-07-23 15:20:37.271054] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:41.957 [2024-07-23 15:20:37.271068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:41.957 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:41.957 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:41.957 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:41.957 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:26:41.957 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:41.957 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:41.957 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:41.957 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:41.957 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:41.958 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:41.958 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:41.958 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:41.958 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.958 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:42.216 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:42.216 "name": "Existed_Raid", 00:26:42.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.216 "strip_size_kb": 64, 00:26:42.216 "state": "configuring", 00:26:42.216 "raid_level": "raid5f", 00:26:42.216 "superblock": false, 00:26:42.216 "num_base_bdevs": 3, 00:26:42.216 "num_base_bdevs_discovered": 1, 00:26:42.216 "num_base_bdevs_operational": 3, 00:26:42.216 "base_bdevs_list": [ 00:26:42.216 { 00:26:42.216 "name": "BaseBdev1", 00:26:42.216 "uuid": "164fe747-12a9-42a2-9f3e-168fc1a552da", 00:26:42.216 "is_configured": true, 00:26:42.216 "data_offset": 0, 00:26:42.216 "data_size": 65536 00:26:42.216 }, 00:26:42.216 { 00:26:42.216 "name": "BaseBdev2", 00:26:42.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.216 "is_configured": false, 00:26:42.216 "data_offset": 0, 00:26:42.216 "data_size": 0 00:26:42.216 }, 00:26:42.216 { 00:26:42.216 "name": "BaseBdev3", 00:26:42.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.216 "is_configured": false, 00:26:42.216 "data_offset": 0, 00:26:42.216 "data_size": 0 00:26:42.216 } 00:26:42.216 ] 00:26:42.216 }' 00:26:42.216 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:42.216 15:20:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.475 15:20:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:42.734 [2024-07-23 15:20:38.019377] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:42.734 BaseBdev2 00:26:42.734 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:42.734 15:20:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:42.734 15:20:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:42.734 15:20:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:42.734 15:20:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:42.734 15:20:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:42.734 15:20:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:42.992 15:20:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:43.251 [ 00:26:43.251 { 00:26:43.251 "name": "BaseBdev2", 00:26:43.251 "aliases": [ 00:26:43.251 "d0d9eff0-d686-4aa2-b16c-c329bf1541b2" 00:26:43.251 ], 00:26:43.251 "product_name": "Malloc disk", 00:26:43.251 "block_size": 512, 00:26:43.251 "num_blocks": 65536, 00:26:43.251 "uuid": "d0d9eff0-d686-4aa2-b16c-c329bf1541b2", 00:26:43.251 "assigned_rate_limits": { 00:26:43.251 "rw_ios_per_sec": 0, 00:26:43.251 "rw_mbytes_per_sec": 0, 00:26:43.251 "r_mbytes_per_sec": 0, 00:26:43.251 "w_mbytes_per_sec": 0 00:26:43.251 }, 00:26:43.251 "claimed": true, 00:26:43.251 "claim_type": "exclusive_write", 00:26:43.251 "zoned": false, 00:26:43.251 "supported_io_types": { 00:26:43.251 "read": true, 00:26:43.251 "write": true, 00:26:43.251 "unmap": true, 00:26:43.251 "flush": true, 00:26:43.251 "reset": true, 00:26:43.251 "nvme_admin": false, 00:26:43.251 "nvme_io": false, 00:26:43.251 "nvme_io_md": false, 00:26:43.251 "write_zeroes": true, 00:26:43.251 "zcopy": true, 00:26:43.251 "get_zone_info": false, 00:26:43.251 "zone_management": false, 00:26:43.251 "zone_append": false, 00:26:43.251 "compare": false, 00:26:43.251 "compare_and_write": false, 00:26:43.251 "abort": true, 00:26:43.251 "seek_hole": false, 00:26:43.251 "seek_data": false, 00:26:43.251 "copy": true, 00:26:43.251 "nvme_iov_md": false 00:26:43.251 }, 00:26:43.251 "memory_domains": [ 00:26:43.251 { 00:26:43.251 "dma_device_id": "system", 00:26:43.251 "dma_device_type": 1 00:26:43.251 }, 00:26:43.251 { 00:26:43.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:43.251 "dma_device_type": 2 00:26:43.251 } 00:26:43.251 ], 00:26:43.251 "driver_specific": {} 00:26:43.251 } 00:26:43.251 ] 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:43.251 "name": "Existed_Raid", 00:26:43.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.251 "strip_size_kb": 64, 00:26:43.251 "state": "configuring", 00:26:43.251 "raid_level": "raid5f", 00:26:43.251 "superblock": false, 00:26:43.251 "num_base_bdevs": 3, 00:26:43.251 "num_base_bdevs_discovered": 2, 00:26:43.251 "num_base_bdevs_operational": 3, 00:26:43.251 "base_bdevs_list": [ 00:26:43.251 { 00:26:43.251 "name": "BaseBdev1", 00:26:43.251 "uuid": "164fe747-12a9-42a2-9f3e-168fc1a552da", 00:26:43.251 "is_configured": true, 00:26:43.251 "data_offset": 0, 00:26:43.251 "data_size": 65536 00:26:43.251 }, 00:26:43.251 { 00:26:43.251 "name": "BaseBdev2", 00:26:43.251 "uuid": "d0d9eff0-d686-4aa2-b16c-c329bf1541b2", 00:26:43.251 "is_configured": true, 00:26:43.251 "data_offset": 0, 00:26:43.251 "data_size": 65536 00:26:43.251 }, 00:26:43.251 { 00:26:43.251 "name": "BaseBdev3", 00:26:43.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.251 "is_configured": false, 00:26:43.251 "data_offset": 0, 00:26:43.251 "data_size": 0 00:26:43.251 } 00:26:43.251 ] 00:26:43.251 }' 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:43.251 15:20:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.827 15:20:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:43.827 [2024-07-23 15:20:39.234988] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:43.827 [2024-07-23 15:20:39.235075] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:26:43.827 [2024-07-23 15:20:39.235090] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:26:43.827 [2024-07-23 15:20:39.235188] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:26:43.827 [2024-07-23 15:20:39.235883] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:26:43.827 [2024-07-23 15:20:39.235905] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:26:43.827 [2024-07-23 15:20:39.236115] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:43.827 BaseBdev3 00:26:43.827 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:26:43.827 15:20:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:43.827 15:20:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:43.827 15:20:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:43.827 15:20:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:43.827 15:20:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:43.827 15:20:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:44.392 [ 00:26:44.392 { 00:26:44.392 "name": "BaseBdev3", 00:26:44.392 "aliases": [ 00:26:44.392 "60827d8b-a087-4546-896d-d2bcaf4f4d3a" 00:26:44.392 ], 00:26:44.392 "product_name": "Malloc disk", 00:26:44.392 "block_size": 512, 00:26:44.392 "num_blocks": 65536, 00:26:44.392 "uuid": "60827d8b-a087-4546-896d-d2bcaf4f4d3a", 00:26:44.392 "assigned_rate_limits": { 00:26:44.392 "rw_ios_per_sec": 0, 00:26:44.392 "rw_mbytes_per_sec": 0, 00:26:44.392 "r_mbytes_per_sec": 0, 00:26:44.392 "w_mbytes_per_sec": 0 00:26:44.392 }, 00:26:44.392 "claimed": true, 00:26:44.392 "claim_type": "exclusive_write", 00:26:44.392 "zoned": false, 00:26:44.392 "supported_io_types": { 00:26:44.392 "read": true, 00:26:44.392 "write": true, 00:26:44.392 "unmap": true, 00:26:44.392 "flush": true, 00:26:44.392 "reset": true, 00:26:44.392 "nvme_admin": false, 00:26:44.392 "nvme_io": false, 00:26:44.392 "nvme_io_md": false, 00:26:44.392 "write_zeroes": true, 00:26:44.392 "zcopy": true, 00:26:44.392 "get_zone_info": false, 00:26:44.392 "zone_management": false, 00:26:44.392 "zone_append": false, 00:26:44.392 "compare": false, 00:26:44.392 "compare_and_write": false, 00:26:44.392 "abort": true, 00:26:44.392 "seek_hole": false, 00:26:44.392 "seek_data": false, 00:26:44.392 "copy": true, 00:26:44.392 "nvme_iov_md": false 00:26:44.392 }, 00:26:44.392 "memory_domains": [ 00:26:44.392 { 00:26:44.392 "dma_device_id": "system", 00:26:44.392 "dma_device_type": 1 00:26:44.392 }, 00:26:44.392 { 00:26:44.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:44.392 "dma_device_type": 2 00:26:44.392 } 00:26:44.392 ], 00:26:44.392 "driver_specific": {} 00:26:44.392 } 00:26:44.392 ] 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:44.392 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.650 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:44.650 "name": "Existed_Raid", 00:26:44.650 "uuid": "48e05571-170f-492e-8819-fc7264d06704", 00:26:44.650 "strip_size_kb": 64, 00:26:44.650 "state": "online", 00:26:44.650 "raid_level": "raid5f", 00:26:44.650 "superblock": false, 00:26:44.650 "num_base_bdevs": 3, 00:26:44.650 "num_base_bdevs_discovered": 3, 00:26:44.650 "num_base_bdevs_operational": 3, 00:26:44.650 "base_bdevs_list": [ 00:26:44.650 { 00:26:44.650 "name": "BaseBdev1", 00:26:44.650 "uuid": "164fe747-12a9-42a2-9f3e-168fc1a552da", 00:26:44.650 "is_configured": true, 00:26:44.650 "data_offset": 0, 00:26:44.650 "data_size": 65536 00:26:44.650 }, 00:26:44.650 { 00:26:44.650 "name": "BaseBdev2", 00:26:44.650 "uuid": "d0d9eff0-d686-4aa2-b16c-c329bf1541b2", 00:26:44.650 "is_configured": true, 00:26:44.650 "data_offset": 0, 00:26:44.650 "data_size": 65536 00:26:44.650 }, 00:26:44.650 { 00:26:44.650 "name": "BaseBdev3", 00:26:44.650 "uuid": "60827d8b-a087-4546-896d-d2bcaf4f4d3a", 00:26:44.650 "is_configured": true, 00:26:44.650 "data_offset": 0, 00:26:44.650 "data_size": 65536 00:26:44.650 } 00:26:44.650 ] 00:26:44.650 }' 00:26:44.650 15:20:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:44.650 15:20:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.908 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:44.908 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:44.908 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:44.908 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:44.908 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:44.908 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:44.908 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:44.908 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:45.165 [2024-07-23 15:20:40.475570] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:45.165 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:45.165 "name": "Existed_Raid", 00:26:45.165 "aliases": [ 00:26:45.165 "48e05571-170f-492e-8819-fc7264d06704" 00:26:45.165 ], 00:26:45.165 "product_name": "Raid Volume", 00:26:45.165 "block_size": 512, 00:26:45.165 "num_blocks": 131072, 00:26:45.165 "uuid": "48e05571-170f-492e-8819-fc7264d06704", 00:26:45.165 "assigned_rate_limits": { 00:26:45.165 "rw_ios_per_sec": 0, 00:26:45.165 "rw_mbytes_per_sec": 0, 00:26:45.165 "r_mbytes_per_sec": 0, 00:26:45.165 
"w_mbytes_per_sec": 0 00:26:45.165 }, 00:26:45.165 "claimed": false, 00:26:45.165 "zoned": false, 00:26:45.165 "supported_io_types": { 00:26:45.165 "read": true, 00:26:45.165 "write": true, 00:26:45.165 "unmap": false, 00:26:45.165 "flush": false, 00:26:45.165 "reset": true, 00:26:45.165 "nvme_admin": false, 00:26:45.165 "nvme_io": false, 00:26:45.165 "nvme_io_md": false, 00:26:45.165 "write_zeroes": true, 00:26:45.165 "zcopy": false, 00:26:45.165 "get_zone_info": false, 00:26:45.165 "zone_management": false, 00:26:45.165 "zone_append": false, 00:26:45.165 "compare": false, 00:26:45.165 "compare_and_write": false, 00:26:45.165 "abort": false, 00:26:45.165 "seek_hole": false, 00:26:45.165 "seek_data": false, 00:26:45.165 "copy": false, 00:26:45.165 "nvme_iov_md": false 00:26:45.165 }, 00:26:45.165 "driver_specific": { 00:26:45.165 "raid": { 00:26:45.165 "uuid": "48e05571-170f-492e-8819-fc7264d06704", 00:26:45.165 "strip_size_kb": 64, 00:26:45.165 "state": "online", 00:26:45.165 "raid_level": "raid5f", 00:26:45.165 "superblock": false, 00:26:45.166 "num_base_bdevs": 3, 00:26:45.166 "num_base_bdevs_discovered": 3, 00:26:45.166 "num_base_bdevs_operational": 3, 00:26:45.166 "base_bdevs_list": [ 00:26:45.166 { 00:26:45.166 "name": "BaseBdev1", 00:26:45.166 "uuid": "164fe747-12a9-42a2-9f3e-168fc1a552da", 00:26:45.166 "is_configured": true, 00:26:45.166 "data_offset": 0, 00:26:45.166 "data_size": 65536 00:26:45.166 }, 00:26:45.166 { 00:26:45.166 "name": "BaseBdev2", 00:26:45.166 "uuid": "d0d9eff0-d686-4aa2-b16c-c329bf1541b2", 00:26:45.166 "is_configured": true, 00:26:45.166 "data_offset": 0, 00:26:45.166 "data_size": 65536 00:26:45.166 }, 00:26:45.166 { 00:26:45.166 "name": "BaseBdev3", 00:26:45.166 "uuid": "60827d8b-a087-4546-896d-d2bcaf4f4d3a", 00:26:45.166 "is_configured": true, 00:26:45.166 "data_offset": 0, 00:26:45.166 "data_size": 65536 00:26:45.166 } 00:26:45.166 ] 00:26:45.166 } 00:26:45.166 } 00:26:45.166 }' 00:26:45.166 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:45.166 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:45.166 BaseBdev2 00:26:45.166 BaseBdev3' 00:26:45.166 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:45.166 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:45.166 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:45.423 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:45.423 "name": "BaseBdev1", 00:26:45.423 "aliases": [ 00:26:45.424 "164fe747-12a9-42a2-9f3e-168fc1a552da" 00:26:45.424 ], 00:26:45.424 "product_name": "Malloc disk", 00:26:45.424 "block_size": 512, 00:26:45.424 "num_blocks": 65536, 00:26:45.424 "uuid": "164fe747-12a9-42a2-9f3e-168fc1a552da", 00:26:45.424 "assigned_rate_limits": { 00:26:45.424 "rw_ios_per_sec": 0, 00:26:45.424 "rw_mbytes_per_sec": 0, 00:26:45.424 "r_mbytes_per_sec": 0, 00:26:45.424 "w_mbytes_per_sec": 0 00:26:45.424 }, 00:26:45.424 "claimed": true, 00:26:45.424 "claim_type": "exclusive_write", 00:26:45.424 "zoned": false, 00:26:45.424 "supported_io_types": { 00:26:45.424 "read": true, 00:26:45.424 "write": true, 00:26:45.424 "unmap": true, 00:26:45.424 "flush": true, 00:26:45.424 
"reset": true, 00:26:45.424 "nvme_admin": false, 00:26:45.424 "nvme_io": false, 00:26:45.424 "nvme_io_md": false, 00:26:45.424 "write_zeroes": true, 00:26:45.424 "zcopy": true, 00:26:45.424 "get_zone_info": false, 00:26:45.424 "zone_management": false, 00:26:45.424 "zone_append": false, 00:26:45.424 "compare": false, 00:26:45.424 "compare_and_write": false, 00:26:45.424 "abort": true, 00:26:45.424 "seek_hole": false, 00:26:45.424 "seek_data": false, 00:26:45.424 "copy": true, 00:26:45.424 "nvme_iov_md": false 00:26:45.424 }, 00:26:45.424 "memory_domains": [ 00:26:45.424 { 00:26:45.424 "dma_device_id": "system", 00:26:45.424 "dma_device_type": 1 00:26:45.424 }, 00:26:45.424 { 00:26:45.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:45.424 "dma_device_type": 2 00:26:45.424 } 00:26:45.424 ], 00:26:45.424 "driver_specific": {} 00:26:45.424 }' 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:45.424 15:20:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:45.988 "name": "BaseBdev2", 00:26:45.988 "aliases": [ 00:26:45.988 "d0d9eff0-d686-4aa2-b16c-c329bf1541b2" 00:26:45.988 ], 00:26:45.988 "product_name": "Malloc disk", 00:26:45.988 "block_size": 512, 00:26:45.988 "num_blocks": 65536, 00:26:45.988 "uuid": "d0d9eff0-d686-4aa2-b16c-c329bf1541b2", 00:26:45.988 "assigned_rate_limits": { 00:26:45.988 "rw_ios_per_sec": 0, 00:26:45.988 "rw_mbytes_per_sec": 0, 00:26:45.988 "r_mbytes_per_sec": 0, 00:26:45.988 "w_mbytes_per_sec": 0 00:26:45.988 }, 00:26:45.988 "claimed": true, 00:26:45.988 "claim_type": "exclusive_write", 00:26:45.988 "zoned": false, 00:26:45.988 "supported_io_types": { 00:26:45.988 "read": true, 00:26:45.988 "write": true, 00:26:45.988 "unmap": true, 00:26:45.988 "flush": true, 00:26:45.988 "reset": true, 00:26:45.988 "nvme_admin": false, 00:26:45.988 "nvme_io": false, 00:26:45.988 "nvme_io_md": false, 00:26:45.988 "write_zeroes": true, 00:26:45.988 
"zcopy": true, 00:26:45.988 "get_zone_info": false, 00:26:45.988 "zone_management": false, 00:26:45.988 "zone_append": false, 00:26:45.988 "compare": false, 00:26:45.988 "compare_and_write": false, 00:26:45.988 "abort": true, 00:26:45.988 "seek_hole": false, 00:26:45.988 "seek_data": false, 00:26:45.988 "copy": true, 00:26:45.988 "nvme_iov_md": false 00:26:45.988 }, 00:26:45.988 "memory_domains": [ 00:26:45.988 { 00:26:45.988 "dma_device_id": "system", 00:26:45.988 "dma_device_type": 1 00:26:45.988 }, 00:26:45.988 { 00:26:45.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:45.988 "dma_device_type": 2 00:26:45.988 } 00:26:45.988 ], 00:26:45.988 "driver_specific": {} 00:26:45.988 }' 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:45.988 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:46.246 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:46.246 "name": "BaseBdev3", 00:26:46.246 "aliases": [ 00:26:46.246 "60827d8b-a087-4546-896d-d2bcaf4f4d3a" 00:26:46.246 ], 00:26:46.246 "product_name": "Malloc disk", 00:26:46.246 "block_size": 512, 00:26:46.246 "num_blocks": 65536, 00:26:46.246 "uuid": "60827d8b-a087-4546-896d-d2bcaf4f4d3a", 00:26:46.246 "assigned_rate_limits": { 00:26:46.246 "rw_ios_per_sec": 0, 00:26:46.246 "rw_mbytes_per_sec": 0, 00:26:46.246 "r_mbytes_per_sec": 0, 00:26:46.246 "w_mbytes_per_sec": 0 00:26:46.246 }, 00:26:46.246 "claimed": true, 00:26:46.246 "claim_type": "exclusive_write", 00:26:46.246 "zoned": false, 00:26:46.246 "supported_io_types": { 00:26:46.246 "read": true, 00:26:46.246 "write": true, 00:26:46.246 "unmap": true, 00:26:46.246 "flush": true, 00:26:46.246 "reset": true, 00:26:46.246 "nvme_admin": false, 00:26:46.246 "nvme_io": false, 00:26:46.246 "nvme_io_md": false, 00:26:46.246 "write_zeroes": true, 00:26:46.246 "zcopy": true, 00:26:46.246 "get_zone_info": false, 00:26:46.246 "zone_management": false, 00:26:46.246 "zone_append": false, 00:26:46.246 "compare": false, 
00:26:46.246 "compare_and_write": false, 00:26:46.246 "abort": true, 00:26:46.246 "seek_hole": false, 00:26:46.246 "seek_data": false, 00:26:46.246 "copy": true, 00:26:46.246 "nvme_iov_md": false 00:26:46.246 }, 00:26:46.246 "memory_domains": [ 00:26:46.246 { 00:26:46.246 "dma_device_id": "system", 00:26:46.246 "dma_device_type": 1 00:26:46.246 }, 00:26:46.246 { 00:26:46.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:46.246 "dma_device_type": 2 00:26:46.246 } 00:26:46.246 ], 00:26:46.246 "driver_specific": {} 00:26:46.246 }' 00:26:46.246 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:46.246 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:46.246 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:46.246 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:46.246 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:46.246 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:46.246 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:46.246 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:46.246 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:46.246 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:46.246 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:46.246 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:46.246 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:46.503 [2024-07-23 15:20:41.795729] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:46.503 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.504 15:20:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:46.761 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:46.761 "name": "Existed_Raid", 00:26:46.761 "uuid": "48e05571-170f-492e-8819-fc7264d06704", 00:26:46.761 "strip_size_kb": 64, 00:26:46.761 "state": "online", 00:26:46.761 "raid_level": "raid5f", 00:26:46.761 "superblock": false, 00:26:46.761 "num_base_bdevs": 3, 00:26:46.761 "num_base_bdevs_discovered": 2, 00:26:46.761 "num_base_bdevs_operational": 2, 00:26:46.761 "base_bdevs_list": [ 00:26:46.761 { 00:26:46.761 "name": null, 00:26:46.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.761 "is_configured": false, 00:26:46.761 "data_offset": 0, 00:26:46.761 "data_size": 65536 00:26:46.761 }, 00:26:46.761 { 00:26:46.761 "name": "BaseBdev2", 00:26:46.761 "uuid": "d0d9eff0-d686-4aa2-b16c-c329bf1541b2", 00:26:46.761 "is_configured": true, 00:26:46.761 "data_offset": 0, 00:26:46.761 "data_size": 65536 00:26:46.761 }, 00:26:46.761 { 00:26:46.761 "name": "BaseBdev3", 00:26:46.761 "uuid": "60827d8b-a087-4546-896d-d2bcaf4f4d3a", 00:26:46.761 "is_configured": true, 00:26:46.761 "data_offset": 0, 00:26:46.761 "data_size": 65536 00:26:46.761 } 00:26:46.761 ] 00:26:46.761 }' 00:26:46.761 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:46.761 15:20:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.018 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:47.018 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:47.018 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.018 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:47.276 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:47.276 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:47.276 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:47.276 [2024-07-23 15:20:42.692387] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:47.276 [2024-07-23 15:20:42.692505] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:47.276 [2024-07-23 15:20:42.704736] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:47.533 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:47.533 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:47.533 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # 
jq -r '.[0]["name"]' 00:26:47.533 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.791 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:47.791 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:47.791 15:20:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:47.791 [2024-07-23 15:20:43.140953] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:47.791 [2024-07-23 15:20:43.141025] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:26:47.791 15:20:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:47.791 15:20:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:47.791 15:20:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.791 15:20:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:48.048 15:20:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:48.049 15:20:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:48.049 15:20:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:26:48.049 15:20:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:26:48.049 15:20:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:48.049 15:20:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:48.306 BaseBdev2 00:26:48.306 15:20:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:26:48.306 15:20:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:48.306 15:20:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:48.306 15:20:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:48.306 15:20:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:48.306 15:20:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:48.306 15:20:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:48.563 15:20:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:48.821 [ 00:26:48.821 { 00:26:48.821 "name": "BaseBdev2", 00:26:48.821 "aliases": [ 00:26:48.821 "531149a0-5671-46b9-a9a2-8b9c93825fd5" 00:26:48.821 ], 00:26:48.821 "product_name": "Malloc disk", 00:26:48.821 "block_size": 512, 00:26:48.821 "num_blocks": 65536, 00:26:48.821 "uuid": 
"531149a0-5671-46b9-a9a2-8b9c93825fd5", 00:26:48.821 "assigned_rate_limits": { 00:26:48.821 "rw_ios_per_sec": 0, 00:26:48.821 "rw_mbytes_per_sec": 0, 00:26:48.821 "r_mbytes_per_sec": 0, 00:26:48.821 "w_mbytes_per_sec": 0 00:26:48.821 }, 00:26:48.821 "claimed": false, 00:26:48.821 "zoned": false, 00:26:48.821 "supported_io_types": { 00:26:48.821 "read": true, 00:26:48.821 "write": true, 00:26:48.821 "unmap": true, 00:26:48.821 "flush": true, 00:26:48.821 "reset": true, 00:26:48.821 "nvme_admin": false, 00:26:48.821 "nvme_io": false, 00:26:48.821 "nvme_io_md": false, 00:26:48.821 "write_zeroes": true, 00:26:48.821 "zcopy": true, 00:26:48.821 "get_zone_info": false, 00:26:48.821 "zone_management": false, 00:26:48.821 "zone_append": false, 00:26:48.821 "compare": false, 00:26:48.821 "compare_and_write": false, 00:26:48.821 "abort": true, 00:26:48.821 "seek_hole": false, 00:26:48.821 "seek_data": false, 00:26:48.821 "copy": true, 00:26:48.821 "nvme_iov_md": false 00:26:48.821 }, 00:26:48.821 "memory_domains": [ 00:26:48.821 { 00:26:48.821 "dma_device_id": "system", 00:26:48.821 "dma_device_type": 1 00:26:48.821 }, 00:26:48.821 { 00:26:48.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:48.821 "dma_device_type": 2 00:26:48.821 } 00:26:48.821 ], 00:26:48.821 "driver_specific": {} 00:26:48.821 } 00:26:48.821 ] 00:26:48.821 15:20:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:48.821 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:48.821 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:48.821 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:49.079 BaseBdev3 00:26:49.079 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:26:49.079 15:20:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:49.079 15:20:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:49.079 15:20:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:49.079 15:20:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:49.079 15:20:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:49.079 15:20:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:49.079 15:20:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:49.337 [ 00:26:49.337 { 00:26:49.337 "name": "BaseBdev3", 00:26:49.337 "aliases": [ 00:26:49.337 "8e311c7c-6c6f-46a0-adad-1dd90f6b5b98" 00:26:49.337 ], 00:26:49.337 "product_name": "Malloc disk", 00:26:49.337 "block_size": 512, 00:26:49.337 "num_blocks": 65536, 00:26:49.337 "uuid": "8e311c7c-6c6f-46a0-adad-1dd90f6b5b98", 00:26:49.337 "assigned_rate_limits": { 00:26:49.337 "rw_ios_per_sec": 0, 00:26:49.337 "rw_mbytes_per_sec": 0, 00:26:49.337 "r_mbytes_per_sec": 0, 00:26:49.337 "w_mbytes_per_sec": 0 00:26:49.337 }, 00:26:49.337 "claimed": false, 00:26:49.337 "zoned": false, 00:26:49.337 
"supported_io_types": { 00:26:49.337 "read": true, 00:26:49.337 "write": true, 00:26:49.337 "unmap": true, 00:26:49.337 "flush": true, 00:26:49.337 "reset": true, 00:26:49.337 "nvme_admin": false, 00:26:49.337 "nvme_io": false, 00:26:49.337 "nvme_io_md": false, 00:26:49.337 "write_zeroes": true, 00:26:49.337 "zcopy": true, 00:26:49.337 "get_zone_info": false, 00:26:49.337 "zone_management": false, 00:26:49.337 "zone_append": false, 00:26:49.337 "compare": false, 00:26:49.337 "compare_and_write": false, 00:26:49.337 "abort": true, 00:26:49.337 "seek_hole": false, 00:26:49.337 "seek_data": false, 00:26:49.337 "copy": true, 00:26:49.337 "nvme_iov_md": false 00:26:49.337 }, 00:26:49.337 "memory_domains": [ 00:26:49.337 { 00:26:49.337 "dma_device_id": "system", 00:26:49.337 "dma_device_type": 1 00:26:49.337 }, 00:26:49.337 { 00:26:49.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.337 "dma_device_type": 2 00:26:49.337 } 00:26:49.337 ], 00:26:49.337 "driver_specific": {} 00:26:49.337 } 00:26:49.337 ] 00:26:49.337 15:20:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:49.337 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:49.337 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:49.337 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:49.595 [2024-07-23 15:20:44.880532] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:49.595 [2024-07-23 15:20:44.880593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:49.595 [2024-07-23 15:20:44.880634] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:49.595 [2024-07-23 15:20:44.882789] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:49.595 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:49.595 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:49.595 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:49.595 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:49.595 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:49.595 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:49.595 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:49.595 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:49.595 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:49.595 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:49.595 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:49.595 15:20:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.853 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:49.853 "name": "Existed_Raid", 00:26:49.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.853 "strip_size_kb": 64, 00:26:49.853 "state": "configuring", 00:26:49.853 "raid_level": "raid5f", 00:26:49.853 "superblock": false, 00:26:49.853 "num_base_bdevs": 3, 00:26:49.853 "num_base_bdevs_discovered": 2, 00:26:49.853 "num_base_bdevs_operational": 3, 00:26:49.853 "base_bdevs_list": [ 00:26:49.853 { 00:26:49.853 "name": "BaseBdev1", 00:26:49.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.853 "is_configured": false, 00:26:49.853 "data_offset": 0, 00:26:49.853 "data_size": 0 00:26:49.853 }, 00:26:49.853 { 00:26:49.853 "name": "BaseBdev2", 00:26:49.853 "uuid": "531149a0-5671-46b9-a9a2-8b9c93825fd5", 00:26:49.853 "is_configured": true, 00:26:49.853 "data_offset": 0, 00:26:49.853 "data_size": 65536 00:26:49.853 }, 00:26:49.853 { 00:26:49.853 "name": "BaseBdev3", 00:26:49.853 "uuid": "8e311c7c-6c6f-46a0-adad-1dd90f6b5b98", 00:26:49.853 "is_configured": true, 00:26:49.853 "data_offset": 0, 00:26:49.853 "data_size": 65536 00:26:49.853 } 00:26:49.853 ] 00:26:49.853 }' 00:26:49.853 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:49.853 15:20:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.112 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:50.371 [2024-07-23 15:20:45.608666] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:50.371 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:50.371 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:50.371 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:50.371 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:50.371 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:50.371 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:50.371 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:50.371 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:50.371 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:50.371 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:50.371 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.371 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.629 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:50.629 "name": "Existed_Raid", 00:26:50.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.629 "strip_size_kb": 64, 00:26:50.629 "state": "configuring", 00:26:50.629 
"raid_level": "raid5f", 00:26:50.629 "superblock": false, 00:26:50.629 "num_base_bdevs": 3, 00:26:50.629 "num_base_bdevs_discovered": 1, 00:26:50.629 "num_base_bdevs_operational": 3, 00:26:50.629 "base_bdevs_list": [ 00:26:50.629 { 00:26:50.629 "name": "BaseBdev1", 00:26:50.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.629 "is_configured": false, 00:26:50.629 "data_offset": 0, 00:26:50.629 "data_size": 0 00:26:50.629 }, 00:26:50.629 { 00:26:50.629 "name": null, 00:26:50.629 "uuid": "531149a0-5671-46b9-a9a2-8b9c93825fd5", 00:26:50.629 "is_configured": false, 00:26:50.629 "data_offset": 0, 00:26:50.629 "data_size": 65536 00:26:50.629 }, 00:26:50.629 { 00:26:50.629 "name": "BaseBdev3", 00:26:50.629 "uuid": "8e311c7c-6c6f-46a0-adad-1dd90f6b5b98", 00:26:50.629 "is_configured": true, 00:26:50.629 "data_offset": 0, 00:26:50.629 "data_size": 65536 00:26:50.629 } 00:26:50.629 ] 00:26:50.629 }' 00:26:50.629 15:20:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:50.629 15:20:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.887 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.887 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:51.145 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:26:51.145 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:51.403 [2024-07-23 15:20:46.588230] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:51.403 BaseBdev1 00:26:51.403 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:26:51.403 15:20:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:51.403 15:20:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:51.403 15:20:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:51.403 15:20:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:51.403 15:20:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:51.403 15:20:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:51.403 15:20:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:51.661 [ 00:26:51.661 { 00:26:51.661 "name": "BaseBdev1", 00:26:51.661 "aliases": [ 00:26:51.661 "c6d8c4bf-d988-42d6-b871-4d57ff3a3a10" 00:26:51.661 ], 00:26:51.661 "product_name": "Malloc disk", 00:26:51.661 "block_size": 512, 00:26:51.661 "num_blocks": 65536, 00:26:51.661 "uuid": "c6d8c4bf-d988-42d6-b871-4d57ff3a3a10", 00:26:51.661 "assigned_rate_limits": { 00:26:51.661 "rw_ios_per_sec": 0, 00:26:51.661 "rw_mbytes_per_sec": 0, 00:26:51.661 "r_mbytes_per_sec": 0, 00:26:51.661 "w_mbytes_per_sec": 0 00:26:51.661 }, 00:26:51.661 "claimed": true, 00:26:51.661 "claim_type": 
"exclusive_write", 00:26:51.661 "zoned": false, 00:26:51.661 "supported_io_types": { 00:26:51.661 "read": true, 00:26:51.661 "write": true, 00:26:51.661 "unmap": true, 00:26:51.661 "flush": true, 00:26:51.661 "reset": true, 00:26:51.661 "nvme_admin": false, 00:26:51.661 "nvme_io": false, 00:26:51.661 "nvme_io_md": false, 00:26:51.661 "write_zeroes": true, 00:26:51.661 "zcopy": true, 00:26:51.661 "get_zone_info": false, 00:26:51.661 "zone_management": false, 00:26:51.661 "zone_append": false, 00:26:51.661 "compare": false, 00:26:51.661 "compare_and_write": false, 00:26:51.661 "abort": true, 00:26:51.661 "seek_hole": false, 00:26:51.661 "seek_data": false, 00:26:51.661 "copy": true, 00:26:51.661 "nvme_iov_md": false 00:26:51.661 }, 00:26:51.661 "memory_domains": [ 00:26:51.661 { 00:26:51.661 "dma_device_id": "system", 00:26:51.661 "dma_device_type": 1 00:26:51.661 }, 00:26:51.661 { 00:26:51.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.661 "dma_device_type": 2 00:26:51.661 } 00:26:51.661 ], 00:26:51.661 "driver_specific": {} 00:26:51.661 } 00:26:51.661 ] 00:26:51.661 15:20:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:51.661 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:51.661 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:51.661 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:51.661 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:51.661 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:51.661 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:51.661 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:51.661 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:51.661 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:51.661 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:51.661 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:51.661 15:20:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.919 15:20:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:51.919 "name": "Existed_Raid", 00:26:51.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.919 "strip_size_kb": 64, 00:26:51.919 "state": "configuring", 00:26:51.919 "raid_level": "raid5f", 00:26:51.919 "superblock": false, 00:26:51.919 "num_base_bdevs": 3, 00:26:51.919 "num_base_bdevs_discovered": 2, 00:26:51.919 "num_base_bdevs_operational": 3, 00:26:51.919 "base_bdevs_list": [ 00:26:51.919 { 00:26:51.919 "name": "BaseBdev1", 00:26:51.919 "uuid": "c6d8c4bf-d988-42d6-b871-4d57ff3a3a10", 00:26:51.919 "is_configured": true, 00:26:51.919 "data_offset": 0, 00:26:51.919 "data_size": 65536 00:26:51.919 }, 00:26:51.919 { 00:26:51.919 "name": null, 00:26:51.919 "uuid": "531149a0-5671-46b9-a9a2-8b9c93825fd5", 00:26:51.919 "is_configured": false, 
00:26:51.919 "data_offset": 0, 00:26:51.919 "data_size": 65536 00:26:51.919 }, 00:26:51.919 { 00:26:51.919 "name": "BaseBdev3", 00:26:51.919 "uuid": "8e311c7c-6c6f-46a0-adad-1dd90f6b5b98", 00:26:51.919 "is_configured": true, 00:26:51.919 "data_offset": 0, 00:26:51.919 "data_size": 65536 00:26:51.919 } 00:26:51.919 ] 00:26:51.919 }' 00:26:51.919 15:20:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:51.919 15:20:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.177 15:20:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.177 15:20:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:52.435 15:20:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:26:52.435 15:20:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:52.693 [2024-07-23 15:20:48.072684] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:52.693 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:52.693 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:52.693 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:52.693 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:52.693 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:52.693 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:52.693 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:52.693 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:52.693 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:52.693 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:52.693 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.693 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:52.951 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:52.951 "name": "Existed_Raid", 00:26:52.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.951 "strip_size_kb": 64, 00:26:52.951 "state": "configuring", 00:26:52.951 "raid_level": "raid5f", 00:26:52.951 "superblock": false, 00:26:52.951 "num_base_bdevs": 3, 00:26:52.951 "num_base_bdevs_discovered": 1, 00:26:52.951 "num_base_bdevs_operational": 3, 00:26:52.951 "base_bdevs_list": [ 00:26:52.951 { 00:26:52.951 "name": "BaseBdev1", 00:26:52.951 "uuid": "c6d8c4bf-d988-42d6-b871-4d57ff3a3a10", 00:26:52.951 "is_configured": true, 00:26:52.951 "data_offset": 0, 00:26:52.951 "data_size": 65536 00:26:52.951 }, 00:26:52.951 { 00:26:52.951 "name": null, 
00:26:52.951 "uuid": "531149a0-5671-46b9-a9a2-8b9c93825fd5", 00:26:52.951 "is_configured": false, 00:26:52.951 "data_offset": 0, 00:26:52.951 "data_size": 65536 00:26:52.951 }, 00:26:52.951 { 00:26:52.951 "name": null, 00:26:52.951 "uuid": "8e311c7c-6c6f-46a0-adad-1dd90f6b5b98", 00:26:52.951 "is_configured": false, 00:26:52.951 "data_offset": 0, 00:26:52.951 "data_size": 65536 00:26:52.951 } 00:26:52.951 ] 00:26:52.951 }' 00:26:52.951 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:52.951 15:20:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.517 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.517 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:53.517 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:53.517 15:20:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:53.775 [2024-07-23 15:20:49.096953] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:53.775 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:53.775 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:53.775 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:53.775 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:53.775 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:53.775 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:53.775 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:53.775 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:53.775 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:53.775 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:53.775 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.775 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:54.034 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:54.034 "name": "Existed_Raid", 00:26:54.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.034 "strip_size_kb": 64, 00:26:54.034 "state": "configuring", 00:26:54.034 "raid_level": "raid5f", 00:26:54.034 "superblock": false, 00:26:54.034 "num_base_bdevs": 3, 00:26:54.034 "num_base_bdevs_discovered": 2, 00:26:54.034 "num_base_bdevs_operational": 3, 00:26:54.034 "base_bdevs_list": [ 00:26:54.034 { 00:26:54.034 "name": "BaseBdev1", 00:26:54.034 "uuid": "c6d8c4bf-d988-42d6-b871-4d57ff3a3a10", 00:26:54.034 "is_configured": true, 
00:26:54.034 "data_offset": 0, 00:26:54.034 "data_size": 65536 00:26:54.034 }, 00:26:54.034 { 00:26:54.034 "name": null, 00:26:54.034 "uuid": "531149a0-5671-46b9-a9a2-8b9c93825fd5", 00:26:54.034 "is_configured": false, 00:26:54.034 "data_offset": 0, 00:26:54.034 "data_size": 65536 00:26:54.034 }, 00:26:54.034 { 00:26:54.034 "name": "BaseBdev3", 00:26:54.034 "uuid": "8e311c7c-6c6f-46a0-adad-1dd90f6b5b98", 00:26:54.034 "is_configured": true, 00:26:54.034 "data_offset": 0, 00:26:54.034 "data_size": 65536 00:26:54.034 } 00:26:54.034 ] 00:26:54.034 }' 00:26:54.034 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:54.034 15:20:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.292 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:54.292 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.549 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:54.549 15:20:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:54.806 [2024-07-23 15:20:50.121208] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:54.806 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:54.806 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:54.806 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:54.806 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:54.806 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:54.806 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:54.806 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:54.806 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:54.807 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:54.807 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:54.807 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.807 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:55.064 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:55.064 "name": "Existed_Raid", 00:26:55.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.064 "strip_size_kb": 64, 00:26:55.064 "state": "configuring", 00:26:55.064 "raid_level": "raid5f", 00:26:55.064 "superblock": false, 00:26:55.064 "num_base_bdevs": 3, 00:26:55.064 "num_base_bdevs_discovered": 1, 00:26:55.064 "num_base_bdevs_operational": 3, 00:26:55.064 "base_bdevs_list": [ 00:26:55.064 { 00:26:55.064 "name": null, 00:26:55.064 "uuid": 
"c6d8c4bf-d988-42d6-b871-4d57ff3a3a10", 00:26:55.064 "is_configured": false, 00:26:55.064 "data_offset": 0, 00:26:55.064 "data_size": 65536 00:26:55.064 }, 00:26:55.064 { 00:26:55.064 "name": null, 00:26:55.064 "uuid": "531149a0-5671-46b9-a9a2-8b9c93825fd5", 00:26:55.064 "is_configured": false, 00:26:55.064 "data_offset": 0, 00:26:55.064 "data_size": 65536 00:26:55.064 }, 00:26:55.064 { 00:26:55.064 "name": "BaseBdev3", 00:26:55.064 "uuid": "8e311c7c-6c6f-46a0-adad-1dd90f6b5b98", 00:26:55.064 "is_configured": true, 00:26:55.064 "data_offset": 0, 00:26:55.064 "data_size": 65536 00:26:55.064 } 00:26:55.064 ] 00:26:55.064 }' 00:26:55.064 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:55.064 15:20:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.322 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.322 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:55.579 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:55.579 15:20:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:55.836 [2024-07-23 15:20:51.017874] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:55.836 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:55.836 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:55.836 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:55.836 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:55.836 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:55.836 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:55.836 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:55.836 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:55.836 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:55.836 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:55.836 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.836 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:56.094 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:56.094 "name": "Existed_Raid", 00:26:56.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.094 "strip_size_kb": 64, 00:26:56.094 "state": "configuring", 00:26:56.094 "raid_level": "raid5f", 00:26:56.094 "superblock": false, 00:26:56.094 "num_base_bdevs": 3, 00:26:56.094 "num_base_bdevs_discovered": 2, 00:26:56.094 
"num_base_bdevs_operational": 3, 00:26:56.094 "base_bdevs_list": [ 00:26:56.094 { 00:26:56.094 "name": null, 00:26:56.094 "uuid": "c6d8c4bf-d988-42d6-b871-4d57ff3a3a10", 00:26:56.094 "is_configured": false, 00:26:56.094 "data_offset": 0, 00:26:56.094 "data_size": 65536 00:26:56.094 }, 00:26:56.094 { 00:26:56.094 "name": "BaseBdev2", 00:26:56.094 "uuid": "531149a0-5671-46b9-a9a2-8b9c93825fd5", 00:26:56.094 "is_configured": true, 00:26:56.094 "data_offset": 0, 00:26:56.094 "data_size": 65536 00:26:56.094 }, 00:26:56.094 { 00:26:56.094 "name": "BaseBdev3", 00:26:56.094 "uuid": "8e311c7c-6c6f-46a0-adad-1dd90f6b5b98", 00:26:56.094 "is_configured": true, 00:26:56.094 "data_offset": 0, 00:26:56.094 "data_size": 65536 00:26:56.094 } 00:26:56.094 ] 00:26:56.094 }' 00:26:56.094 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:56.094 15:20:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.351 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.351 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:56.609 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:56.609 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.609 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:56.609 15:20:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c6d8c4bf-d988-42d6-b871-4d57ff3a3a10 00:26:56.867 [2024-07-23 15:20:52.225322] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:56.867 [2024-07-23 15:20:52.225375] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007880 00:26:56.867 [2024-07-23 15:20:52.225388] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:26:56.867 [2024-07-23 15:20:52.225456] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002460 00:26:56.867 [2024-07-23 15:20:52.226280] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007880 00:26:56.867 [2024-07-23 15:20:52.226402] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007880 00:26:56.867 [2024-07-23 15:20:52.226707] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:56.867 NewBaseBdev 00:26:56.867 15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:56.867 15:20:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:26:56.867 15:20:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:56.867 15:20:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:56.867 15:20:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:56.867 15:20:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:26:56.867 15:20:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:57.125 15:20:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:57.384 [ 00:26:57.384 { 00:26:57.384 "name": "NewBaseBdev", 00:26:57.384 "aliases": [ 00:26:57.384 "c6d8c4bf-d988-42d6-b871-4d57ff3a3a10" 00:26:57.384 ], 00:26:57.384 "product_name": "Malloc disk", 00:26:57.384 "block_size": 512, 00:26:57.384 "num_blocks": 65536, 00:26:57.384 "uuid": "c6d8c4bf-d988-42d6-b871-4d57ff3a3a10", 00:26:57.384 "assigned_rate_limits": { 00:26:57.384 "rw_ios_per_sec": 0, 00:26:57.384 "rw_mbytes_per_sec": 0, 00:26:57.384 "r_mbytes_per_sec": 0, 00:26:57.384 "w_mbytes_per_sec": 0 00:26:57.384 }, 00:26:57.384 "claimed": true, 00:26:57.384 "claim_type": "exclusive_write", 00:26:57.384 "zoned": false, 00:26:57.384 "supported_io_types": { 00:26:57.384 "read": true, 00:26:57.384 "write": true, 00:26:57.384 "unmap": true, 00:26:57.384 "flush": true, 00:26:57.384 "reset": true, 00:26:57.384 "nvme_admin": false, 00:26:57.384 "nvme_io": false, 00:26:57.384 "nvme_io_md": false, 00:26:57.384 "write_zeroes": true, 00:26:57.384 "zcopy": true, 00:26:57.384 "get_zone_info": false, 00:26:57.384 "zone_management": false, 00:26:57.384 "zone_append": false, 00:26:57.384 "compare": false, 00:26:57.384 "compare_and_write": false, 00:26:57.384 "abort": true, 00:26:57.384 "seek_hole": false, 00:26:57.384 "seek_data": false, 00:26:57.384 "copy": true, 00:26:57.384 "nvme_iov_md": false 00:26:57.384 }, 00:26:57.384 "memory_domains": [ 00:26:57.384 { 00:26:57.384 "dma_device_id": "system", 00:26:57.385 "dma_device_type": 1 00:26:57.385 }, 00:26:57.385 { 00:26:57.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.385 "dma_device_type": 2 00:26:57.385 } 00:26:57.385 ], 00:26:57.385 "driver_specific": {} 00:26:57.385 } 00:26:57.385 ] 00:26:57.385 15:20:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:57.385 15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:57.385 15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:57.385 15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:57.385 15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:57.385 15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:57.385 15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:57.385 15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:57.385 15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:57.385 15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:57.385 15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:57.385 15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.385 
15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:57.643 15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:57.643 "name": "Existed_Raid", 00:26:57.643 "uuid": "b4e94019-030b-4b9d-a87f-aaba19944727", 00:26:57.643 "strip_size_kb": 64, 00:26:57.643 "state": "online", 00:26:57.643 "raid_level": "raid5f", 00:26:57.643 "superblock": false, 00:26:57.643 "num_base_bdevs": 3, 00:26:57.643 "num_base_bdevs_discovered": 3, 00:26:57.643 "num_base_bdevs_operational": 3, 00:26:57.643 "base_bdevs_list": [ 00:26:57.643 { 00:26:57.643 "name": "NewBaseBdev", 00:26:57.643 "uuid": "c6d8c4bf-d988-42d6-b871-4d57ff3a3a10", 00:26:57.643 "is_configured": true, 00:26:57.643 "data_offset": 0, 00:26:57.643 "data_size": 65536 00:26:57.643 }, 00:26:57.643 { 00:26:57.643 "name": "BaseBdev2", 00:26:57.643 "uuid": "531149a0-5671-46b9-a9a2-8b9c93825fd5", 00:26:57.643 "is_configured": true, 00:26:57.643 "data_offset": 0, 00:26:57.643 "data_size": 65536 00:26:57.643 }, 00:26:57.643 { 00:26:57.643 "name": "BaseBdev3", 00:26:57.643 "uuid": "8e311c7c-6c6f-46a0-adad-1dd90f6b5b98", 00:26:57.643 "is_configured": true, 00:26:57.643 "data_offset": 0, 00:26:57.643 "data_size": 65536 00:26:57.643 } 00:26:57.643 ] 00:26:57.643 }' 00:26:57.643 15:20:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:57.643 15:20:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.900 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:57.901 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:57.901 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:57.901 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:57.901 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:57.901 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:57.901 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:57.901 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:58.157 [2024-07-23 15:20:53.453939] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:58.157 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:58.157 "name": "Existed_Raid", 00:26:58.157 "aliases": [ 00:26:58.158 "b4e94019-030b-4b9d-a87f-aaba19944727" 00:26:58.158 ], 00:26:58.158 "product_name": "Raid Volume", 00:26:58.158 "block_size": 512, 00:26:58.158 "num_blocks": 131072, 00:26:58.158 "uuid": "b4e94019-030b-4b9d-a87f-aaba19944727", 00:26:58.158 "assigned_rate_limits": { 00:26:58.158 "rw_ios_per_sec": 0, 00:26:58.158 "rw_mbytes_per_sec": 0, 00:26:58.158 "r_mbytes_per_sec": 0, 00:26:58.158 "w_mbytes_per_sec": 0 00:26:58.158 }, 00:26:58.158 "claimed": false, 00:26:58.158 "zoned": false, 00:26:58.158 "supported_io_types": { 00:26:58.158 "read": true, 00:26:58.158 "write": true, 00:26:58.158 "unmap": false, 00:26:58.158 "flush": false, 00:26:58.158 "reset": true, 00:26:58.158 "nvme_admin": false, 00:26:58.158 "nvme_io": false, 00:26:58.158 
"nvme_io_md": false, 00:26:58.158 "write_zeroes": true, 00:26:58.158 "zcopy": false, 00:26:58.158 "get_zone_info": false, 00:26:58.158 "zone_management": false, 00:26:58.158 "zone_append": false, 00:26:58.158 "compare": false, 00:26:58.158 "compare_and_write": false, 00:26:58.158 "abort": false, 00:26:58.158 "seek_hole": false, 00:26:58.158 "seek_data": false, 00:26:58.158 "copy": false, 00:26:58.158 "nvme_iov_md": false 00:26:58.158 }, 00:26:58.158 "driver_specific": { 00:26:58.158 "raid": { 00:26:58.158 "uuid": "b4e94019-030b-4b9d-a87f-aaba19944727", 00:26:58.158 "strip_size_kb": 64, 00:26:58.158 "state": "online", 00:26:58.158 "raid_level": "raid5f", 00:26:58.158 "superblock": false, 00:26:58.158 "num_base_bdevs": 3, 00:26:58.158 "num_base_bdevs_discovered": 3, 00:26:58.158 "num_base_bdevs_operational": 3, 00:26:58.158 "base_bdevs_list": [ 00:26:58.158 { 00:26:58.158 "name": "NewBaseBdev", 00:26:58.158 "uuid": "c6d8c4bf-d988-42d6-b871-4d57ff3a3a10", 00:26:58.158 "is_configured": true, 00:26:58.158 "data_offset": 0, 00:26:58.158 "data_size": 65536 00:26:58.158 }, 00:26:58.158 { 00:26:58.158 "name": "BaseBdev2", 00:26:58.158 "uuid": "531149a0-5671-46b9-a9a2-8b9c93825fd5", 00:26:58.158 "is_configured": true, 00:26:58.158 "data_offset": 0, 00:26:58.158 "data_size": 65536 00:26:58.158 }, 00:26:58.158 { 00:26:58.158 "name": "BaseBdev3", 00:26:58.158 "uuid": "8e311c7c-6c6f-46a0-adad-1dd90f6b5b98", 00:26:58.158 "is_configured": true, 00:26:58.158 "data_offset": 0, 00:26:58.158 "data_size": 65536 00:26:58.158 } 00:26:58.158 ] 00:26:58.158 } 00:26:58.158 } 00:26:58.158 }' 00:26:58.158 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:58.158 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:58.158 BaseBdev2 00:26:58.158 BaseBdev3' 00:26:58.158 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:58.158 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:58.158 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:58.415 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:58.416 "name": "NewBaseBdev", 00:26:58.416 "aliases": [ 00:26:58.416 "c6d8c4bf-d988-42d6-b871-4d57ff3a3a10" 00:26:58.416 ], 00:26:58.416 "product_name": "Malloc disk", 00:26:58.416 "block_size": 512, 00:26:58.416 "num_blocks": 65536, 00:26:58.416 "uuid": "c6d8c4bf-d988-42d6-b871-4d57ff3a3a10", 00:26:58.416 "assigned_rate_limits": { 00:26:58.416 "rw_ios_per_sec": 0, 00:26:58.416 "rw_mbytes_per_sec": 0, 00:26:58.416 "r_mbytes_per_sec": 0, 00:26:58.416 "w_mbytes_per_sec": 0 00:26:58.416 }, 00:26:58.416 "claimed": true, 00:26:58.416 "claim_type": "exclusive_write", 00:26:58.416 "zoned": false, 00:26:58.416 "supported_io_types": { 00:26:58.416 "read": true, 00:26:58.416 "write": true, 00:26:58.416 "unmap": true, 00:26:58.416 "flush": true, 00:26:58.416 "reset": true, 00:26:58.416 "nvme_admin": false, 00:26:58.416 "nvme_io": false, 00:26:58.416 "nvme_io_md": false, 00:26:58.416 "write_zeroes": true, 00:26:58.416 "zcopy": true, 00:26:58.416 "get_zone_info": false, 00:26:58.416 "zone_management": false, 00:26:58.416 "zone_append": false, 00:26:58.416 "compare": false, 00:26:58.416 
"compare_and_write": false, 00:26:58.416 "abort": true, 00:26:58.416 "seek_hole": false, 00:26:58.416 "seek_data": false, 00:26:58.416 "copy": true, 00:26:58.416 "nvme_iov_md": false 00:26:58.416 }, 00:26:58.416 "memory_domains": [ 00:26:58.416 { 00:26:58.416 "dma_device_id": "system", 00:26:58.416 "dma_device_type": 1 00:26:58.416 }, 00:26:58.416 { 00:26:58.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:58.416 "dma_device_type": 2 00:26:58.416 } 00:26:58.416 ], 00:26:58.416 "driver_specific": {} 00:26:58.416 }' 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:58.416 15:20:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:58.673 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:58.673 "name": "BaseBdev2", 00:26:58.673 "aliases": [ 00:26:58.673 "531149a0-5671-46b9-a9a2-8b9c93825fd5" 00:26:58.673 ], 00:26:58.673 "product_name": "Malloc disk", 00:26:58.673 "block_size": 512, 00:26:58.673 "num_blocks": 65536, 00:26:58.673 "uuid": "531149a0-5671-46b9-a9a2-8b9c93825fd5", 00:26:58.673 "assigned_rate_limits": { 00:26:58.673 "rw_ios_per_sec": 0, 00:26:58.673 "rw_mbytes_per_sec": 0, 00:26:58.673 "r_mbytes_per_sec": 0, 00:26:58.673 "w_mbytes_per_sec": 0 00:26:58.673 }, 00:26:58.673 "claimed": true, 00:26:58.673 "claim_type": "exclusive_write", 00:26:58.673 "zoned": false, 00:26:58.673 "supported_io_types": { 00:26:58.673 "read": true, 00:26:58.673 "write": true, 00:26:58.673 "unmap": true, 00:26:58.673 "flush": true, 00:26:58.673 "reset": true, 00:26:58.673 "nvme_admin": false, 00:26:58.673 "nvme_io": false, 00:26:58.673 "nvme_io_md": false, 00:26:58.673 "write_zeroes": true, 00:26:58.673 "zcopy": true, 00:26:58.673 "get_zone_info": false, 00:26:58.673 "zone_management": false, 00:26:58.673 "zone_append": false, 00:26:58.673 "compare": false, 00:26:58.673 "compare_and_write": false, 00:26:58.673 "abort": true, 00:26:58.673 "seek_hole": false, 00:26:58.673 "seek_data": false, 00:26:58.673 "copy": true, 00:26:58.673 
"nvme_iov_md": false 00:26:58.673 }, 00:26:58.673 "memory_domains": [ 00:26:58.673 { 00:26:58.673 "dma_device_id": "system", 00:26:58.673 "dma_device_type": 1 00:26:58.673 }, 00:26:58.673 { 00:26:58.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:58.673 "dma_device_type": 2 00:26:58.673 } 00:26:58.673 ], 00:26:58.673 "driver_specific": {} 00:26:58.673 }' 00:26:58.673 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:58.673 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:58.673 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:58.673 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:58.673 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:58.673 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:58.673 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.673 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.673 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:58.673 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.930 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.930 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:58.930 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:58.930 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:58.930 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:59.188 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:59.188 "name": "BaseBdev3", 00:26:59.188 "aliases": [ 00:26:59.188 "8e311c7c-6c6f-46a0-adad-1dd90f6b5b98" 00:26:59.188 ], 00:26:59.188 "product_name": "Malloc disk", 00:26:59.188 "block_size": 512, 00:26:59.188 "num_blocks": 65536, 00:26:59.188 "uuid": "8e311c7c-6c6f-46a0-adad-1dd90f6b5b98", 00:26:59.188 "assigned_rate_limits": { 00:26:59.188 "rw_ios_per_sec": 0, 00:26:59.188 "rw_mbytes_per_sec": 0, 00:26:59.188 "r_mbytes_per_sec": 0, 00:26:59.188 "w_mbytes_per_sec": 0 00:26:59.188 }, 00:26:59.188 "claimed": true, 00:26:59.188 "claim_type": "exclusive_write", 00:26:59.188 "zoned": false, 00:26:59.188 "supported_io_types": { 00:26:59.188 "read": true, 00:26:59.188 "write": true, 00:26:59.188 "unmap": true, 00:26:59.188 "flush": true, 00:26:59.188 "reset": true, 00:26:59.188 "nvme_admin": false, 00:26:59.188 "nvme_io": false, 00:26:59.188 "nvme_io_md": false, 00:26:59.188 "write_zeroes": true, 00:26:59.188 "zcopy": true, 00:26:59.188 "get_zone_info": false, 00:26:59.188 "zone_management": false, 00:26:59.188 "zone_append": false, 00:26:59.188 "compare": false, 00:26:59.188 "compare_and_write": false, 00:26:59.188 "abort": true, 00:26:59.188 "seek_hole": false, 00:26:59.188 "seek_data": false, 00:26:59.188 "copy": true, 00:26:59.188 "nvme_iov_md": false 00:26:59.188 }, 00:26:59.188 "memory_domains": [ 00:26:59.188 { 00:26:59.188 "dma_device_id": "system", 00:26:59.188 "dma_device_type": 1 
00:26:59.188 }, 00:26:59.188 { 00:26:59.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.188 "dma_device_type": 2 00:26:59.188 } 00:26:59.188 ], 00:26:59.188 "driver_specific": {} 00:26:59.188 }' 00:26:59.188 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:59.188 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:59.188 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:59.188 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:59.188 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:59.188 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:59.188 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:59.188 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:59.188 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:59.188 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:59.188 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:59.188 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:59.188 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:59.446 [2024-07-23 15:20:54.702031] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:59.446 [2024-07-23 15:20:54.702076] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:59.446 [2024-07-23 15:20:54.702164] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:59.446 [2024-07-23 15:20:54.702413] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:59.446 [2024-07-23 15:20:54.702429] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007880 name Existed_Raid, state offline 00:26:59.446 15:20:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 112518 00:26:59.446 15:20:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 112518 ']' 00:26:59.446 15:20:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # kill -0 112518 00:26:59.446 15:20:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # uname 00:26:59.446 15:20:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:59.446 15:20:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112518 00:26:59.446 killing process with pid 112518 00:26:59.446 15:20:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:59.446 15:20:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:59.446 15:20:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112518' 00:26:59.446 15:20:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@967 -- # kill 112518 
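The teardown traced in the surrounding records reduces to two steps: deleting the raid bdev over the test's private RPC socket, then stopping the bdev_svc app that hosted it. A minimal sketch of that sequence, assuming the socket path shown in the log and a raid_pid variable holding the pid reported above (112518 in this run); the real killprocess helper in common/autotest_common.sh adds extra pid and process-name checks:

  # sketch (assumed variables), not the verbatim helper: delete the raid bdev, then stop the RPC target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
  kill "$raid_pid"
  wait "$raid_pid" || true   # the test waits for bdev_svc to exit before printing its timing summary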
00:26:59.446 [2024-07-23 15:20:54.765437] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:59.446 15:20:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # wait 112518 00:26:59.446 [2024-07-23 15:20:54.800684] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:59.704 15:20:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:26:59.704 00:26:59.704 real 0m21.282s 00:26:59.704 user 0m37.144s 00:26:59.704 sys 0m4.649s 00:26:59.704 15:20:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:59.704 ************************************ 00:26:59.704 15:20:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.704 END TEST raid5f_state_function_test 00:26:59.704 ************************************ 00:26:59.704 15:20:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:59.704 15:20:55 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:26:59.704 15:20:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:59.704 15:20:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.704 15:20:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:59.704 ************************************ 00:26:59.704 START TEST raid5f_state_function_test_sb 00:26:59.704 ************************************ 00:26:59.704 15:20:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 3 true 00:26:59.704 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:26:59.704 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:26:59.704 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:59.704 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:59.704 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 
00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=113363 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 113363' 00:26:59.705 Process raid pid: 113363 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 113363 /var/tmp/spdk-raid.sock 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 113363 ']' 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:59.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:59.705 15:20:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.963 [2024-07-23 15:20:55.175553] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
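The records above show the superblock variant of the test starting its own bdev_svc app on a private RPC socket and blocking in waitforlisten until that socket answers. A minimal sketch of the same launch-and-wait pattern, assuming the paths shown in the log; rpc_get_methods stands in here for the fuller polling done by the real waitforlisten helper:

  # sketch: start a bare bdev_svc target with bdev_raid debug logging, then wait for its RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done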
00:26:59.963 [2024-07-23 15:20:55.175690] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.963 [2024-07-23 15:20:55.317838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.963 [2024-07-23 15:20:55.361919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.222 [2024-07-23 15:20:55.407056] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:00.222 [2024-07-23 15:20:55.617060] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:00.222 [2024-07-23 15:20:55.617123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:00.222 [2024-07-23 15:20:55.617136] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:00.222 [2024-07-23 15:20:55.617149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:00.222 [2024-07-23 15:20:55.617162] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:00.222 [2024-07-23 15:20:55.617174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.222 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:00.479 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:00.479 "name": 
"Existed_Raid", 00:27:00.479 "uuid": "0fb1a660-e80b-4932-adc8-6dfb72a42853", 00:27:00.479 "strip_size_kb": 64, 00:27:00.479 "state": "configuring", 00:27:00.479 "raid_level": "raid5f", 00:27:00.479 "superblock": true, 00:27:00.479 "num_base_bdevs": 3, 00:27:00.479 "num_base_bdevs_discovered": 0, 00:27:00.479 "num_base_bdevs_operational": 3, 00:27:00.479 "base_bdevs_list": [ 00:27:00.479 { 00:27:00.479 "name": "BaseBdev1", 00:27:00.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.479 "is_configured": false, 00:27:00.479 "data_offset": 0, 00:27:00.479 "data_size": 0 00:27:00.479 }, 00:27:00.479 { 00:27:00.479 "name": "BaseBdev2", 00:27:00.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.479 "is_configured": false, 00:27:00.479 "data_offset": 0, 00:27:00.479 "data_size": 0 00:27:00.479 }, 00:27:00.479 { 00:27:00.479 "name": "BaseBdev3", 00:27:00.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.479 "is_configured": false, 00:27:00.479 "data_offset": 0, 00:27:00.479 "data_size": 0 00:27:00.479 } 00:27:00.479 ] 00:27:00.479 }' 00:27:00.479 15:20:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:00.479 15:20:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:00.736 15:20:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:00.993 [2024-07-23 15:20:56.405061] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:00.993 [2024-07-23 15:20:56.405122] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:27:01.251 15:20:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:01.251 [2024-07-23 15:20:56.621179] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:01.251 [2024-07-23 15:20:56.621263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:01.251 [2024-07-23 15:20:56.621275] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:01.251 [2024-07-23 15:20:56.621288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:01.251 [2024-07-23 15:20:56.621296] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:01.251 [2024-07-23 15:20:56.621308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:01.251 15:20:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:01.510 [2024-07-23 15:20:56.798895] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:01.510 BaseBdev1 00:27:01.510 15:20:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:27:01.510 15:20:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:27:01.510 15:20:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:01.510 15:20:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:01.510 15:20:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:01.510 15:20:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:01.510 15:20:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:01.767 15:20:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:02.027 [ 00:27:02.027 { 00:27:02.027 "name": "BaseBdev1", 00:27:02.027 "aliases": [ 00:27:02.027 "9f5bf543-9b47-4df5-b327-9c70add67301" 00:27:02.027 ], 00:27:02.027 "product_name": "Malloc disk", 00:27:02.027 "block_size": 512, 00:27:02.027 "num_blocks": 65536, 00:27:02.027 "uuid": "9f5bf543-9b47-4df5-b327-9c70add67301", 00:27:02.027 "assigned_rate_limits": { 00:27:02.027 "rw_ios_per_sec": 0, 00:27:02.027 "rw_mbytes_per_sec": 0, 00:27:02.027 "r_mbytes_per_sec": 0, 00:27:02.027 "w_mbytes_per_sec": 0 00:27:02.027 }, 00:27:02.027 "claimed": true, 00:27:02.027 "claim_type": "exclusive_write", 00:27:02.027 "zoned": false, 00:27:02.027 "supported_io_types": { 00:27:02.027 "read": true, 00:27:02.027 "write": true, 00:27:02.027 "unmap": true, 00:27:02.027 "flush": true, 00:27:02.027 "reset": true, 00:27:02.027 "nvme_admin": false, 00:27:02.027 "nvme_io": false, 00:27:02.027 "nvme_io_md": false, 00:27:02.027 "write_zeroes": true, 00:27:02.027 "zcopy": true, 00:27:02.027 "get_zone_info": false, 00:27:02.027 "zone_management": false, 00:27:02.027 "zone_append": false, 00:27:02.027 "compare": false, 00:27:02.027 "compare_and_write": false, 00:27:02.027 "abort": true, 00:27:02.027 "seek_hole": false, 00:27:02.027 "seek_data": false, 00:27:02.027 "copy": true, 00:27:02.027 "nvme_iov_md": false 00:27:02.027 }, 00:27:02.027 "memory_domains": [ 00:27:02.027 { 00:27:02.027 "dma_device_id": "system", 00:27:02.027 "dma_device_type": 1 00:27:02.027 }, 00:27:02.027 { 00:27:02.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.027 "dma_device_type": 2 00:27:02.027 } 00:27:02.027 ], 00:27:02.027 "driver_specific": {} 00:27:02.027 } 00:27:02.027 ] 00:27:02.027 15:20:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:02.027 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:27:02.027 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:02.027 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:02.027 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:02.027 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:02.027 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:02.027 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.027 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.027 15:20:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.027 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.027 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:02.027 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.286 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:02.286 "name": "Existed_Raid", 00:27:02.286 "uuid": "eed48a97-b9d8-40ed-b258-5941f0a9b505", 00:27:02.286 "strip_size_kb": 64, 00:27:02.286 "state": "configuring", 00:27:02.286 "raid_level": "raid5f", 00:27:02.286 "superblock": true, 00:27:02.286 "num_base_bdevs": 3, 00:27:02.286 "num_base_bdevs_discovered": 1, 00:27:02.286 "num_base_bdevs_operational": 3, 00:27:02.286 "base_bdevs_list": [ 00:27:02.286 { 00:27:02.286 "name": "BaseBdev1", 00:27:02.286 "uuid": "9f5bf543-9b47-4df5-b327-9c70add67301", 00:27:02.286 "is_configured": true, 00:27:02.286 "data_offset": 2048, 00:27:02.286 "data_size": 63488 00:27:02.286 }, 00:27:02.286 { 00:27:02.286 "name": "BaseBdev2", 00:27:02.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.286 "is_configured": false, 00:27:02.286 "data_offset": 0, 00:27:02.286 "data_size": 0 00:27:02.286 }, 00:27:02.286 { 00:27:02.286 "name": "BaseBdev3", 00:27:02.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.286 "is_configured": false, 00:27:02.286 "data_offset": 0, 00:27:02.286 "data_size": 0 00:27:02.286 } 00:27:02.286 ] 00:27:02.286 }' 00:27:02.286 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:02.286 15:20:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.545 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:02.545 [2024-07-23 15:20:57.891267] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:02.545 [2024-07-23 15:20:57.891337] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:27:02.545 15:20:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:02.804 [2024-07-23 15:20:58.071384] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:02.804 [2024-07-23 15:20:58.073545] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:02.804 [2024-07-23 15:20:58.073594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:02.804 [2024-07-23 15:20:58.073605] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:02.804 [2024-07-23 15:20:58.073618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:02.804 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:27:02.804 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:02.804 15:20:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:27:02.804 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:02.804 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:02.804 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:02.804 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:02.804 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:02.804 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.804 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.804 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.804 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.804 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.804 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:03.064 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:03.064 "name": "Existed_Raid", 00:27:03.064 "uuid": "2fea4bb7-af34-49ca-9681-218c7f7dca01", 00:27:03.064 "strip_size_kb": 64, 00:27:03.064 "state": "configuring", 00:27:03.064 "raid_level": "raid5f", 00:27:03.064 "superblock": true, 00:27:03.064 "num_base_bdevs": 3, 00:27:03.064 "num_base_bdevs_discovered": 1, 00:27:03.064 "num_base_bdevs_operational": 3, 00:27:03.064 "base_bdevs_list": [ 00:27:03.064 { 00:27:03.064 "name": "BaseBdev1", 00:27:03.064 "uuid": "9f5bf543-9b47-4df5-b327-9c70add67301", 00:27:03.064 "is_configured": true, 00:27:03.064 "data_offset": 2048, 00:27:03.064 "data_size": 63488 00:27:03.064 }, 00:27:03.064 { 00:27:03.064 "name": "BaseBdev2", 00:27:03.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.064 "is_configured": false, 00:27:03.064 "data_offset": 0, 00:27:03.064 "data_size": 0 00:27:03.064 }, 00:27:03.064 { 00:27:03.064 "name": "BaseBdev3", 00:27:03.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.064 "is_configured": false, 00:27:03.064 "data_offset": 0, 00:27:03.064 "data_size": 0 00:27:03.064 } 00:27:03.064 ] 00:27:03.064 }' 00:27:03.064 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:03.064 15:20:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.322 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:03.581 [2024-07-23 15:20:58.873344] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:03.581 BaseBdev2 00:27:03.581 15:20:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:27:03.581 15:20:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:27:03.581 15:20:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:03.581 15:20:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:03.581 15:20:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:03.581 15:20:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:03.581 15:20:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:03.841 15:20:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:04.100 [ 00:27:04.100 { 00:27:04.100 "name": "BaseBdev2", 00:27:04.100 "aliases": [ 00:27:04.100 "3232f2e5-18d6-4cf7-a615-d610ba481b79" 00:27:04.100 ], 00:27:04.100 "product_name": "Malloc disk", 00:27:04.100 "block_size": 512, 00:27:04.100 "num_blocks": 65536, 00:27:04.100 "uuid": "3232f2e5-18d6-4cf7-a615-d610ba481b79", 00:27:04.100 "assigned_rate_limits": { 00:27:04.100 "rw_ios_per_sec": 0, 00:27:04.100 "rw_mbytes_per_sec": 0, 00:27:04.100 "r_mbytes_per_sec": 0, 00:27:04.101 "w_mbytes_per_sec": 0 00:27:04.101 }, 00:27:04.101 "claimed": true, 00:27:04.101 "claim_type": "exclusive_write", 00:27:04.101 "zoned": false, 00:27:04.101 "supported_io_types": { 00:27:04.101 "read": true, 00:27:04.101 "write": true, 00:27:04.101 "unmap": true, 00:27:04.101 "flush": true, 00:27:04.101 "reset": true, 00:27:04.101 "nvme_admin": false, 00:27:04.101 "nvme_io": false, 00:27:04.101 "nvme_io_md": false, 00:27:04.101 "write_zeroes": true, 00:27:04.101 "zcopy": true, 00:27:04.101 "get_zone_info": false, 00:27:04.101 "zone_management": false, 00:27:04.101 "zone_append": false, 00:27:04.101 "compare": false, 00:27:04.101 "compare_and_write": false, 00:27:04.101 "abort": true, 00:27:04.101 "seek_hole": false, 00:27:04.101 "seek_data": false, 00:27:04.101 "copy": true, 00:27:04.101 "nvme_iov_md": false 00:27:04.101 }, 00:27:04.101 "memory_domains": [ 00:27:04.101 { 00:27:04.101 "dma_device_id": "system", 00:27:04.101 "dma_device_type": 1 00:27:04.101 }, 00:27:04.101 { 00:27:04.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.101 "dma_device_type": 2 00:27:04.101 } 00:27:04.101 ], 00:27:04.101 "driver_specific": {} 00:27:04.101 } 00:27:04.101 ] 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.101 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:04.359 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:04.359 "name": "Existed_Raid", 00:27:04.359 "uuid": "2fea4bb7-af34-49ca-9681-218c7f7dca01", 00:27:04.359 "strip_size_kb": 64, 00:27:04.359 "state": "configuring", 00:27:04.359 "raid_level": "raid5f", 00:27:04.359 "superblock": true, 00:27:04.359 "num_base_bdevs": 3, 00:27:04.359 "num_base_bdevs_discovered": 2, 00:27:04.359 "num_base_bdevs_operational": 3, 00:27:04.359 "base_bdevs_list": [ 00:27:04.359 { 00:27:04.359 "name": "BaseBdev1", 00:27:04.359 "uuid": "9f5bf543-9b47-4df5-b327-9c70add67301", 00:27:04.359 "is_configured": true, 00:27:04.360 "data_offset": 2048, 00:27:04.360 "data_size": 63488 00:27:04.360 }, 00:27:04.360 { 00:27:04.360 "name": "BaseBdev2", 00:27:04.360 "uuid": "3232f2e5-18d6-4cf7-a615-d610ba481b79", 00:27:04.360 "is_configured": true, 00:27:04.360 "data_offset": 2048, 00:27:04.360 "data_size": 63488 00:27:04.360 }, 00:27:04.360 { 00:27:04.360 "name": "BaseBdev3", 00:27:04.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.360 "is_configured": false, 00:27:04.360 "data_offset": 0, 00:27:04.360 "data_size": 0 00:27:04.360 } 00:27:04.360 ] 00:27:04.360 }' 00:27:04.360 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:04.360 15:20:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.618 15:20:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:04.876 [2024-07-23 15:21:00.085153] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:04.876 [2024-07-23 15:21:00.085489] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:27:04.876 [2024-07-23 15:21:00.085510] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:04.876 [2024-07-23 15:21:00.085606] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:27:04.876 [2024-07-23 15:21:00.086391] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:27:04.876 [2024-07-23 15:21:00.086419] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:27:04.876 BaseBdev3 00:27:04.876 [2024-07-23 15:21:00.086542] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:04.876 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:27:04.876 15:21:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:27:04.876 15:21:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:04.876 15:21:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:04.877 15:21:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:04.877 15:21:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:04.877 15:21:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:05.143 15:21:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:05.143 [ 00:27:05.143 { 00:27:05.143 "name": "BaseBdev3", 00:27:05.143 "aliases": [ 00:27:05.143 "82c7f887-f470-4b06-bd93-2dc93cdd1e44" 00:27:05.143 ], 00:27:05.143 "product_name": "Malloc disk", 00:27:05.143 "block_size": 512, 00:27:05.143 "num_blocks": 65536, 00:27:05.143 "uuid": "82c7f887-f470-4b06-bd93-2dc93cdd1e44", 00:27:05.143 "assigned_rate_limits": { 00:27:05.143 "rw_ios_per_sec": 0, 00:27:05.143 "rw_mbytes_per_sec": 0, 00:27:05.143 "r_mbytes_per_sec": 0, 00:27:05.143 "w_mbytes_per_sec": 0 00:27:05.143 }, 00:27:05.143 "claimed": true, 00:27:05.143 "claim_type": "exclusive_write", 00:27:05.143 "zoned": false, 00:27:05.143 "supported_io_types": { 00:27:05.143 "read": true, 00:27:05.143 "write": true, 00:27:05.143 "unmap": true, 00:27:05.143 "flush": true, 00:27:05.143 "reset": true, 00:27:05.143 "nvme_admin": false, 00:27:05.143 "nvme_io": false, 00:27:05.143 "nvme_io_md": false, 00:27:05.143 "write_zeroes": true, 00:27:05.143 "zcopy": true, 00:27:05.143 "get_zone_info": false, 00:27:05.143 "zone_management": false, 00:27:05.143 "zone_append": false, 00:27:05.143 "compare": false, 00:27:05.143 "compare_and_write": false, 00:27:05.143 "abort": true, 00:27:05.143 "seek_hole": false, 00:27:05.143 "seek_data": false, 00:27:05.143 "copy": true, 00:27:05.143 "nvme_iov_md": false 00:27:05.143 }, 00:27:05.143 "memory_domains": [ 00:27:05.143 { 00:27:05.143 "dma_device_id": "system", 00:27:05.144 "dma_device_type": 1 00:27:05.144 }, 00:27:05.144 { 00:27:05.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.144 "dma_device_type": 2 00:27:05.144 } 00:27:05.144 ], 00:27:05.144 "driver_specific": {} 00:27:05.144 } 00:27:05.144 ] 00:27:05.144 15:21:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:05.144 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:05.144 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:05.144 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:27:05.144 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:05.144 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:05.144 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:05.144 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:05.144 15:21:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:05.144 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:05.144 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:05.144 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:05.144 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:05.144 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:05.144 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.403 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:05.403 "name": "Existed_Raid", 00:27:05.403 "uuid": "2fea4bb7-af34-49ca-9681-218c7f7dca01", 00:27:05.403 "strip_size_kb": 64, 00:27:05.403 "state": "online", 00:27:05.403 "raid_level": "raid5f", 00:27:05.403 "superblock": true, 00:27:05.403 "num_base_bdevs": 3, 00:27:05.403 "num_base_bdevs_discovered": 3, 00:27:05.403 "num_base_bdevs_operational": 3, 00:27:05.403 "base_bdevs_list": [ 00:27:05.403 { 00:27:05.403 "name": "BaseBdev1", 00:27:05.403 "uuid": "9f5bf543-9b47-4df5-b327-9c70add67301", 00:27:05.403 "is_configured": true, 00:27:05.403 "data_offset": 2048, 00:27:05.403 "data_size": 63488 00:27:05.403 }, 00:27:05.403 { 00:27:05.403 "name": "BaseBdev2", 00:27:05.403 "uuid": "3232f2e5-18d6-4cf7-a615-d610ba481b79", 00:27:05.403 "is_configured": true, 00:27:05.403 "data_offset": 2048, 00:27:05.403 "data_size": 63488 00:27:05.403 }, 00:27:05.403 { 00:27:05.403 "name": "BaseBdev3", 00:27:05.403 "uuid": "82c7f887-f470-4b06-bd93-2dc93cdd1e44", 00:27:05.403 "is_configured": true, 00:27:05.403 "data_offset": 2048, 00:27:05.403 "data_size": 63488 00:27:05.403 } 00:27:05.403 ] 00:27:05.403 }' 00:27:05.403 15:21:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:05.403 15:21:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.660 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:05.660 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:05.660 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:05.660 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:05.660 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:05.660 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:27:05.660 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:05.660 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:05.918 [2024-07-23 15:21:01.173695] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:05.918 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:05.918 
"name": "Existed_Raid", 00:27:05.918 "aliases": [ 00:27:05.918 "2fea4bb7-af34-49ca-9681-218c7f7dca01" 00:27:05.918 ], 00:27:05.918 "product_name": "Raid Volume", 00:27:05.918 "block_size": 512, 00:27:05.918 "num_blocks": 126976, 00:27:05.918 "uuid": "2fea4bb7-af34-49ca-9681-218c7f7dca01", 00:27:05.918 "assigned_rate_limits": { 00:27:05.918 "rw_ios_per_sec": 0, 00:27:05.918 "rw_mbytes_per_sec": 0, 00:27:05.918 "r_mbytes_per_sec": 0, 00:27:05.918 "w_mbytes_per_sec": 0 00:27:05.918 }, 00:27:05.918 "claimed": false, 00:27:05.918 "zoned": false, 00:27:05.918 "supported_io_types": { 00:27:05.918 "read": true, 00:27:05.918 "write": true, 00:27:05.918 "unmap": false, 00:27:05.918 "flush": false, 00:27:05.918 "reset": true, 00:27:05.918 "nvme_admin": false, 00:27:05.918 "nvme_io": false, 00:27:05.918 "nvme_io_md": false, 00:27:05.918 "write_zeroes": true, 00:27:05.918 "zcopy": false, 00:27:05.918 "get_zone_info": false, 00:27:05.918 "zone_management": false, 00:27:05.918 "zone_append": false, 00:27:05.918 "compare": false, 00:27:05.918 "compare_and_write": false, 00:27:05.918 "abort": false, 00:27:05.918 "seek_hole": false, 00:27:05.918 "seek_data": false, 00:27:05.918 "copy": false, 00:27:05.918 "nvme_iov_md": false 00:27:05.918 }, 00:27:05.918 "driver_specific": { 00:27:05.918 "raid": { 00:27:05.918 "uuid": "2fea4bb7-af34-49ca-9681-218c7f7dca01", 00:27:05.918 "strip_size_kb": 64, 00:27:05.918 "state": "online", 00:27:05.918 "raid_level": "raid5f", 00:27:05.918 "superblock": true, 00:27:05.918 "num_base_bdevs": 3, 00:27:05.918 "num_base_bdevs_discovered": 3, 00:27:05.918 "num_base_bdevs_operational": 3, 00:27:05.918 "base_bdevs_list": [ 00:27:05.918 { 00:27:05.918 "name": "BaseBdev1", 00:27:05.918 "uuid": "9f5bf543-9b47-4df5-b327-9c70add67301", 00:27:05.918 "is_configured": true, 00:27:05.918 "data_offset": 2048, 00:27:05.918 "data_size": 63488 00:27:05.918 }, 00:27:05.918 { 00:27:05.918 "name": "BaseBdev2", 00:27:05.918 "uuid": "3232f2e5-18d6-4cf7-a615-d610ba481b79", 00:27:05.918 "is_configured": true, 00:27:05.918 "data_offset": 2048, 00:27:05.918 "data_size": 63488 00:27:05.918 }, 00:27:05.918 { 00:27:05.918 "name": "BaseBdev3", 00:27:05.918 "uuid": "82c7f887-f470-4b06-bd93-2dc93cdd1e44", 00:27:05.918 "is_configured": true, 00:27:05.918 "data_offset": 2048, 00:27:05.918 "data_size": 63488 00:27:05.918 } 00:27:05.918 ] 00:27:05.918 } 00:27:05.918 } 00:27:05.918 }' 00:27:05.918 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:05.918 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:05.918 BaseBdev2 00:27:05.918 BaseBdev3' 00:27:05.918 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:05.918 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:05.918 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:06.177 "name": "BaseBdev1", 00:27:06.177 "aliases": [ 00:27:06.177 "9f5bf543-9b47-4df5-b327-9c70add67301" 00:27:06.177 ], 00:27:06.177 "product_name": "Malloc disk", 00:27:06.177 "block_size": 512, 00:27:06.177 "num_blocks": 65536, 00:27:06.177 "uuid": 
"9f5bf543-9b47-4df5-b327-9c70add67301", 00:27:06.177 "assigned_rate_limits": { 00:27:06.177 "rw_ios_per_sec": 0, 00:27:06.177 "rw_mbytes_per_sec": 0, 00:27:06.177 "r_mbytes_per_sec": 0, 00:27:06.177 "w_mbytes_per_sec": 0 00:27:06.177 }, 00:27:06.177 "claimed": true, 00:27:06.177 "claim_type": "exclusive_write", 00:27:06.177 "zoned": false, 00:27:06.177 "supported_io_types": { 00:27:06.177 "read": true, 00:27:06.177 "write": true, 00:27:06.177 "unmap": true, 00:27:06.177 "flush": true, 00:27:06.177 "reset": true, 00:27:06.177 "nvme_admin": false, 00:27:06.177 "nvme_io": false, 00:27:06.177 "nvme_io_md": false, 00:27:06.177 "write_zeroes": true, 00:27:06.177 "zcopy": true, 00:27:06.177 "get_zone_info": false, 00:27:06.177 "zone_management": false, 00:27:06.177 "zone_append": false, 00:27:06.177 "compare": false, 00:27:06.177 "compare_and_write": false, 00:27:06.177 "abort": true, 00:27:06.177 "seek_hole": false, 00:27:06.177 "seek_data": false, 00:27:06.177 "copy": true, 00:27:06.177 "nvme_iov_md": false 00:27:06.177 }, 00:27:06.177 "memory_domains": [ 00:27:06.177 { 00:27:06.177 "dma_device_id": "system", 00:27:06.177 "dma_device_type": 1 00:27:06.177 }, 00:27:06.177 { 00:27:06.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.177 "dma_device_type": 2 00:27:06.177 } 00:27:06.177 ], 00:27:06.177 "driver_specific": {} 00:27:06.177 }' 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:06.177 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:06.436 "name": "BaseBdev2", 00:27:06.436 "aliases": [ 00:27:06.436 "3232f2e5-18d6-4cf7-a615-d610ba481b79" 00:27:06.436 ], 00:27:06.436 "product_name": "Malloc disk", 00:27:06.436 "block_size": 512, 00:27:06.436 "num_blocks": 65536, 00:27:06.436 "uuid": "3232f2e5-18d6-4cf7-a615-d610ba481b79", 00:27:06.436 "assigned_rate_limits": { 00:27:06.436 "rw_ios_per_sec": 0, 
00:27:06.436 "rw_mbytes_per_sec": 0, 00:27:06.436 "r_mbytes_per_sec": 0, 00:27:06.436 "w_mbytes_per_sec": 0 00:27:06.436 }, 00:27:06.436 "claimed": true, 00:27:06.436 "claim_type": "exclusive_write", 00:27:06.436 "zoned": false, 00:27:06.436 "supported_io_types": { 00:27:06.436 "read": true, 00:27:06.436 "write": true, 00:27:06.436 "unmap": true, 00:27:06.436 "flush": true, 00:27:06.436 "reset": true, 00:27:06.436 "nvme_admin": false, 00:27:06.436 "nvme_io": false, 00:27:06.436 "nvme_io_md": false, 00:27:06.436 "write_zeroes": true, 00:27:06.436 "zcopy": true, 00:27:06.436 "get_zone_info": false, 00:27:06.436 "zone_management": false, 00:27:06.436 "zone_append": false, 00:27:06.436 "compare": false, 00:27:06.436 "compare_and_write": false, 00:27:06.436 "abort": true, 00:27:06.436 "seek_hole": false, 00:27:06.436 "seek_data": false, 00:27:06.436 "copy": true, 00:27:06.436 "nvme_iov_md": false 00:27:06.436 }, 00:27:06.436 "memory_domains": [ 00:27:06.436 { 00:27:06.436 "dma_device_id": "system", 00:27:06.436 "dma_device_type": 1 00:27:06.436 }, 00:27:06.436 { 00:27:06.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.436 "dma_device_type": 2 00:27:06.436 } 00:27:06.436 ], 00:27:06.436 "driver_specific": {} 00:27:06.436 }' 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:06.436 15:21:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:27:06.695 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:06.695 "name": "BaseBdev3", 00:27:06.695 "aliases": [ 00:27:06.696 "82c7f887-f470-4b06-bd93-2dc93cdd1e44" 00:27:06.696 ], 00:27:06.696 "product_name": "Malloc disk", 00:27:06.696 "block_size": 512, 00:27:06.696 "num_blocks": 65536, 00:27:06.696 "uuid": "82c7f887-f470-4b06-bd93-2dc93cdd1e44", 00:27:06.696 "assigned_rate_limits": { 00:27:06.696 "rw_ios_per_sec": 0, 00:27:06.696 "rw_mbytes_per_sec": 0, 00:27:06.696 "r_mbytes_per_sec": 0, 00:27:06.696 "w_mbytes_per_sec": 0 00:27:06.696 
}, 00:27:06.696 "claimed": true, 00:27:06.696 "claim_type": "exclusive_write", 00:27:06.696 "zoned": false, 00:27:06.696 "supported_io_types": { 00:27:06.696 "read": true, 00:27:06.696 "write": true, 00:27:06.696 "unmap": true, 00:27:06.696 "flush": true, 00:27:06.696 "reset": true, 00:27:06.696 "nvme_admin": false, 00:27:06.696 "nvme_io": false, 00:27:06.696 "nvme_io_md": false, 00:27:06.696 "write_zeroes": true, 00:27:06.696 "zcopy": true, 00:27:06.696 "get_zone_info": false, 00:27:06.696 "zone_management": false, 00:27:06.696 "zone_append": false, 00:27:06.696 "compare": false, 00:27:06.696 "compare_and_write": false, 00:27:06.696 "abort": true, 00:27:06.696 "seek_hole": false, 00:27:06.696 "seek_data": false, 00:27:06.696 "copy": true, 00:27:06.696 "nvme_iov_md": false 00:27:06.696 }, 00:27:06.696 "memory_domains": [ 00:27:06.696 { 00:27:06.696 "dma_device_id": "system", 00:27:06.696 "dma_device_type": 1 00:27:06.696 }, 00:27:06.696 { 00:27:06.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.696 "dma_device_type": 2 00:27:06.696 } 00:27:06.696 ], 00:27:06.696 "driver_specific": {} 00:27:06.696 }' 00:27:06.696 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:06.954 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:06.954 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:06.954 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:06.954 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:06.954 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:06.954 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:06.954 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:06.954 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:06.954 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:06.954 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:06.954 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:06.954 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:06.954 [2024-07-23 15:21:02.373836] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:07.213 
15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.213 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:07.472 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:07.472 "name": "Existed_Raid", 00:27:07.472 "uuid": "2fea4bb7-af34-49ca-9681-218c7f7dca01", 00:27:07.472 "strip_size_kb": 64, 00:27:07.472 "state": "online", 00:27:07.472 "raid_level": "raid5f", 00:27:07.472 "superblock": true, 00:27:07.472 "num_base_bdevs": 3, 00:27:07.472 "num_base_bdevs_discovered": 2, 00:27:07.472 "num_base_bdevs_operational": 2, 00:27:07.472 "base_bdevs_list": [ 00:27:07.472 { 00:27:07.472 "name": null, 00:27:07.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.472 "is_configured": false, 00:27:07.472 "data_offset": 2048, 00:27:07.472 "data_size": 63488 00:27:07.472 }, 00:27:07.472 { 00:27:07.472 "name": "BaseBdev2", 00:27:07.472 "uuid": "3232f2e5-18d6-4cf7-a615-d610ba481b79", 00:27:07.472 "is_configured": true, 00:27:07.472 "data_offset": 2048, 00:27:07.472 "data_size": 63488 00:27:07.472 }, 00:27:07.472 { 00:27:07.472 "name": "BaseBdev3", 00:27:07.472 "uuid": "82c7f887-f470-4b06-bd93-2dc93cdd1e44", 00:27:07.472 "is_configured": true, 00:27:07.472 "data_offset": 2048, 00:27:07.472 "data_size": 63488 00:27:07.472 } 00:27:07.472 ] 00:27:07.472 }' 00:27:07.472 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:07.472 15:21:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.731 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:27:07.731 15:21:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:07.731 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:07.731 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.990 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:07.990 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:07.990 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:08.247 [2024-07-23 15:21:03.486574] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:08.247 [2024-07-23 15:21:03.486729] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:08.247 [2024-07-23 15:21:03.498984] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:08.247 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:08.247 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:08.247 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.247 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:08.505 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:08.505 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:08.505 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:08.505 [2024-07-23 15:21:03.851171] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:08.505 [2024-07-23 15:21:03.851241] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:27:08.505 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:08.505 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:08.505 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.505 15:21:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:08.763 15:21:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:08.763 15:21:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:08.763 15:21:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:27:08.763 15:21:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:27:08.763 15:21:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:08.763 15:21:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:09.022 BaseBdev2 00:27:09.022 15:21:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:27:09.022 15:21:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:27:09.022 15:21:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:09.022 15:21:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:09.022 15:21:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
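For reference, the removal checks traced above boil down to two RPCs: delete a claimed base bdev, then re-read the raid bdev and inspect its state and num_base_bdevs_discovered. A minimal sketch of that pattern (script path, socket, and bdev names taken from the trace; not a verbatim excerpt of the test):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Remove a malloc base bdev that the raid5f volume has claimed ...
    "$rpc" -s "$sock" bdev_malloc_delete BaseBdev2
    # ... then re-query the raid bdev; with a redundant level (raid5f) the volume is expected to stay online.
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
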
00:27:09.022 15:21:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:09.022 15:21:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:09.281 15:21:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:09.281 [ 00:27:09.281 { 00:27:09.281 "name": "BaseBdev2", 00:27:09.281 "aliases": [ 00:27:09.281 "131119a4-eaab-4fe8-9f39-6ba20109e683" 00:27:09.281 ], 00:27:09.281 "product_name": "Malloc disk", 00:27:09.281 "block_size": 512, 00:27:09.281 "num_blocks": 65536, 00:27:09.281 "uuid": "131119a4-eaab-4fe8-9f39-6ba20109e683", 00:27:09.281 "assigned_rate_limits": { 00:27:09.281 "rw_ios_per_sec": 0, 00:27:09.281 "rw_mbytes_per_sec": 0, 00:27:09.281 "r_mbytes_per_sec": 0, 00:27:09.281 "w_mbytes_per_sec": 0 00:27:09.281 }, 00:27:09.281 "claimed": false, 00:27:09.281 "zoned": false, 00:27:09.281 "supported_io_types": { 00:27:09.281 "read": true, 00:27:09.281 "write": true, 00:27:09.281 "unmap": true, 00:27:09.281 "flush": true, 00:27:09.281 "reset": true, 00:27:09.281 "nvme_admin": false, 00:27:09.281 "nvme_io": false, 00:27:09.281 "nvme_io_md": false, 00:27:09.281 "write_zeroes": true, 00:27:09.281 "zcopy": true, 00:27:09.281 "get_zone_info": false, 00:27:09.281 "zone_management": false, 00:27:09.281 "zone_append": false, 00:27:09.281 "compare": false, 00:27:09.281 "compare_and_write": false, 00:27:09.281 "abort": true, 00:27:09.281 "seek_hole": false, 00:27:09.281 "seek_data": false, 00:27:09.281 "copy": true, 00:27:09.281 "nvme_iov_md": false 00:27:09.281 }, 00:27:09.281 "memory_domains": [ 00:27:09.281 { 00:27:09.281 "dma_device_id": "system", 00:27:09.281 "dma_device_type": 1 00:27:09.281 }, 00:27:09.281 { 00:27:09.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.281 "dma_device_type": 2 00:27:09.281 } 00:27:09.281 ], 00:27:09.281 "driver_specific": {} 00:27:09.281 } 00:27:09.281 ] 00:27:09.281 15:21:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:09.281 15:21:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:09.281 15:21:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:09.281 15:21:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:09.539 BaseBdev3 00:27:09.539 15:21:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:27:09.539 15:21:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:27:09.539 15:21:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:09.539 15:21:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:09.539 15:21:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:09.539 15:21:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:09.539 15:21:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:09.797 15:21:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:10.055 [ 00:27:10.055 { 00:27:10.055 "name": "BaseBdev3", 00:27:10.055 "aliases": [ 00:27:10.055 "fca2b2a4-486d-46e2-8897-6dbd79f10e17" 00:27:10.055 ], 00:27:10.055 "product_name": "Malloc disk", 00:27:10.055 "block_size": 512, 00:27:10.055 "num_blocks": 65536, 00:27:10.055 "uuid": "fca2b2a4-486d-46e2-8897-6dbd79f10e17", 00:27:10.055 "assigned_rate_limits": { 00:27:10.055 "rw_ios_per_sec": 0, 00:27:10.055 "rw_mbytes_per_sec": 0, 00:27:10.055 "r_mbytes_per_sec": 0, 00:27:10.055 "w_mbytes_per_sec": 0 00:27:10.055 }, 00:27:10.055 "claimed": false, 00:27:10.055 "zoned": false, 00:27:10.055 "supported_io_types": { 00:27:10.055 "read": true, 00:27:10.055 "write": true, 00:27:10.055 "unmap": true, 00:27:10.055 "flush": true, 00:27:10.055 "reset": true, 00:27:10.055 "nvme_admin": false, 00:27:10.055 "nvme_io": false, 00:27:10.055 "nvme_io_md": false, 00:27:10.055 "write_zeroes": true, 00:27:10.055 "zcopy": true, 00:27:10.055 "get_zone_info": false, 00:27:10.055 "zone_management": false, 00:27:10.055 "zone_append": false, 00:27:10.055 "compare": false, 00:27:10.055 "compare_and_write": false, 00:27:10.055 "abort": true, 00:27:10.055 "seek_hole": false, 00:27:10.055 "seek_data": false, 00:27:10.055 "copy": true, 00:27:10.055 "nvme_iov_md": false 00:27:10.055 }, 00:27:10.055 "memory_domains": [ 00:27:10.055 { 00:27:10.055 "dma_device_id": "system", 00:27:10.055 "dma_device_type": 1 00:27:10.055 }, 00:27:10.055 { 00:27:10.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:10.055 "dma_device_type": 2 00:27:10.055 } 00:27:10.055 ], 00:27:10.055 "driver_specific": {} 00:27:10.055 } 00:27:10.055 ] 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:27:10.055 [2024-07-23 15:21:05.410862] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:10.055 [2024-07-23 15:21:05.410923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:10.055 [2024-07-23 15:21:05.410964] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:10.055 [2024-07-23 15:21:05.413192] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:10.055 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.313 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:10.313 "name": "Existed_Raid", 00:27:10.313 "uuid": "18428c4e-5c08-4aaa-a6ae-8401d5fde48f", 00:27:10.313 "strip_size_kb": 64, 00:27:10.313 "state": "configuring", 00:27:10.313 "raid_level": "raid5f", 00:27:10.313 "superblock": true, 00:27:10.313 "num_base_bdevs": 3, 00:27:10.313 "num_base_bdevs_discovered": 2, 00:27:10.313 "num_base_bdevs_operational": 3, 00:27:10.313 "base_bdevs_list": [ 00:27:10.313 { 00:27:10.313 "name": "BaseBdev1", 00:27:10.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.313 "is_configured": false, 00:27:10.313 "data_offset": 0, 00:27:10.313 "data_size": 0 00:27:10.313 }, 00:27:10.313 { 00:27:10.313 "name": "BaseBdev2", 00:27:10.313 "uuid": "131119a4-eaab-4fe8-9f39-6ba20109e683", 00:27:10.313 "is_configured": true, 00:27:10.313 "data_offset": 2048, 00:27:10.313 "data_size": 63488 00:27:10.313 }, 00:27:10.313 { 00:27:10.313 "name": "BaseBdev3", 00:27:10.313 "uuid": "fca2b2a4-486d-46e2-8897-6dbd79f10e17", 00:27:10.313 "is_configured": true, 00:27:10.313 "data_offset": 2048, 00:27:10.313 "data_size": 63488 00:27:10.313 } 00:27:10.313 ] 00:27:10.313 }' 00:27:10.313 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:10.313 15:21:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:10.572 15:21:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:27:10.831 [2024-07-23 15:21:06.119011] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:10.831 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:27:10.831 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:10.831 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:10.831 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:10.831 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:10.831 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:10.831 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:27:10.831 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:10.831 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:10.831 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:10.831 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.831 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:11.092 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:11.092 "name": "Existed_Raid", 00:27:11.092 "uuid": "18428c4e-5c08-4aaa-a6ae-8401d5fde48f", 00:27:11.092 "strip_size_kb": 64, 00:27:11.092 "state": "configuring", 00:27:11.092 "raid_level": "raid5f", 00:27:11.092 "superblock": true, 00:27:11.092 "num_base_bdevs": 3, 00:27:11.092 "num_base_bdevs_discovered": 1, 00:27:11.092 "num_base_bdevs_operational": 3, 00:27:11.092 "base_bdevs_list": [ 00:27:11.092 { 00:27:11.092 "name": "BaseBdev1", 00:27:11.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.092 "is_configured": false, 00:27:11.092 "data_offset": 0, 00:27:11.092 "data_size": 0 00:27:11.092 }, 00:27:11.092 { 00:27:11.092 "name": null, 00:27:11.092 "uuid": "131119a4-eaab-4fe8-9f39-6ba20109e683", 00:27:11.092 "is_configured": false, 00:27:11.092 "data_offset": 2048, 00:27:11.092 "data_size": 63488 00:27:11.092 }, 00:27:11.092 { 00:27:11.092 "name": "BaseBdev3", 00:27:11.092 "uuid": "fca2b2a4-486d-46e2-8897-6dbd79f10e17", 00:27:11.092 "is_configured": true, 00:27:11.092 "data_offset": 2048, 00:27:11.092 "data_size": 63488 00:27:11.092 } 00:27:11.092 ] 00:27:11.092 }' 00:27:11.092 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:11.092 15:21:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.351 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.351 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:11.610 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:27:11.610 15:21:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:11.869 [2024-07-23 15:21:07.186452] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:11.869 BaseBdev1 00:27:11.869 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:27:11.869 15:21:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:27:11.869 15:21:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:11.869 15:21:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:11.869 15:21:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:11.869 15:21:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:11.869 15:21:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:12.138 15:21:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:12.138 [ 00:27:12.138 { 00:27:12.138 "name": "BaseBdev1", 00:27:12.138 "aliases": [ 00:27:12.138 "55a0607a-9e82-4222-8c0e-bec8cab8b5b7" 00:27:12.138 ], 00:27:12.138 "product_name": "Malloc disk", 00:27:12.138 "block_size": 512, 00:27:12.138 "num_blocks": 65536, 00:27:12.138 "uuid": "55a0607a-9e82-4222-8c0e-bec8cab8b5b7", 00:27:12.138 "assigned_rate_limits": { 00:27:12.138 "rw_ios_per_sec": 0, 00:27:12.138 "rw_mbytes_per_sec": 0, 00:27:12.138 "r_mbytes_per_sec": 0, 00:27:12.138 "w_mbytes_per_sec": 0 00:27:12.138 }, 00:27:12.138 "claimed": true, 00:27:12.138 "claim_type": "exclusive_write", 00:27:12.138 "zoned": false, 00:27:12.138 "supported_io_types": { 00:27:12.138 "read": true, 00:27:12.138 "write": true, 00:27:12.138 "unmap": true, 00:27:12.138 "flush": true, 00:27:12.138 "reset": true, 00:27:12.138 "nvme_admin": false, 00:27:12.138 "nvme_io": false, 00:27:12.138 "nvme_io_md": false, 00:27:12.138 "write_zeroes": true, 00:27:12.138 "zcopy": true, 00:27:12.138 "get_zone_info": false, 00:27:12.138 "zone_management": false, 00:27:12.138 "zone_append": false, 00:27:12.138 "compare": false, 00:27:12.138 "compare_and_write": false, 00:27:12.138 "abort": true, 00:27:12.138 "seek_hole": false, 00:27:12.138 "seek_data": false, 00:27:12.138 "copy": true, 00:27:12.138 "nvme_iov_md": false 00:27:12.138 }, 00:27:12.138 "memory_domains": [ 00:27:12.138 { 00:27:12.138 "dma_device_id": "system", 00:27:12.138 "dma_device_type": 1 00:27:12.138 }, 00:27:12.138 { 00:27:12.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:12.138 "dma_device_type": 2 00:27:12.138 } 00:27:12.138 ], 00:27:12.138 "driver_specific": {} 00:27:12.138 } 00:27:12.138 ] 00:27:12.138 15:21:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:27:12.138 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:27:12.138 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:12.138 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:12.138 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:12.138 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:12.138 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:12.138 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:12.138 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:12.138 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:12.138 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:12.138 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:27:12.138 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.396 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:12.396 "name": "Existed_Raid", 00:27:12.396 "uuid": "18428c4e-5c08-4aaa-a6ae-8401d5fde48f", 00:27:12.396 "strip_size_kb": 64, 00:27:12.396 "state": "configuring", 00:27:12.396 "raid_level": "raid5f", 00:27:12.396 "superblock": true, 00:27:12.396 "num_base_bdevs": 3, 00:27:12.396 "num_base_bdevs_discovered": 2, 00:27:12.396 "num_base_bdevs_operational": 3, 00:27:12.396 "base_bdevs_list": [ 00:27:12.396 { 00:27:12.396 "name": "BaseBdev1", 00:27:12.396 "uuid": "55a0607a-9e82-4222-8c0e-bec8cab8b5b7", 00:27:12.396 "is_configured": true, 00:27:12.396 "data_offset": 2048, 00:27:12.396 "data_size": 63488 00:27:12.396 }, 00:27:12.396 { 00:27:12.396 "name": null, 00:27:12.396 "uuid": "131119a4-eaab-4fe8-9f39-6ba20109e683", 00:27:12.396 "is_configured": false, 00:27:12.396 "data_offset": 2048, 00:27:12.396 "data_size": 63488 00:27:12.396 }, 00:27:12.396 { 00:27:12.396 "name": "BaseBdev3", 00:27:12.396 "uuid": "fca2b2a4-486d-46e2-8897-6dbd79f10e17", 00:27:12.396 "is_configured": true, 00:27:12.396 "data_offset": 2048, 00:27:12.396 "data_size": 63488 00:27:12.396 } 00:27:12.396 ] 00:27:12.396 }' 00:27:12.396 15:21:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:12.396 15:21:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:12.963 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.963 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:12.963 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:27:12.963 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:27:13.251 [2024-07-23 15:21:08.514903] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:13.251 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:27:13.251 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:13.251 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:13.251 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:13.251 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:13.251 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:13.251 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:13.251 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:13.251 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:13.251 15:21:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:27:13.251 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:13.251 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.510 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:13.510 "name": "Existed_Raid", 00:27:13.510 "uuid": "18428c4e-5c08-4aaa-a6ae-8401d5fde48f", 00:27:13.510 "strip_size_kb": 64, 00:27:13.510 "state": "configuring", 00:27:13.510 "raid_level": "raid5f", 00:27:13.510 "superblock": true, 00:27:13.510 "num_base_bdevs": 3, 00:27:13.510 "num_base_bdevs_discovered": 1, 00:27:13.510 "num_base_bdevs_operational": 3, 00:27:13.510 "base_bdevs_list": [ 00:27:13.510 { 00:27:13.510 "name": "BaseBdev1", 00:27:13.510 "uuid": "55a0607a-9e82-4222-8c0e-bec8cab8b5b7", 00:27:13.510 "is_configured": true, 00:27:13.510 "data_offset": 2048, 00:27:13.510 "data_size": 63488 00:27:13.510 }, 00:27:13.510 { 00:27:13.510 "name": null, 00:27:13.510 "uuid": "131119a4-eaab-4fe8-9f39-6ba20109e683", 00:27:13.510 "is_configured": false, 00:27:13.510 "data_offset": 2048, 00:27:13.510 "data_size": 63488 00:27:13.510 }, 00:27:13.510 { 00:27:13.510 "name": null, 00:27:13.510 "uuid": "fca2b2a4-486d-46e2-8897-6dbd79f10e17", 00:27:13.510 "is_configured": false, 00:27:13.510 "data_offset": 2048, 00:27:13.510 "data_size": 63488 00:27:13.510 } 00:27:13.510 ] 00:27:13.510 }' 00:27:13.510 15:21:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:13.510 15:21:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:13.769 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.769 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:14.028 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:27:14.028 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:14.028 [2024-07-23 15:21:09.379130] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:14.028 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:27:14.028 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:14.028 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:14.028 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:14.028 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:14.028 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:14.028 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:14.028 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
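For reference, the assembly and membership changes exercised above use only a handful of RPCs: the raid5f volume is created with a superblock (-s) before all of its members exist, and members are detached and re-attached while the volume reports "configuring". A minimal sketch using the same commands and names as in the trace (not a verbatim excerpt):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev3
    # BaseBdev1 does not exist yet, so the new volume stays in the "configuring" state.
    "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # Detach a member and re-attach it; the volume leaves "configuring" only once all three members are claimed.
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev3
    "$rpc" -s "$sock" bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
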
00:27:14.028 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:14.028 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:14.028 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.028 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:14.287 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:14.287 "name": "Existed_Raid", 00:27:14.287 "uuid": "18428c4e-5c08-4aaa-a6ae-8401d5fde48f", 00:27:14.287 "strip_size_kb": 64, 00:27:14.287 "state": "configuring", 00:27:14.287 "raid_level": "raid5f", 00:27:14.287 "superblock": true, 00:27:14.288 "num_base_bdevs": 3, 00:27:14.288 "num_base_bdevs_discovered": 2, 00:27:14.288 "num_base_bdevs_operational": 3, 00:27:14.288 "base_bdevs_list": [ 00:27:14.288 { 00:27:14.288 "name": "BaseBdev1", 00:27:14.288 "uuid": "55a0607a-9e82-4222-8c0e-bec8cab8b5b7", 00:27:14.288 "is_configured": true, 00:27:14.288 "data_offset": 2048, 00:27:14.288 "data_size": 63488 00:27:14.288 }, 00:27:14.288 { 00:27:14.288 "name": null, 00:27:14.288 "uuid": "131119a4-eaab-4fe8-9f39-6ba20109e683", 00:27:14.288 "is_configured": false, 00:27:14.288 "data_offset": 2048, 00:27:14.288 "data_size": 63488 00:27:14.288 }, 00:27:14.288 { 00:27:14.288 "name": "BaseBdev3", 00:27:14.288 "uuid": "fca2b2a4-486d-46e2-8897-6dbd79f10e17", 00:27:14.288 "is_configured": true, 00:27:14.288 "data_offset": 2048, 00:27:14.288 "data_size": 63488 00:27:14.288 } 00:27:14.288 ] 00:27:14.288 }' 00:27:14.288 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:14.288 15:21:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:14.546 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.546 15:21:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:14.804 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:27:14.804 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:15.063 [2024-07-23 15:21:10.255388] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:15.063 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:27:15.063 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:15.063 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:15.063 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:15.063 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:15.063 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:15.063 15:21:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:15.063 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:15.063 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:15.063 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:15.063 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:15.063 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.321 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:15.321 "name": "Existed_Raid", 00:27:15.321 "uuid": "18428c4e-5c08-4aaa-a6ae-8401d5fde48f", 00:27:15.321 "strip_size_kb": 64, 00:27:15.321 "state": "configuring", 00:27:15.321 "raid_level": "raid5f", 00:27:15.321 "superblock": true, 00:27:15.321 "num_base_bdevs": 3, 00:27:15.321 "num_base_bdevs_discovered": 1, 00:27:15.321 "num_base_bdevs_operational": 3, 00:27:15.321 "base_bdevs_list": [ 00:27:15.321 { 00:27:15.321 "name": null, 00:27:15.321 "uuid": "55a0607a-9e82-4222-8c0e-bec8cab8b5b7", 00:27:15.321 "is_configured": false, 00:27:15.321 "data_offset": 2048, 00:27:15.321 "data_size": 63488 00:27:15.321 }, 00:27:15.321 { 00:27:15.321 "name": null, 00:27:15.321 "uuid": "131119a4-eaab-4fe8-9f39-6ba20109e683", 00:27:15.321 "is_configured": false, 00:27:15.321 "data_offset": 2048, 00:27:15.321 "data_size": 63488 00:27:15.321 }, 00:27:15.321 { 00:27:15.321 "name": "BaseBdev3", 00:27:15.321 "uuid": "fca2b2a4-486d-46e2-8897-6dbd79f10e17", 00:27:15.321 "is_configured": true, 00:27:15.321 "data_offset": 2048, 00:27:15.321 "data_size": 63488 00:27:15.321 } 00:27:15.321 ] 00:27:15.321 }' 00:27:15.321 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:15.321 15:21:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:15.579 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.580 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:15.580 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:27:15.580 15:21:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:15.838 [2024-07-23 15:21:11.151818] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:15.838 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:27:15.838 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:15.838 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:15.838 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:15.838 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:15.838 
15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:15.838 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:15.838 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:15.838 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:15.838 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:15.838 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.838 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:16.097 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:16.097 "name": "Existed_Raid", 00:27:16.097 "uuid": "18428c4e-5c08-4aaa-a6ae-8401d5fde48f", 00:27:16.097 "strip_size_kb": 64, 00:27:16.097 "state": "configuring", 00:27:16.097 "raid_level": "raid5f", 00:27:16.097 "superblock": true, 00:27:16.097 "num_base_bdevs": 3, 00:27:16.097 "num_base_bdevs_discovered": 2, 00:27:16.097 "num_base_bdevs_operational": 3, 00:27:16.097 "base_bdevs_list": [ 00:27:16.097 { 00:27:16.097 "name": null, 00:27:16.097 "uuid": "55a0607a-9e82-4222-8c0e-bec8cab8b5b7", 00:27:16.097 "is_configured": false, 00:27:16.097 "data_offset": 2048, 00:27:16.097 "data_size": 63488 00:27:16.097 }, 00:27:16.097 { 00:27:16.097 "name": "BaseBdev2", 00:27:16.097 "uuid": "131119a4-eaab-4fe8-9f39-6ba20109e683", 00:27:16.097 "is_configured": true, 00:27:16.097 "data_offset": 2048, 00:27:16.097 "data_size": 63488 00:27:16.097 }, 00:27:16.097 { 00:27:16.097 "name": "BaseBdev3", 00:27:16.097 "uuid": "fca2b2a4-486d-46e2-8897-6dbd79f10e17", 00:27:16.097 "is_configured": true, 00:27:16.097 "data_offset": 2048, 00:27:16.097 "data_size": 63488 00:27:16.097 } 00:27:16.097 ] 00:27:16.097 }' 00:27:16.097 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:16.097 15:21:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:16.356 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.356 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:16.615 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:27:16.615 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.615 15:21:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:16.874 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 55a0607a-9e82-4222-8c0e-bec8cab8b5b7 00:27:16.874 [2024-07-23 15:21:12.283319] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:16.874 [2024-07-23 15:21:12.283500] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x516000007880 00:27:16.874 [2024-07-23 15:21:12.283519] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:16.874 [2024-07-23 15:21:12.283582] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002460 00:27:16.874 NewBaseBdev 00:27:16.874 [2024-07-23 15:21:12.284181] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007880 00:27:16.874 [2024-07-23 15:21:12.284197] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007880 00:27:16.874 [2024-07-23 15:21:12.284293] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:16.874 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:27:16.874 15:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:27:16.874 15:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:16.874 15:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:27:16.874 15:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:16.874 15:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:16.874 15:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:17.134 15:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:17.393 [ 00:27:17.393 { 00:27:17.393 "name": "NewBaseBdev", 00:27:17.393 "aliases": [ 00:27:17.393 "55a0607a-9e82-4222-8c0e-bec8cab8b5b7" 00:27:17.393 ], 00:27:17.393 "product_name": "Malloc disk", 00:27:17.393 "block_size": 512, 00:27:17.393 "num_blocks": 65536, 00:27:17.393 "uuid": "55a0607a-9e82-4222-8c0e-bec8cab8b5b7", 00:27:17.393 "assigned_rate_limits": { 00:27:17.393 "rw_ios_per_sec": 0, 00:27:17.393 "rw_mbytes_per_sec": 0, 00:27:17.393 "r_mbytes_per_sec": 0, 00:27:17.393 "w_mbytes_per_sec": 0 00:27:17.393 }, 00:27:17.393 "claimed": true, 00:27:17.393 "claim_type": "exclusive_write", 00:27:17.393 "zoned": false, 00:27:17.393 "supported_io_types": { 00:27:17.393 "read": true, 00:27:17.393 "write": true, 00:27:17.393 "unmap": true, 00:27:17.393 "flush": true, 00:27:17.393 "reset": true, 00:27:17.393 "nvme_admin": false, 00:27:17.393 "nvme_io": false, 00:27:17.393 "nvme_io_md": false, 00:27:17.393 "write_zeroes": true, 00:27:17.393 "zcopy": true, 00:27:17.393 "get_zone_info": false, 00:27:17.393 "zone_management": false, 00:27:17.393 "zone_append": false, 00:27:17.393 "compare": false, 00:27:17.393 "compare_and_write": false, 00:27:17.393 "abort": true, 00:27:17.393 "seek_hole": false, 00:27:17.393 "seek_data": false, 00:27:17.393 "copy": true, 00:27:17.393 "nvme_iov_md": false 00:27:17.393 }, 00:27:17.393 "memory_domains": [ 00:27:17.393 { 00:27:17.393 "dma_device_id": "system", 00:27:17.393 "dma_device_type": 1 00:27:17.393 }, 00:27:17.393 { 00:27:17.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:17.393 "dma_device_type": 2 00:27:17.393 } 00:27:17.393 ], 00:27:17.393 "driver_specific": {} 00:27:17.393 } 00:27:17.393 ] 00:27:17.393 15:21:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@905 -- # return 0 00:27:17.393 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:27:17.393 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:17.393 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:17.393 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:17.393 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:17.393 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:17.393 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:17.393 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:17.393 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:17.393 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:17.393 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.393 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:17.652 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:17.652 "name": "Existed_Raid", 00:27:17.652 "uuid": "18428c4e-5c08-4aaa-a6ae-8401d5fde48f", 00:27:17.652 "strip_size_kb": 64, 00:27:17.652 "state": "online", 00:27:17.652 "raid_level": "raid5f", 00:27:17.652 "superblock": true, 00:27:17.652 "num_base_bdevs": 3, 00:27:17.652 "num_base_bdevs_discovered": 3, 00:27:17.652 "num_base_bdevs_operational": 3, 00:27:17.652 "base_bdevs_list": [ 00:27:17.653 { 00:27:17.653 "name": "NewBaseBdev", 00:27:17.653 "uuid": "55a0607a-9e82-4222-8c0e-bec8cab8b5b7", 00:27:17.653 "is_configured": true, 00:27:17.653 "data_offset": 2048, 00:27:17.653 "data_size": 63488 00:27:17.653 }, 00:27:17.653 { 00:27:17.653 "name": "BaseBdev2", 00:27:17.653 "uuid": "131119a4-eaab-4fe8-9f39-6ba20109e683", 00:27:17.653 "is_configured": true, 00:27:17.653 "data_offset": 2048, 00:27:17.653 "data_size": 63488 00:27:17.653 }, 00:27:17.653 { 00:27:17.653 "name": "BaseBdev3", 00:27:17.653 "uuid": "fca2b2a4-486d-46e2-8897-6dbd79f10e17", 00:27:17.653 "is_configured": true, 00:27:17.653 "data_offset": 2048, 00:27:17.653 "data_size": 63488 00:27:17.653 } 00:27:17.653 ] 00:27:17.653 }' 00:27:17.653 15:21:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:17.653 15:21:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:17.912 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:27:17.912 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:17.912 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:17.912 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:17.912 15:21:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:17.912 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:27:17.912 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:17.912 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:18.172 [2024-07-23 15:21:13.343882] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:18.172 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:18.172 "name": "Existed_Raid", 00:27:18.172 "aliases": [ 00:27:18.172 "18428c4e-5c08-4aaa-a6ae-8401d5fde48f" 00:27:18.172 ], 00:27:18.172 "product_name": "Raid Volume", 00:27:18.172 "block_size": 512, 00:27:18.172 "num_blocks": 126976, 00:27:18.172 "uuid": "18428c4e-5c08-4aaa-a6ae-8401d5fde48f", 00:27:18.172 "assigned_rate_limits": { 00:27:18.172 "rw_ios_per_sec": 0, 00:27:18.172 "rw_mbytes_per_sec": 0, 00:27:18.172 "r_mbytes_per_sec": 0, 00:27:18.172 "w_mbytes_per_sec": 0 00:27:18.172 }, 00:27:18.172 "claimed": false, 00:27:18.172 "zoned": false, 00:27:18.172 "supported_io_types": { 00:27:18.172 "read": true, 00:27:18.172 "write": true, 00:27:18.172 "unmap": false, 00:27:18.172 "flush": false, 00:27:18.172 "reset": true, 00:27:18.172 "nvme_admin": false, 00:27:18.172 "nvme_io": false, 00:27:18.172 "nvme_io_md": false, 00:27:18.172 "write_zeroes": true, 00:27:18.172 "zcopy": false, 00:27:18.172 "get_zone_info": false, 00:27:18.172 "zone_management": false, 00:27:18.172 "zone_append": false, 00:27:18.172 "compare": false, 00:27:18.172 "compare_and_write": false, 00:27:18.172 "abort": false, 00:27:18.172 "seek_hole": false, 00:27:18.172 "seek_data": false, 00:27:18.172 "copy": false, 00:27:18.172 "nvme_iov_md": false 00:27:18.172 }, 00:27:18.172 "driver_specific": { 00:27:18.172 "raid": { 00:27:18.172 "uuid": "18428c4e-5c08-4aaa-a6ae-8401d5fde48f", 00:27:18.172 "strip_size_kb": 64, 00:27:18.172 "state": "online", 00:27:18.172 "raid_level": "raid5f", 00:27:18.172 "superblock": true, 00:27:18.172 "num_base_bdevs": 3, 00:27:18.172 "num_base_bdevs_discovered": 3, 00:27:18.172 "num_base_bdevs_operational": 3, 00:27:18.172 "base_bdevs_list": [ 00:27:18.172 { 00:27:18.172 "name": "NewBaseBdev", 00:27:18.172 "uuid": "55a0607a-9e82-4222-8c0e-bec8cab8b5b7", 00:27:18.172 "is_configured": true, 00:27:18.172 "data_offset": 2048, 00:27:18.172 "data_size": 63488 00:27:18.172 }, 00:27:18.172 { 00:27:18.172 "name": "BaseBdev2", 00:27:18.172 "uuid": "131119a4-eaab-4fe8-9f39-6ba20109e683", 00:27:18.172 "is_configured": true, 00:27:18.172 "data_offset": 2048, 00:27:18.172 "data_size": 63488 00:27:18.172 }, 00:27:18.172 { 00:27:18.172 "name": "BaseBdev3", 00:27:18.172 "uuid": "fca2b2a4-486d-46e2-8897-6dbd79f10e17", 00:27:18.172 "is_configured": true, 00:27:18.172 "data_offset": 2048, 00:27:18.172 "data_size": 63488 00:27:18.172 } 00:27:18.172 ] 00:27:18.172 } 00:27:18.172 } 00:27:18.172 }' 00:27:18.172 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:18.172 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:27:18.172 BaseBdev2 00:27:18.172 BaseBdev3' 00:27:18.172 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:27:18.172 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:18.172 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:27:18.172 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:18.172 "name": "NewBaseBdev", 00:27:18.172 "aliases": [ 00:27:18.172 "55a0607a-9e82-4222-8c0e-bec8cab8b5b7" 00:27:18.172 ], 00:27:18.172 "product_name": "Malloc disk", 00:27:18.172 "block_size": 512, 00:27:18.172 "num_blocks": 65536, 00:27:18.172 "uuid": "55a0607a-9e82-4222-8c0e-bec8cab8b5b7", 00:27:18.172 "assigned_rate_limits": { 00:27:18.172 "rw_ios_per_sec": 0, 00:27:18.172 "rw_mbytes_per_sec": 0, 00:27:18.172 "r_mbytes_per_sec": 0, 00:27:18.172 "w_mbytes_per_sec": 0 00:27:18.172 }, 00:27:18.172 "claimed": true, 00:27:18.172 "claim_type": "exclusive_write", 00:27:18.172 "zoned": false, 00:27:18.172 "supported_io_types": { 00:27:18.172 "read": true, 00:27:18.172 "write": true, 00:27:18.172 "unmap": true, 00:27:18.172 "flush": true, 00:27:18.172 "reset": true, 00:27:18.172 "nvme_admin": false, 00:27:18.172 "nvme_io": false, 00:27:18.172 "nvme_io_md": false, 00:27:18.172 "write_zeroes": true, 00:27:18.172 "zcopy": true, 00:27:18.172 "get_zone_info": false, 00:27:18.172 "zone_management": false, 00:27:18.172 "zone_append": false, 00:27:18.172 "compare": false, 00:27:18.172 "compare_and_write": false, 00:27:18.172 "abort": true, 00:27:18.172 "seek_hole": false, 00:27:18.172 "seek_data": false, 00:27:18.172 "copy": true, 00:27:18.172 "nvme_iov_md": false 00:27:18.172 }, 00:27:18.172 "memory_domains": [ 00:27:18.172 { 00:27:18.172 "dma_device_id": "system", 00:27:18.172 "dma_device_type": 1 00:27:18.172 }, 00:27:18.172 { 00:27:18.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.172 "dma_device_type": 2 00:27:18.172 } 00:27:18.172 ], 00:27:18.172 "driver_specific": {} 00:27:18.172 }' 00:27:18.172 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:18.172 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:18.172 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:18.172 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:18.172 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:18.430 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:18.430 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:18.430 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:18.430 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:18.430 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:18.430 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:18.430 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:18.430 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:18.430 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq 
'.[]' 00:27:18.430 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:18.689 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:18.689 "name": "BaseBdev2", 00:27:18.689 "aliases": [ 00:27:18.689 "131119a4-eaab-4fe8-9f39-6ba20109e683" 00:27:18.689 ], 00:27:18.689 "product_name": "Malloc disk", 00:27:18.689 "block_size": 512, 00:27:18.689 "num_blocks": 65536, 00:27:18.689 "uuid": "131119a4-eaab-4fe8-9f39-6ba20109e683", 00:27:18.689 "assigned_rate_limits": { 00:27:18.689 "rw_ios_per_sec": 0, 00:27:18.689 "rw_mbytes_per_sec": 0, 00:27:18.689 "r_mbytes_per_sec": 0, 00:27:18.689 "w_mbytes_per_sec": 0 00:27:18.689 }, 00:27:18.689 "claimed": true, 00:27:18.689 "claim_type": "exclusive_write", 00:27:18.689 "zoned": false, 00:27:18.689 "supported_io_types": { 00:27:18.689 "read": true, 00:27:18.689 "write": true, 00:27:18.689 "unmap": true, 00:27:18.689 "flush": true, 00:27:18.689 "reset": true, 00:27:18.689 "nvme_admin": false, 00:27:18.689 "nvme_io": false, 00:27:18.689 "nvme_io_md": false, 00:27:18.689 "write_zeroes": true, 00:27:18.689 "zcopy": true, 00:27:18.689 "get_zone_info": false, 00:27:18.689 "zone_management": false, 00:27:18.689 "zone_append": false, 00:27:18.689 "compare": false, 00:27:18.689 "compare_and_write": false, 00:27:18.689 "abort": true, 00:27:18.689 "seek_hole": false, 00:27:18.689 "seek_data": false, 00:27:18.689 "copy": true, 00:27:18.689 "nvme_iov_md": false 00:27:18.689 }, 00:27:18.689 "memory_domains": [ 00:27:18.689 { 00:27:18.689 "dma_device_id": "system", 00:27:18.689 "dma_device_type": 1 00:27:18.689 }, 00:27:18.689 { 00:27:18.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.689 "dma_device_type": 2 00:27:18.689 } 00:27:18.689 ], 00:27:18.689 "driver_specific": {} 00:27:18.689 }' 00:27:18.689 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:18.689 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:18.689 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:18.689 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:18.689 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:18.689 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:18.689 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:18.689 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:18.689 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:18.689 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:18.689 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:18.689 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:18.689 15:21:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:18.689 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:27:18.689 15:21:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:18.948 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:18.948 "name": "BaseBdev3", 00:27:18.948 "aliases": [ 00:27:18.948 "fca2b2a4-486d-46e2-8897-6dbd79f10e17" 00:27:18.948 ], 00:27:18.948 "product_name": "Malloc disk", 00:27:18.948 "block_size": 512, 00:27:18.948 "num_blocks": 65536, 00:27:18.948 "uuid": "fca2b2a4-486d-46e2-8897-6dbd79f10e17", 00:27:18.948 "assigned_rate_limits": { 00:27:18.948 "rw_ios_per_sec": 0, 00:27:18.948 "rw_mbytes_per_sec": 0, 00:27:18.948 "r_mbytes_per_sec": 0, 00:27:18.948 "w_mbytes_per_sec": 0 00:27:18.948 }, 00:27:18.948 "claimed": true, 00:27:18.948 "claim_type": "exclusive_write", 00:27:18.948 "zoned": false, 00:27:18.948 "supported_io_types": { 00:27:18.948 "read": true, 00:27:18.948 "write": true, 00:27:18.948 "unmap": true, 00:27:18.948 "flush": true, 00:27:18.948 "reset": true, 00:27:18.948 "nvme_admin": false, 00:27:18.948 "nvme_io": false, 00:27:18.948 "nvme_io_md": false, 00:27:18.948 "write_zeroes": true, 00:27:18.948 "zcopy": true, 00:27:18.948 "get_zone_info": false, 00:27:18.948 "zone_management": false, 00:27:18.948 "zone_append": false, 00:27:18.948 "compare": false, 00:27:18.948 "compare_and_write": false, 00:27:18.948 "abort": true, 00:27:18.948 "seek_hole": false, 00:27:18.948 "seek_data": false, 00:27:18.948 "copy": true, 00:27:18.948 "nvme_iov_md": false 00:27:18.948 }, 00:27:18.948 "memory_domains": [ 00:27:18.948 { 00:27:18.948 "dma_device_id": "system", 00:27:18.948 "dma_device_type": 1 00:27:18.948 }, 00:27:18.948 { 00:27:18.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.948 "dma_device_type": 2 00:27:18.948 } 00:27:18.948 ], 00:27:18.948 "driver_specific": {} 00:27:18.948 }' 00:27:18.948 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:18.948 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:18.948 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:18.948 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:18.948 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:18.948 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:18.948 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:18.948 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:18.948 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:18.948 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:18.948 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:18.948 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:18.948 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:19.207 [2024-07-23 15:21:14.603960] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:19.207 [2024-07-23 15:21:14.604005] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
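For reference, the property verification and teardown in this stretch reduce to plain bdev queries plus a single delete: the assembled Raid Volume is dumped with bdev_get_bdevs and its fields (block_size 512, num_blocks 126976, and a supported_io_types map where unmap/flush/copy are false, unlike the Malloc members) are checked with jq, after which bdev_raid_delete walks the volume from online to offline and destructs it. A minimal sketch (names and socket as in the trace; these jq filters are illustrative, not the test's exact ones):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Dump the assembled raid5f volume and the fields the test asserts on.
    "$rpc" -s "$sock" bdev_get_bdevs -b Existed_Raid | jq '.[0] | {block_size, num_blocks, supported_io_types}'
    # Tear the volume down; the DEBUG lines around this point show the online -> offline -> destruct path.
    "$rpc" -s "$sock" bdev_raid_delete Existed_Raid
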
00:27:19.207 [2024-07-23 15:21:14.604086] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:19.207 [2024-07-23 15:21:14.604340] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:19.207 [2024-07-23 15:21:14.604358] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007880 name Existed_Raid, state offline 00:27:19.207 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 113363 00:27:19.207 15:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 113363 ']' 00:27:19.207 15:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 113363 00:27:19.207 15:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:27:19.207 15:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:19.465 15:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113363 00:27:19.465 killing process with pid 113363 00:27:19.465 15:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:19.465 15:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:19.465 15:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113363' 00:27:19.465 15:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 113363 00:27:19.465 [2024-07-23 15:21:14.664827] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:19.465 15:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 113363 00:27:19.465 [2024-07-23 15:21:14.700027] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:19.724 15:21:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:27:19.724 00:27:19.724 real 0m19.835s 00:27:19.724 user 0m34.815s 00:27:19.724 sys 0m4.509s 00:27:19.724 15:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:19.724 15:21:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:19.724 ************************************ 00:27:19.724 END TEST raid5f_state_function_test_sb 00:27:19.724 ************************************ 00:27:19.724 15:21:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:19.724 15:21:15 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:27:19.724 15:21:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:27:19.724 15:21:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.724 15:21:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:19.724 ************************************ 00:27:19.724 START TEST raid5f_superblock_test 00:27:19.724 ************************************ 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid5f 3 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:27:19.724 15:21:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=114199 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 114199 /var/tmp/spdk-raid.sock 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 114199 ']' 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:19.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:19.724 15:21:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.724 [2024-07-23 15:21:15.088919] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
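For reference, the RPC flow that raid5f_superblock_test drives against this bdev_svc instance can be reproduced by hand; the following is a condensed sketch assembled from the commands traced later in this log (same socket and script paths; the $RPC shorthand is introduced here only for brevity), not additional captured output:

  # app hosting the raid module, as started above
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # one malloc + passthru pair per base bdev (repeated for malloc2/pt2 and malloc3/pt3)
  $RPC bdev_malloc_create 32 512 -b malloc1
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # assemble the raid5f volume with a superblock (-s) and 64 KiB strip size (-z 64)
  $RPC bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s
  # inspect state, then tear down
  $RPC bdev_raid_get_bdevs all
  $RPC bdev_raid_delete raid_bdev1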
00:27:19.724 [2024-07-23 15:21:15.089134] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114199 ] 00:27:19.982 [2024-07-23 15:21:15.239797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.982 [2024-07-23 15:21:15.286069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.982 [2024-07-23 15:21:15.330520] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:20.548 15:21:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:20.548 15:21:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:27:20.548 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:27:20.548 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:20.548 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:27:20.548 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:27:20.548 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:20.548 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:20.548 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:20.548 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:20.548 15:21:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:27:20.807 malloc1 00:27:20.807 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:21.065 [2024-07-23 15:21:16.381493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:21.065 [2024-07-23 15:21:16.381768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.065 [2024-07-23 15:21:16.381859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:27:21.065 [2024-07-23 15:21:16.381960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.065 [2024-07-23 15:21:16.384739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.065 [2024-07-23 15:21:16.384918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:21.065 pt1 00:27:21.065 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:21.065 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:21.065 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:27:21.065 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:27:21.065 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:21.065 15:21:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:21.065 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:21.065 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:21.065 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:27:21.323 malloc2 00:27:21.323 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:21.323 [2024-07-23 15:21:16.751051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:21.323 [2024-07-23 15:21:16.751360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.323 [2024-07-23 15:21:16.751408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:27:21.323 [2024-07-23 15:21:16.751426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.323 [2024-07-23 15:21:16.753991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.323 [2024-07-23 15:21:16.754164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:21.582 pt2 00:27:21.583 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:21.583 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:21.583 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:27:21.583 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:27:21.583 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:21.583 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:21.583 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:21.583 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:21.583 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:27:21.583 malloc3 00:27:21.583 15:21:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:21.842 [2024-07-23 15:21:17.123267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:21.842 [2024-07-23 15:21:17.123514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.842 [2024-07-23 15:21:17.123572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:27:21.842 [2024-07-23 15:21:17.123704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.842 [2024-07-23 15:21:17.126180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.842 [2024-07-23 15:21:17.126324] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:21.842 pt3 00:27:21.842 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:21.842 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:21.842 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:27:22.101 [2024-07-23 15:21:17.311351] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:22.101 [2024-07-23 15:21:17.313761] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:22.101 [2024-07-23 15:21:17.313978] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:22.101 [2024-07-23 15:21:17.314191] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007880 00:27:22.101 [2024-07-23 15:21:17.314210] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:22.101 [2024-07-23 15:21:17.314358] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:27:22.101 [2024-07-23 15:21:17.315013] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007880 00:27:22.101 [2024-07-23 15:21:17.315031] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007880 00:27:22.101 [2024-07-23 15:21:17.315195] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.101 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:22.101 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:22.101 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:22.101 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:22.101 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:22.101 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:22.101 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:22.101 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:22.101 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:22.101 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:22.101 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.101 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.101 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:22.101 "name": "raid_bdev1", 00:27:22.101 "uuid": "d65a548e-c92f-4104-a0f1-6cd57f79e6c4", 00:27:22.101 "strip_size_kb": 64, 00:27:22.101 "state": "online", 00:27:22.102 "raid_level": "raid5f", 00:27:22.102 "superblock": true, 00:27:22.102 "num_base_bdevs": 3, 00:27:22.102 "num_base_bdevs_discovered": 3, 00:27:22.102 "num_base_bdevs_operational": 3, 00:27:22.102 
"base_bdevs_list": [ 00:27:22.102 { 00:27:22.102 "name": "pt1", 00:27:22.102 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:22.102 "is_configured": true, 00:27:22.102 "data_offset": 2048, 00:27:22.102 "data_size": 63488 00:27:22.102 }, 00:27:22.102 { 00:27:22.102 "name": "pt2", 00:27:22.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:22.102 "is_configured": true, 00:27:22.102 "data_offset": 2048, 00:27:22.102 "data_size": 63488 00:27:22.102 }, 00:27:22.102 { 00:27:22.102 "name": "pt3", 00:27:22.102 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:22.102 "is_configured": true, 00:27:22.102 "data_offset": 2048, 00:27:22.102 "data_size": 63488 00:27:22.102 } 00:27:22.102 ] 00:27:22.102 }' 00:27:22.102 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:22.102 15:21:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:22.360 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:27:22.360 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:22.360 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:22.360 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:22.360 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:22.360 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:22.360 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:22.360 15:21:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:22.619 [2024-07-23 15:21:18.011647] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:22.619 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:22.619 "name": "raid_bdev1", 00:27:22.619 "aliases": [ 00:27:22.619 "d65a548e-c92f-4104-a0f1-6cd57f79e6c4" 00:27:22.619 ], 00:27:22.619 "product_name": "Raid Volume", 00:27:22.619 "block_size": 512, 00:27:22.619 "num_blocks": 126976, 00:27:22.619 "uuid": "d65a548e-c92f-4104-a0f1-6cd57f79e6c4", 00:27:22.619 "assigned_rate_limits": { 00:27:22.619 "rw_ios_per_sec": 0, 00:27:22.619 "rw_mbytes_per_sec": 0, 00:27:22.619 "r_mbytes_per_sec": 0, 00:27:22.619 "w_mbytes_per_sec": 0 00:27:22.619 }, 00:27:22.619 "claimed": false, 00:27:22.619 "zoned": false, 00:27:22.619 "supported_io_types": { 00:27:22.619 "read": true, 00:27:22.619 "write": true, 00:27:22.619 "unmap": false, 00:27:22.619 "flush": false, 00:27:22.619 "reset": true, 00:27:22.619 "nvme_admin": false, 00:27:22.619 "nvme_io": false, 00:27:22.619 "nvme_io_md": false, 00:27:22.619 "write_zeroes": true, 00:27:22.619 "zcopy": false, 00:27:22.619 "get_zone_info": false, 00:27:22.619 "zone_management": false, 00:27:22.620 "zone_append": false, 00:27:22.620 "compare": false, 00:27:22.620 "compare_and_write": false, 00:27:22.620 "abort": false, 00:27:22.620 "seek_hole": false, 00:27:22.620 "seek_data": false, 00:27:22.620 "copy": false, 00:27:22.620 "nvme_iov_md": false 00:27:22.620 }, 00:27:22.620 "driver_specific": { 00:27:22.620 "raid": { 00:27:22.620 "uuid": "d65a548e-c92f-4104-a0f1-6cd57f79e6c4", 00:27:22.620 "strip_size_kb": 64, 00:27:22.620 "state": "online", 00:27:22.620 "raid_level": "raid5f", 
00:27:22.620 "superblock": true, 00:27:22.620 "num_base_bdevs": 3, 00:27:22.620 "num_base_bdevs_discovered": 3, 00:27:22.620 "num_base_bdevs_operational": 3, 00:27:22.620 "base_bdevs_list": [ 00:27:22.620 { 00:27:22.620 "name": "pt1", 00:27:22.620 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:22.620 "is_configured": true, 00:27:22.620 "data_offset": 2048, 00:27:22.620 "data_size": 63488 00:27:22.620 }, 00:27:22.620 { 00:27:22.620 "name": "pt2", 00:27:22.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:22.620 "is_configured": true, 00:27:22.620 "data_offset": 2048, 00:27:22.620 "data_size": 63488 00:27:22.620 }, 00:27:22.620 { 00:27:22.620 "name": "pt3", 00:27:22.620 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:22.620 "is_configured": true, 00:27:22.620 "data_offset": 2048, 00:27:22.620 "data_size": 63488 00:27:22.620 } 00:27:22.620 ] 00:27:22.620 } 00:27:22.620 } 00:27:22.620 }' 00:27:22.620 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:22.620 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:22.620 pt2 00:27:22.620 pt3' 00:27:22.620 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:22.879 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:22.879 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:22.879 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:22.879 "name": "pt1", 00:27:22.879 "aliases": [ 00:27:22.879 "00000000-0000-0000-0000-000000000001" 00:27:22.879 ], 00:27:22.879 "product_name": "passthru", 00:27:22.879 "block_size": 512, 00:27:22.879 "num_blocks": 65536, 00:27:22.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:22.879 "assigned_rate_limits": { 00:27:22.879 "rw_ios_per_sec": 0, 00:27:22.879 "rw_mbytes_per_sec": 0, 00:27:22.879 "r_mbytes_per_sec": 0, 00:27:22.879 "w_mbytes_per_sec": 0 00:27:22.879 }, 00:27:22.879 "claimed": true, 00:27:22.879 "claim_type": "exclusive_write", 00:27:22.879 "zoned": false, 00:27:22.879 "supported_io_types": { 00:27:22.879 "read": true, 00:27:22.879 "write": true, 00:27:22.879 "unmap": true, 00:27:22.879 "flush": true, 00:27:22.879 "reset": true, 00:27:22.879 "nvme_admin": false, 00:27:22.879 "nvme_io": false, 00:27:22.879 "nvme_io_md": false, 00:27:22.879 "write_zeroes": true, 00:27:22.879 "zcopy": true, 00:27:22.879 "get_zone_info": false, 00:27:22.879 "zone_management": false, 00:27:22.879 "zone_append": false, 00:27:22.879 "compare": false, 00:27:22.879 "compare_and_write": false, 00:27:22.879 "abort": true, 00:27:22.879 "seek_hole": false, 00:27:22.879 "seek_data": false, 00:27:22.879 "copy": true, 00:27:22.879 "nvme_iov_md": false 00:27:22.879 }, 00:27:22.879 "memory_domains": [ 00:27:22.879 { 00:27:22.879 "dma_device_id": "system", 00:27:22.879 "dma_device_type": 1 00:27:22.879 }, 00:27:22.879 { 00:27:22.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.879 "dma_device_type": 2 00:27:22.879 } 00:27:22.879 ], 00:27:22.879 "driver_specific": { 00:27:22.879 "passthru": { 00:27:22.879 "name": "pt1", 00:27:22.879 "base_bdev_name": "malloc1" 00:27:22.879 } 00:27:22.879 } 00:27:22.879 }' 00:27:22.879 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:27:22.879 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:22.879 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:22.879 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:22.879 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:22.879 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:22.879 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:22.879 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:22.879 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:22.879 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:23.139 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:23.139 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:23.139 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:23.139 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:23.139 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:23.139 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:23.139 "name": "pt2", 00:27:23.139 "aliases": [ 00:27:23.139 "00000000-0000-0000-0000-000000000002" 00:27:23.139 ], 00:27:23.139 "product_name": "passthru", 00:27:23.139 "block_size": 512, 00:27:23.139 "num_blocks": 65536, 00:27:23.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:23.139 "assigned_rate_limits": { 00:27:23.139 "rw_ios_per_sec": 0, 00:27:23.139 "rw_mbytes_per_sec": 0, 00:27:23.139 "r_mbytes_per_sec": 0, 00:27:23.139 "w_mbytes_per_sec": 0 00:27:23.139 }, 00:27:23.139 "claimed": true, 00:27:23.139 "claim_type": "exclusive_write", 00:27:23.139 "zoned": false, 00:27:23.139 "supported_io_types": { 00:27:23.139 "read": true, 00:27:23.139 "write": true, 00:27:23.139 "unmap": true, 00:27:23.139 "flush": true, 00:27:23.139 "reset": true, 00:27:23.139 "nvme_admin": false, 00:27:23.139 "nvme_io": false, 00:27:23.139 "nvme_io_md": false, 00:27:23.139 "write_zeroes": true, 00:27:23.139 "zcopy": true, 00:27:23.139 "get_zone_info": false, 00:27:23.139 "zone_management": false, 00:27:23.139 "zone_append": false, 00:27:23.139 "compare": false, 00:27:23.139 "compare_and_write": false, 00:27:23.139 "abort": true, 00:27:23.139 "seek_hole": false, 00:27:23.139 "seek_data": false, 00:27:23.139 "copy": true, 00:27:23.139 "nvme_iov_md": false 00:27:23.139 }, 00:27:23.139 "memory_domains": [ 00:27:23.139 { 00:27:23.139 "dma_device_id": "system", 00:27:23.139 "dma_device_type": 1 00:27:23.139 }, 00:27:23.139 { 00:27:23.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:23.139 "dma_device_type": 2 00:27:23.139 } 00:27:23.139 ], 00:27:23.139 "driver_specific": { 00:27:23.139 "passthru": { 00:27:23.139 "name": "pt2", 00:27:23.139 "base_bdev_name": "malloc2" 00:27:23.139 } 00:27:23.139 } 00:27:23.139 }' 00:27:23.139 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:23.139 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:23.139 15:21:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:23.139 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:23.139 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:23.139 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:23.139 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:23.399 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:23.399 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:23.399 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:23.399 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:23.399 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:23.399 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:23.399 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:23.399 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:23.658 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:23.658 "name": "pt3", 00:27:23.658 "aliases": [ 00:27:23.658 "00000000-0000-0000-0000-000000000003" 00:27:23.658 ], 00:27:23.658 "product_name": "passthru", 00:27:23.658 "block_size": 512, 00:27:23.658 "num_blocks": 65536, 00:27:23.658 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:23.658 "assigned_rate_limits": { 00:27:23.658 "rw_ios_per_sec": 0, 00:27:23.658 "rw_mbytes_per_sec": 0, 00:27:23.658 "r_mbytes_per_sec": 0, 00:27:23.658 "w_mbytes_per_sec": 0 00:27:23.658 }, 00:27:23.658 "claimed": true, 00:27:23.658 "claim_type": "exclusive_write", 00:27:23.658 "zoned": false, 00:27:23.658 "supported_io_types": { 00:27:23.658 "read": true, 00:27:23.658 "write": true, 00:27:23.658 "unmap": true, 00:27:23.658 "flush": true, 00:27:23.658 "reset": true, 00:27:23.658 "nvme_admin": false, 00:27:23.658 "nvme_io": false, 00:27:23.658 "nvme_io_md": false, 00:27:23.658 "write_zeroes": true, 00:27:23.658 "zcopy": true, 00:27:23.658 "get_zone_info": false, 00:27:23.658 "zone_management": false, 00:27:23.658 "zone_append": false, 00:27:23.658 "compare": false, 00:27:23.658 "compare_and_write": false, 00:27:23.658 "abort": true, 00:27:23.658 "seek_hole": false, 00:27:23.658 "seek_data": false, 00:27:23.658 "copy": true, 00:27:23.658 "nvme_iov_md": false 00:27:23.658 }, 00:27:23.658 "memory_domains": [ 00:27:23.658 { 00:27:23.658 "dma_device_id": "system", 00:27:23.658 "dma_device_type": 1 00:27:23.658 }, 00:27:23.658 { 00:27:23.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:23.658 "dma_device_type": 2 00:27:23.658 } 00:27:23.658 ], 00:27:23.658 "driver_specific": { 00:27:23.658 "passthru": { 00:27:23.658 "name": "pt3", 00:27:23.658 "base_bdev_name": "malloc3" 00:27:23.658 } 00:27:23.658 } 00:27:23.658 }' 00:27:23.658 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:23.658 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:23.658 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:23.658 15:21:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:23.658 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:23.658 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:23.658 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:23.658 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:23.658 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:23.658 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:23.658 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:23.658 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:23.658 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:23.658 15:21:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:27:23.917 [2024-07-23 15:21:19.203916] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:23.917 15:21:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d65a548e-c92f-4104-a0f1-6cd57f79e6c4 00:27:23.917 15:21:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z d65a548e-c92f-4104-a0f1-6cd57f79e6c4 ']' 00:27:23.917 15:21:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:24.177 [2024-07-23 15:21:19.455757] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:24.177 [2024-07-23 15:21:19.456006] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:24.177 [2024-07-23 15:21:19.456130] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:24.177 [2024-07-23 15:21:19.456210] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:24.177 [2024-07-23 15:21:19.456226] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007880 name raid_bdev1, state offline 00:27:24.177 15:21:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:27:24.177 15:21:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.436 15:21:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:27:24.436 15:21:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:27:24.436 15:21:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:24.436 15:21:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:24.695 15:21:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:24.695 15:21:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:24.695 15:21:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:24.695 15:21:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:24.954 15:21:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:24.954 15:21:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:25.213 15:21:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:27:25.213 15:21:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:27:25.213 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:27:25.213 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:27:25.213 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:25.213 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:25.213 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:25.213 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:25.213 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:25.213 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:25.213 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:25.213 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:25.213 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:27:25.472 [2024-07-23 15:21:20.668043] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:25.472 [2024-07-23 15:21:20.670249] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:25.472 [2024-07-23 15:21:20.670298] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:25.472 [2024-07-23 15:21:20.670347] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:25.472 [2024-07-23 15:21:20.670407] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:25.472 [2024-07-23 15:21:20.670437] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:27:25.472 [2024-07-23 15:21:20.670453] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:27:25.472 [2024-07-23 15:21:20.670467] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state configuring 00:27:25.472 request: 00:27:25.472 { 00:27:25.472 "name": "raid_bdev1", 00:27:25.472 "raid_level": "raid5f", 00:27:25.472 "base_bdevs": [ 00:27:25.472 "malloc1", 00:27:25.472 "malloc2", 00:27:25.472 "malloc3" 00:27:25.472 ], 00:27:25.472 "strip_size_kb": 64, 00:27:25.472 "superblock": false, 00:27:25.472 "method": "bdev_raid_create", 00:27:25.472 "req_id": 1 00:27:25.472 } 00:27:25.472 Got JSON-RPC error response 00:27:25.472 response: 00:27:25.472 { 00:27:25.472 "code": -17, 00:27:25.472 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:25.472 } 00:27:25.472 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:27:25.472 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:25.472 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:25.472 15:21:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:25.472 15:21:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:27:25.472 15:21:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.731 15:21:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:27:25.731 15:21:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:27:25.731 15:21:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:25.731 [2024-07-23 15:21:21.116024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:25.731 [2024-07-23 15:21:21.116099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:25.731 [2024-07-23 15:21:21.116122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:27:25.731 [2024-07-23 15:21:21.116138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:25.731 [2024-07-23 15:21:21.118604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:25.731 [2024-07-23 15:21:21.118808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:25.731 [2024-07-23 15:21:21.118902] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:25.731 [2024-07-23 15:21:21.118949] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:25.731 pt1 00:27:25.731 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:27:25.731 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:25.731 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:25.731 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:25.731 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:25.731 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:27:25.731 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:25.731 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:25.731 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:25.731 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:25.731 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.731 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.990 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:25.990 "name": "raid_bdev1", 00:27:25.990 "uuid": "d65a548e-c92f-4104-a0f1-6cd57f79e6c4", 00:27:25.990 "strip_size_kb": 64, 00:27:25.990 "state": "configuring", 00:27:25.990 "raid_level": "raid5f", 00:27:25.990 "superblock": true, 00:27:25.990 "num_base_bdevs": 3, 00:27:25.990 "num_base_bdevs_discovered": 1, 00:27:25.990 "num_base_bdevs_operational": 3, 00:27:25.990 "base_bdevs_list": [ 00:27:25.990 { 00:27:25.990 "name": "pt1", 00:27:25.990 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:25.990 "is_configured": true, 00:27:25.990 "data_offset": 2048, 00:27:25.990 "data_size": 63488 00:27:25.990 }, 00:27:25.990 { 00:27:25.990 "name": null, 00:27:25.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:25.990 "is_configured": false, 00:27:25.990 "data_offset": 2048, 00:27:25.990 "data_size": 63488 00:27:25.990 }, 00:27:25.990 { 00:27:25.990 "name": null, 00:27:25.990 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:25.990 "is_configured": false, 00:27:25.990 "data_offset": 2048, 00:27:25.990 "data_size": 63488 00:27:25.990 } 00:27:25.990 ] 00:27:25.990 }' 00:27:25.990 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:25.990 15:21:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.249 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:27:26.249 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:26.507 [2024-07-23 15:21:21.744191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:26.507 [2024-07-23 15:21:21.744278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:26.507 [2024-07-23 15:21:21.744306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:27:26.507 [2024-07-23 15:21:21.744321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:26.507 [2024-07-23 15:21:21.744739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:26.507 [2024-07-23 15:21:21.744765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:26.507 [2024-07-23 15:21:21.744856] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:26.507 [2024-07-23 15:21:21.744884] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:26.507 pt2 00:27:26.507 15:21:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:26.765 [2024-07-23 15:21:21.980310] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:26.765 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:27:26.765 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:26.765 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:26.765 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:26.765 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:26.765 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:26.765 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:26.765 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:26.765 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:26.765 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:26.765 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.765 15:21:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.023 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:27.023 "name": "raid_bdev1", 00:27:27.023 "uuid": "d65a548e-c92f-4104-a0f1-6cd57f79e6c4", 00:27:27.023 "strip_size_kb": 64, 00:27:27.023 "state": "configuring", 00:27:27.023 "raid_level": "raid5f", 00:27:27.023 "superblock": true, 00:27:27.023 "num_base_bdevs": 3, 00:27:27.023 "num_base_bdevs_discovered": 1, 00:27:27.023 "num_base_bdevs_operational": 3, 00:27:27.023 "base_bdevs_list": [ 00:27:27.023 { 00:27:27.023 "name": "pt1", 00:27:27.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:27.023 "is_configured": true, 00:27:27.023 "data_offset": 2048, 00:27:27.023 "data_size": 63488 00:27:27.023 }, 00:27:27.023 { 00:27:27.023 "name": null, 00:27:27.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:27.023 "is_configured": false, 00:27:27.023 "data_offset": 2048, 00:27:27.023 "data_size": 63488 00:27:27.023 }, 00:27:27.023 { 00:27:27.024 "name": null, 00:27:27.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:27.024 "is_configured": false, 00:27:27.024 "data_offset": 2048, 00:27:27.024 "data_size": 63488 00:27:27.024 } 00:27:27.024 ] 00:27:27.024 }' 00:27:27.024 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:27.024 15:21:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.282 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:27:27.282 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:27.282 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:27.541 [2024-07-23 15:21:22.748448] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:27.541 [2024-07-23 15:21:22.748528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:27.541 [2024-07-23 15:21:22.748557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:27:27.541 [2024-07-23 15:21:22.748569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:27.541 [2024-07-23 15:21:22.748999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:27.541 [2024-07-23 15:21:22.749021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:27.541 [2024-07-23 15:21:22.749093] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:27.541 [2024-07-23 15:21:22.749116] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:27.541 pt2 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:27.541 [2024-07-23 15:21:22.928447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:27.541 [2024-07-23 15:21:22.928698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:27.541 [2024-07-23 15:21:22.928763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:27:27.541 [2024-07-23 15:21:22.928869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:27.541 [2024-07-23 15:21:22.929314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:27.541 [2024-07-23 15:21:22.929441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:27.541 [2024-07-23 15:21:22.929599] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:27.541 [2024-07-23 15:21:22.929696] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:27.541 [2024-07-23 15:21:22.929873] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:27:27.541 [2024-07-23 15:21:22.929967] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:27.541 [2024-07-23 15:21:22.930076] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:27:27.541 [2024-07-23 15:21:22.930740] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:27:27.541 [2024-07-23 15:21:22.930874] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008a80 00:27:27.541 [2024-07-23 15:21:22.931089] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:27.541 pt3 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.541 15:21:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.799 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:27.799 "name": "raid_bdev1", 00:27:27.799 "uuid": "d65a548e-c92f-4104-a0f1-6cd57f79e6c4", 00:27:27.799 "strip_size_kb": 64, 00:27:27.799 "state": "online", 00:27:27.799 "raid_level": "raid5f", 00:27:27.799 "superblock": true, 00:27:27.799 "num_base_bdevs": 3, 00:27:27.799 "num_base_bdevs_discovered": 3, 00:27:27.799 "num_base_bdevs_operational": 3, 00:27:27.799 "base_bdevs_list": [ 00:27:27.799 { 00:27:27.799 "name": "pt1", 00:27:27.799 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:27.799 "is_configured": true, 00:27:27.799 "data_offset": 2048, 00:27:27.799 "data_size": 63488 00:27:27.799 }, 00:27:27.799 { 00:27:27.799 "name": "pt2", 00:27:27.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:27.799 "is_configured": true, 00:27:27.799 "data_offset": 2048, 00:27:27.799 "data_size": 63488 00:27:27.799 }, 00:27:27.799 { 00:27:27.799 "name": "pt3", 00:27:27.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:27.799 "is_configured": true, 00:27:27.799 "data_offset": 2048, 00:27:27.799 "data_size": 63488 00:27:27.799 } 00:27:27.799 ] 00:27:27.799 }' 00:27:27.799 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:27.799 15:21:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.057 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:27:28.057 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:28.057 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:28.057 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:28.057 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:28.057 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:28.057 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:28.057 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:27:28.315 [2024-07-23 15:21:23.640801] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:28.315 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:28.315 "name": "raid_bdev1", 00:27:28.315 "aliases": [ 00:27:28.315 "d65a548e-c92f-4104-a0f1-6cd57f79e6c4" 00:27:28.315 ], 00:27:28.315 "product_name": "Raid Volume", 00:27:28.315 "block_size": 512, 00:27:28.315 "num_blocks": 126976, 00:27:28.315 "uuid": "d65a548e-c92f-4104-a0f1-6cd57f79e6c4", 00:27:28.315 "assigned_rate_limits": { 00:27:28.315 "rw_ios_per_sec": 0, 00:27:28.315 "rw_mbytes_per_sec": 0, 00:27:28.315 "r_mbytes_per_sec": 0, 00:27:28.315 "w_mbytes_per_sec": 0 00:27:28.315 }, 00:27:28.315 "claimed": false, 00:27:28.315 "zoned": false, 00:27:28.316 "supported_io_types": { 00:27:28.316 "read": true, 00:27:28.316 "write": true, 00:27:28.316 "unmap": false, 00:27:28.316 "flush": false, 00:27:28.316 "reset": true, 00:27:28.316 "nvme_admin": false, 00:27:28.316 "nvme_io": false, 00:27:28.316 "nvme_io_md": false, 00:27:28.316 "write_zeroes": true, 00:27:28.316 "zcopy": false, 00:27:28.316 "get_zone_info": false, 00:27:28.316 "zone_management": false, 00:27:28.316 "zone_append": false, 00:27:28.316 "compare": false, 00:27:28.316 "compare_and_write": false, 00:27:28.316 "abort": false, 00:27:28.316 "seek_hole": false, 00:27:28.316 "seek_data": false, 00:27:28.316 "copy": false, 00:27:28.316 "nvme_iov_md": false 00:27:28.316 }, 00:27:28.316 "driver_specific": { 00:27:28.316 "raid": { 00:27:28.316 "uuid": "d65a548e-c92f-4104-a0f1-6cd57f79e6c4", 00:27:28.316 "strip_size_kb": 64, 00:27:28.316 "state": "online", 00:27:28.316 "raid_level": "raid5f", 00:27:28.316 "superblock": true, 00:27:28.316 "num_base_bdevs": 3, 00:27:28.316 "num_base_bdevs_discovered": 3, 00:27:28.316 "num_base_bdevs_operational": 3, 00:27:28.316 "base_bdevs_list": [ 00:27:28.316 { 00:27:28.316 "name": "pt1", 00:27:28.316 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:28.316 "is_configured": true, 00:27:28.316 "data_offset": 2048, 00:27:28.316 "data_size": 63488 00:27:28.316 }, 00:27:28.316 { 00:27:28.316 "name": "pt2", 00:27:28.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:28.316 "is_configured": true, 00:27:28.316 "data_offset": 2048, 00:27:28.316 "data_size": 63488 00:27:28.316 }, 00:27:28.316 { 00:27:28.316 "name": "pt3", 00:27:28.316 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:28.316 "is_configured": true, 00:27:28.316 "data_offset": 2048, 00:27:28.316 "data_size": 63488 00:27:28.316 } 00:27:28.316 ] 00:27:28.316 } 00:27:28.316 } 00:27:28.316 }' 00:27:28.316 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:28.316 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:28.316 pt2 00:27:28.316 pt3' 00:27:28.316 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:28.316 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:28.316 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:28.574 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:28.574 "name": "pt1", 00:27:28.574 "aliases": [ 00:27:28.574 "00000000-0000-0000-0000-000000000001" 00:27:28.574 ], 
00:27:28.574 "product_name": "passthru", 00:27:28.574 "block_size": 512, 00:27:28.574 "num_blocks": 65536, 00:27:28.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:28.574 "assigned_rate_limits": { 00:27:28.574 "rw_ios_per_sec": 0, 00:27:28.574 "rw_mbytes_per_sec": 0, 00:27:28.574 "r_mbytes_per_sec": 0, 00:27:28.574 "w_mbytes_per_sec": 0 00:27:28.574 }, 00:27:28.574 "claimed": true, 00:27:28.574 "claim_type": "exclusive_write", 00:27:28.574 "zoned": false, 00:27:28.574 "supported_io_types": { 00:27:28.574 "read": true, 00:27:28.574 "write": true, 00:27:28.574 "unmap": true, 00:27:28.574 "flush": true, 00:27:28.574 "reset": true, 00:27:28.574 "nvme_admin": false, 00:27:28.574 "nvme_io": false, 00:27:28.574 "nvme_io_md": false, 00:27:28.574 "write_zeroes": true, 00:27:28.574 "zcopy": true, 00:27:28.574 "get_zone_info": false, 00:27:28.574 "zone_management": false, 00:27:28.574 "zone_append": false, 00:27:28.574 "compare": false, 00:27:28.574 "compare_and_write": false, 00:27:28.574 "abort": true, 00:27:28.574 "seek_hole": false, 00:27:28.574 "seek_data": false, 00:27:28.574 "copy": true, 00:27:28.574 "nvme_iov_md": false 00:27:28.574 }, 00:27:28.574 "memory_domains": [ 00:27:28.574 { 00:27:28.574 "dma_device_id": "system", 00:27:28.574 "dma_device_type": 1 00:27:28.574 }, 00:27:28.574 { 00:27:28.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.574 "dma_device_type": 2 00:27:28.574 } 00:27:28.574 ], 00:27:28.574 "driver_specific": { 00:27:28.574 "passthru": { 00:27:28.574 "name": "pt1", 00:27:28.574 "base_bdev_name": "malloc1" 00:27:28.574 } 00:27:28.574 } 00:27:28.574 }' 00:27:28.574 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:28.574 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:28.574 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:28.574 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:28.574 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:28.574 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:28.574 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:28.574 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:28.574 15:21:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:28.574 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:28.832 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:28.832 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:28.832 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:28.832 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:28.832 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:29.121 "name": "pt2", 00:27:29.121 "aliases": [ 00:27:29.121 "00000000-0000-0000-0000-000000000002" 00:27:29.121 ], 00:27:29.121 "product_name": "passthru", 00:27:29.121 "block_size": 512, 00:27:29.121 "num_blocks": 65536, 00:27:29.121 
"uuid": "00000000-0000-0000-0000-000000000002", 00:27:29.121 "assigned_rate_limits": { 00:27:29.121 "rw_ios_per_sec": 0, 00:27:29.121 "rw_mbytes_per_sec": 0, 00:27:29.121 "r_mbytes_per_sec": 0, 00:27:29.121 "w_mbytes_per_sec": 0 00:27:29.121 }, 00:27:29.121 "claimed": true, 00:27:29.121 "claim_type": "exclusive_write", 00:27:29.121 "zoned": false, 00:27:29.121 "supported_io_types": { 00:27:29.121 "read": true, 00:27:29.121 "write": true, 00:27:29.121 "unmap": true, 00:27:29.121 "flush": true, 00:27:29.121 "reset": true, 00:27:29.121 "nvme_admin": false, 00:27:29.121 "nvme_io": false, 00:27:29.121 "nvme_io_md": false, 00:27:29.121 "write_zeroes": true, 00:27:29.121 "zcopy": true, 00:27:29.121 "get_zone_info": false, 00:27:29.121 "zone_management": false, 00:27:29.121 "zone_append": false, 00:27:29.121 "compare": false, 00:27:29.121 "compare_and_write": false, 00:27:29.121 "abort": true, 00:27:29.121 "seek_hole": false, 00:27:29.121 "seek_data": false, 00:27:29.121 "copy": true, 00:27:29.121 "nvme_iov_md": false 00:27:29.121 }, 00:27:29.121 "memory_domains": [ 00:27:29.121 { 00:27:29.121 "dma_device_id": "system", 00:27:29.121 "dma_device_type": 1 00:27:29.121 }, 00:27:29.121 { 00:27:29.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:29.121 "dma_device_type": 2 00:27:29.121 } 00:27:29.121 ], 00:27:29.121 "driver_specific": { 00:27:29.121 "passthru": { 00:27:29.121 "name": "pt2", 00:27:29.121 "base_bdev_name": "malloc2" 00:27:29.121 } 00:27:29.121 } 00:27:29.121 }' 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:29.121 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:29.380 "name": "pt3", 00:27:29.380 "aliases": [ 00:27:29.380 "00000000-0000-0000-0000-000000000003" 00:27:29.380 ], 00:27:29.380 "product_name": "passthru", 00:27:29.380 "block_size": 512, 00:27:29.380 "num_blocks": 65536, 00:27:29.380 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:29.380 "assigned_rate_limits": { 00:27:29.380 "rw_ios_per_sec": 0, 
00:27:29.380 "rw_mbytes_per_sec": 0, 00:27:29.380 "r_mbytes_per_sec": 0, 00:27:29.380 "w_mbytes_per_sec": 0 00:27:29.380 }, 00:27:29.380 "claimed": true, 00:27:29.380 "claim_type": "exclusive_write", 00:27:29.380 "zoned": false, 00:27:29.380 "supported_io_types": { 00:27:29.380 "read": true, 00:27:29.380 "write": true, 00:27:29.380 "unmap": true, 00:27:29.380 "flush": true, 00:27:29.380 "reset": true, 00:27:29.380 "nvme_admin": false, 00:27:29.380 "nvme_io": false, 00:27:29.380 "nvme_io_md": false, 00:27:29.380 "write_zeroes": true, 00:27:29.380 "zcopy": true, 00:27:29.380 "get_zone_info": false, 00:27:29.380 "zone_management": false, 00:27:29.380 "zone_append": false, 00:27:29.380 "compare": false, 00:27:29.380 "compare_and_write": false, 00:27:29.380 "abort": true, 00:27:29.380 "seek_hole": false, 00:27:29.380 "seek_data": false, 00:27:29.380 "copy": true, 00:27:29.380 "nvme_iov_md": false 00:27:29.380 }, 00:27:29.380 "memory_domains": [ 00:27:29.380 { 00:27:29.380 "dma_device_id": "system", 00:27:29.380 "dma_device_type": 1 00:27:29.380 }, 00:27:29.380 { 00:27:29.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:29.380 "dma_device_type": 2 00:27:29.380 } 00:27:29.380 ], 00:27:29.380 "driver_specific": { 00:27:29.380 "passthru": { 00:27:29.380 "name": "pt3", 00:27:29.380 "base_bdev_name": "malloc3" 00:27:29.380 } 00:27:29.380 } 00:27:29.380 }' 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:27:29.380 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:29.639 [2024-07-23 15:21:24.949113] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:29.639 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' d65a548e-c92f-4104-a0f1-6cd57f79e6c4 '!=' d65a548e-c92f-4104-a0f1-6cd57f79e6c4 ']' 00:27:29.639 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:27:29.639 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:29.639 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:29.639 15:21:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:29.897 [2024-07-23 15:21:25.269036] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:29.897 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:29.897 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:29.897 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:29.897 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:29.897 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:29.897 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:29.897 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:29.897 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:29.897 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:29.897 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:29.897 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.897 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.156 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:30.156 "name": "raid_bdev1", 00:27:30.156 "uuid": "d65a548e-c92f-4104-a0f1-6cd57f79e6c4", 00:27:30.156 "strip_size_kb": 64, 00:27:30.156 "state": "online", 00:27:30.156 "raid_level": "raid5f", 00:27:30.156 "superblock": true, 00:27:30.156 "num_base_bdevs": 3, 00:27:30.156 "num_base_bdevs_discovered": 2, 00:27:30.156 "num_base_bdevs_operational": 2, 00:27:30.156 "base_bdevs_list": [ 00:27:30.156 { 00:27:30.156 "name": null, 00:27:30.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.156 "is_configured": false, 00:27:30.156 "data_offset": 2048, 00:27:30.156 "data_size": 63488 00:27:30.156 }, 00:27:30.156 { 00:27:30.156 "name": "pt2", 00:27:30.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:30.156 "is_configured": true, 00:27:30.156 "data_offset": 2048, 00:27:30.156 "data_size": 63488 00:27:30.156 }, 00:27:30.156 { 00:27:30.156 "name": "pt3", 00:27:30.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:30.156 "is_configured": true, 00:27:30.156 "data_offset": 2048, 00:27:30.156 "data_size": 63488 00:27:30.156 } 00:27:30.156 ] 00:27:30.156 }' 00:27:30.156 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:30.156 15:21:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.731 15:21:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:30.731 [2024-07-23 15:21:26.005118] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:30.731 [2024-07-23 15:21:26.005346] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:30.731 [2024-07-23 15:21:26.005568] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:27:30.731 [2024-07-23 15:21:26.005649] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:30.731 [2024-07-23 15:21:26.005662] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state offline 00:27:30.731 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:27:30.731 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.989 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:27:30.990 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:27:30.990 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:27:30.990 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:30.990 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:30.990 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:30.990 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:30.990 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:31.248 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:31.248 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:31.248 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:27:31.248 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:31.248 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:31.507 [2024-07-23 15:21:26.785282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:31.507 [2024-07-23 15:21:26.785548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:31.507 [2024-07-23 15:21:26.785585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680 00:27:31.507 [2024-07-23 15:21:26.785598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:31.507 [2024-07-23 15:21:26.788029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:31.507 [2024-07-23 15:21:26.788060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:31.507 [2024-07-23 15:21:26.788137] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:31.507 [2024-07-23 15:21:26.788184] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:31.507 pt2 00:27:31.507 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:27:31.507 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:31.507 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:27:31.507 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:31.507 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:31.507 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:31.507 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:31.507 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:31.507 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:31.507 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:31.507 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.507 15:21:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.765 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:31.765 "name": "raid_bdev1", 00:27:31.765 "uuid": "d65a548e-c92f-4104-a0f1-6cd57f79e6c4", 00:27:31.765 "strip_size_kb": 64, 00:27:31.765 "state": "configuring", 00:27:31.765 "raid_level": "raid5f", 00:27:31.765 "superblock": true, 00:27:31.765 "num_base_bdevs": 3, 00:27:31.765 "num_base_bdevs_discovered": 1, 00:27:31.765 "num_base_bdevs_operational": 2, 00:27:31.765 "base_bdevs_list": [ 00:27:31.765 { 00:27:31.765 "name": null, 00:27:31.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.765 "is_configured": false, 00:27:31.765 "data_offset": 2048, 00:27:31.765 "data_size": 63488 00:27:31.765 }, 00:27:31.765 { 00:27:31.765 "name": "pt2", 00:27:31.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:31.765 "is_configured": true, 00:27:31.765 "data_offset": 2048, 00:27:31.765 "data_size": 63488 00:27:31.765 }, 00:27:31.765 { 00:27:31.765 "name": null, 00:27:31.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:31.765 "is_configured": false, 00:27:31.766 "data_offset": 2048, 00:27:31.766 "data_size": 63488 00:27:31.766 } 00:27:31.766 ] 00:27:31.766 }' 00:27:31.766 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:31.766 15:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.024 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:27:32.024 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:32.024 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:27:32.024 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:32.283 [2024-07-23 15:21:27.525455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:32.283 [2024-07-23 15:21:27.525674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:32.283 [2024-07-23 15:21:27.525737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:27:32.283 [2024-07-23 15:21:27.525835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:32.283 [2024-07-23 
15:21:27.526256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:32.283 [2024-07-23 15:21:27.526276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:32.283 [2024-07-23 15:21:27.526350] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:32.283 [2024-07-23 15:21:27.526372] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:32.283 [2024-07-23 15:21:27.526473] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009c80 00:27:32.283 [2024-07-23 15:21:27.526483] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:32.283 [2024-07-23 15:21:27.526544] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:27:32.283 [2024-07-23 15:21:27.527341] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009c80 00:27:32.283 [2024-07-23 15:21:27.527467] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009c80 00:27:32.283 [2024-07-23 15:21:27.527791] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:32.283 pt3 00:27:32.283 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:32.283 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:32.283 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:32.283 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:32.283 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:32.283 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:32.283 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:32.283 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:32.283 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:32.283 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:32.283 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.283 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.541 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:32.541 "name": "raid_bdev1", 00:27:32.541 "uuid": "d65a548e-c92f-4104-a0f1-6cd57f79e6c4", 00:27:32.541 "strip_size_kb": 64, 00:27:32.541 "state": "online", 00:27:32.541 "raid_level": "raid5f", 00:27:32.541 "superblock": true, 00:27:32.541 "num_base_bdevs": 3, 00:27:32.541 "num_base_bdevs_discovered": 2, 00:27:32.541 "num_base_bdevs_operational": 2, 00:27:32.541 "base_bdevs_list": [ 00:27:32.541 { 00:27:32.541 "name": null, 00:27:32.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.541 "is_configured": false, 00:27:32.541 "data_offset": 2048, 00:27:32.541 "data_size": 63488 00:27:32.541 }, 00:27:32.541 { 00:27:32.541 "name": "pt2", 00:27:32.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:32.541 "is_configured": true, 
00:27:32.541 "data_offset": 2048, 00:27:32.541 "data_size": 63488 00:27:32.541 }, 00:27:32.541 { 00:27:32.541 "name": "pt3", 00:27:32.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:32.541 "is_configured": true, 00:27:32.541 "data_offset": 2048, 00:27:32.541 "data_size": 63488 00:27:32.541 } 00:27:32.541 ] 00:27:32.541 }' 00:27:32.541 15:21:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:32.541 15:21:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.800 15:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:33.058 [2024-07-23 15:21:28.305597] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:33.058 [2024-07-23 15:21:28.305848] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:33.058 [2024-07-23 15:21:28.306050] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:33.058 [2024-07-23 15:21:28.306149] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:33.058 [2024-07-23 15:21:28.306442] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009c80 name raid_bdev1, state offline 00:27:33.058 15:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:27:33.058 15:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.316 15:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:27:33.316 15:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:27:33.316 15:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:27:33.316 15:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:27:33.316 15:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:33.575 15:21:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:33.833 [2024-07-23 15:21:29.033726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:33.833 [2024-07-23 15:21:29.034017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.833 [2024-07-23 15:21:29.034052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:27:33.833 [2024-07-23 15:21:29.034076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.833 [2024-07-23 15:21:29.036620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.833 [2024-07-23 15:21:29.036667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:33.833 [2024-07-23 15:21:29.036743] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:33.833 [2024-07-23 15:21:29.036785] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:33.833 [2024-07-23 15:21:29.036913] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number 
on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:33.833 [2024-07-23 15:21:29.036940] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:33.833 [2024-07-23 15:21:29.036961] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state configuring 00:27:33.833 [2024-07-23 15:21:29.037004] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:33.833 pt1 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:33.833 "name": "raid_bdev1", 00:27:33.833 "uuid": "d65a548e-c92f-4104-a0f1-6cd57f79e6c4", 00:27:33.833 "strip_size_kb": 64, 00:27:33.833 "state": "configuring", 00:27:33.833 "raid_level": "raid5f", 00:27:33.833 "superblock": true, 00:27:33.833 "num_base_bdevs": 3, 00:27:33.833 "num_base_bdevs_discovered": 1, 00:27:33.833 "num_base_bdevs_operational": 2, 00:27:33.833 "base_bdevs_list": [ 00:27:33.833 { 00:27:33.833 "name": null, 00:27:33.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.833 "is_configured": false, 00:27:33.833 "data_offset": 2048, 00:27:33.833 "data_size": 63488 00:27:33.833 }, 00:27:33.833 { 00:27:33.833 "name": "pt2", 00:27:33.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:33.833 "is_configured": true, 00:27:33.833 "data_offset": 2048, 00:27:33.833 "data_size": 63488 00:27:33.833 }, 00:27:33.833 { 00:27:33.833 "name": null, 00:27:33.833 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:33.833 "is_configured": false, 00:27:33.833 "data_offset": 2048, 00:27:33.833 "data_size": 63488 00:27:33.833 } 00:27:33.833 ] 00:27:33.833 }' 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:33.833 15:21:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.091 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:27:34.091 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:34.349 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:27:34.349 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:34.607 [2024-07-23 15:21:29.922576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:34.607 [2024-07-23 15:21:29.922865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.607 [2024-07-23 15:21:29.922981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:27:34.607 [2024-07-23 15:21:29.923003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.607 [2024-07-23 15:21:29.923480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.607 [2024-07-23 15:21:29.923513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:34.607 [2024-07-23 15:21:29.923593] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:34.607 [2024-07-23 15:21:29.923628] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:34.607 [2024-07-23 15:21:29.923739] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:27:34.607 [2024-07-23 15:21:29.923755] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:34.607 [2024-07-23 15:21:29.923859] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:27:34.607 [2024-07-23 15:21:29.924678] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:27:34.607 [2024-07-23 15:21:29.924703] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:27:34.607 [2024-07-23 15:21:29.924886] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:34.607 pt3 00:27:34.607 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:34.607 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:34.607 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:34.607 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:34.608 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:34.608 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:34.608 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:34.608 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:34.608 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:34.608 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:34.608 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.608 15:21:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.866 15:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:34.866 "name": "raid_bdev1", 00:27:34.866 "uuid": "d65a548e-c92f-4104-a0f1-6cd57f79e6c4", 00:27:34.866 "strip_size_kb": 64, 00:27:34.866 "state": "online", 00:27:34.866 "raid_level": "raid5f", 00:27:34.866 "superblock": true, 00:27:34.866 "num_base_bdevs": 3, 00:27:34.866 "num_base_bdevs_discovered": 2, 00:27:34.866 "num_base_bdevs_operational": 2, 00:27:34.866 "base_bdevs_list": [ 00:27:34.866 { 00:27:34.866 "name": null, 00:27:34.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:34.866 "is_configured": false, 00:27:34.866 "data_offset": 2048, 00:27:34.866 "data_size": 63488 00:27:34.866 }, 00:27:34.866 { 00:27:34.866 "name": "pt2", 00:27:34.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:34.866 "is_configured": true, 00:27:34.866 "data_offset": 2048, 00:27:34.866 "data_size": 63488 00:27:34.866 }, 00:27:34.866 { 00:27:34.866 "name": "pt3", 00:27:34.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:34.866 "is_configured": true, 00:27:34.866 "data_offset": 2048, 00:27:34.866 "data_size": 63488 00:27:34.866 } 00:27:34.866 ] 00:27:34.866 }' 00:27:34.866 15:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:34.866 15:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.125 15:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:27:35.125 15:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:35.383 15:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:27:35.383 15:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:35.383 15:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:27:35.672 [2024-07-23 15:21:30.935130] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:35.672 15:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' d65a548e-c92f-4104-a0f1-6cd57f79e6c4 '!=' d65a548e-c92f-4104-a0f1-6cd57f79e6c4 ']' 00:27:35.672 15:21:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 114199 00:27:35.672 15:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 114199 ']' 00:27:35.672 15:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # kill -0 114199 00:27:35.672 15:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # uname 00:27:35.672 15:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:35.672 15:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114199 00:27:35.672 killing process with pid 114199 00:27:35.672 15:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:35.672 15:21:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:35.672 15:21:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114199' 00:27:35.672 15:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@967 -- # kill 114199 00:27:35.672 15:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # wait 114199 00:27:35.672 [2024-07-23 15:21:31.002268] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:35.672 [2024-07-23 15:21:31.002382] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:35.672 [2024-07-23 15:21:31.002469] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:35.672 [2024-07-23 15:21:31.002493] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:27:35.672 [2024-07-23 15:21:31.039991] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:35.931 ************************************ 00:27:35.931 END TEST raid5f_superblock_test 00:27:35.931 ************************************ 00:27:35.931 15:21:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:27:35.931 00:27:35.931 real 0m16.268s 00:27:35.931 user 0m28.107s 00:27:35.931 sys 0m3.571s 00:27:35.931 15:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:35.931 15:21:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.931 15:21:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:35.931 15:21:31 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:27:35.931 15:21:31 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:27:35.931 15:21:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:35.931 15:21:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.931 15:21:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:35.931 ************************************ 00:27:35.931 START TEST raid5f_rebuild_test 00:27:35.931 ************************************ 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 3 false false true 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:35.931 15:21:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:35.931 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:35.932 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:27:35.932 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:27:35.932 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:27:35.932 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:27:35.932 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:27:36.191 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=114846 00:27:36.191 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 114846 /var/tmp/spdk-raid.sock 00:27:36.191 15:21:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 114846 ']' 00:27:36.191 15:21:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:36.191 15:21:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:36.191 15:21:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:36.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:36.191 15:21:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:36.191 15:21:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:36.191 15:21:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.191 [2024-07-23 15:21:31.431292] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:27:36.191 [2024-07-23 15:21:31.431484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114846 ] 00:27:36.191 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:36.191 Zero copy mechanism will not be used. 
00:27:36.191 [2024-07-23 15:21:31.588562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.449 [2024-07-23 15:21:31.645609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.449 [2024-07-23 15:21:31.699004] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:37.016 15:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:37.016 15:21:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:27:37.016 15:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:37.016 15:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:37.274 BaseBdev1_malloc 00:27:37.274 15:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:37.532 [2024-07-23 15:21:32.713072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:37.532 [2024-07-23 15:21:32.713166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:37.532 [2024-07-23 15:21:32.713201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:27:37.532 [2024-07-23 15:21:32.713214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:37.532 [2024-07-23 15:21:32.715809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:37.532 [2024-07-23 15:21:32.715859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:37.532 BaseBdev1 00:27:37.532 15:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:37.532 15:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:37.532 BaseBdev2_malloc 00:27:37.790 15:21:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:37.790 [2024-07-23 15:21:33.126479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:37.790 [2024-07-23 15:21:33.126555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:37.790 [2024-07-23 15:21:33.126587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:27:37.790 [2024-07-23 15:21:33.126599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:37.790 [2024-07-23 15:21:33.129060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:37.790 [2024-07-23 15:21:33.129104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:37.790 BaseBdev2 00:27:37.790 15:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:37.790 15:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:38.050 BaseBdev3_malloc 00:27:38.050 15:21:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:38.309 [2024-07-23 15:21:33.512392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:38.309 [2024-07-23 15:21:33.512473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.309 [2024-07-23 15:21:33.512506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:27:38.309 [2024-07-23 15:21:33.512519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.309 [2024-07-23 15:21:33.514981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.309 [2024-07-23 15:21:33.515021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:38.309 BaseBdev3 00:27:38.309 15:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:38.309 spare_malloc 00:27:38.309 15:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:38.568 spare_delay 00:27:38.568 15:21:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:38.827 [2024-07-23 15:21:34.041767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:38.827 [2024-07-23 15:21:34.041851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.827 [2024-07-23 15:21:34.041887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:27:38.827 [2024-07-23 15:21:34.041902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.827 [2024-07-23 15:21:34.044490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.827 [2024-07-23 15:21:34.044532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:38.827 spare 00:27:38.827 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:27:38.827 [2024-07-23 15:21:34.209856] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:38.827 [2024-07-23 15:21:34.212200] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:38.827 [2024-07-23 15:21:34.212271] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:38.827 [2024-07-23 15:21:34.212366] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:27:38.827 [2024-07-23 15:21:34.212386] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:27:38.827 [2024-07-23 15:21:34.212528] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:27:38.827 [2024-07-23 15:21:34.213203] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:27:38.827 [2024-07-23 15:21:34.213226] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x516000008a80 00:27:38.827 [2024-07-23 15:21:34.213389] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:38.827 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:38.827 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:38.827 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:38.827 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:38.827 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:38.827 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:38.827 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:38.827 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:38.827 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:38.827 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:38.827 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.827 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.086 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:39.086 "name": "raid_bdev1", 00:27:39.086 "uuid": "899552de-a799-4483-9363-dfbcf4ebb3f4", 00:27:39.086 "strip_size_kb": 64, 00:27:39.086 "state": "online", 00:27:39.086 "raid_level": "raid5f", 00:27:39.086 "superblock": false, 00:27:39.086 "num_base_bdevs": 3, 00:27:39.086 "num_base_bdevs_discovered": 3, 00:27:39.086 "num_base_bdevs_operational": 3, 00:27:39.086 "base_bdevs_list": [ 00:27:39.086 { 00:27:39.086 "name": "BaseBdev1", 00:27:39.087 "uuid": "158609c3-1ff2-5ddb-be40-4f0f384bee6c", 00:27:39.087 "is_configured": true, 00:27:39.087 "data_offset": 0, 00:27:39.087 "data_size": 65536 00:27:39.087 }, 00:27:39.087 { 00:27:39.087 "name": "BaseBdev2", 00:27:39.087 "uuid": "4f9b0e74-2913-5cb7-97fe-1338960c5bf9", 00:27:39.087 "is_configured": true, 00:27:39.087 "data_offset": 0, 00:27:39.087 "data_size": 65536 00:27:39.087 }, 00:27:39.087 { 00:27:39.087 "name": "BaseBdev3", 00:27:39.087 "uuid": "2f3708a9-3ab4-509a-a5c9-7aa5f208560f", 00:27:39.087 "is_configured": true, 00:27:39.087 "data_offset": 0, 00:27:39.087 "data_size": 65536 00:27:39.087 } 00:27:39.087 ] 00:27:39.087 }' 00:27:39.087 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:39.087 15:21:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.345 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:39.345 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:39.603 [2024-07-23 15:21:34.831313] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:39.603 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=131072 00:27:39.603 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- 
# jq -r '.[].base_bdevs_list[0].data_offset' 00:27:39.603 15:21:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.862 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:27:39.862 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:27:39.862 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:27:39.862 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:27:39.862 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:39.862 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:39.862 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:39.862 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:39.862 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:39.862 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:39.862 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:39.862 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:39.862 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:39.862 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:40.121 [2024-07-23 15:21:35.315229] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:27:40.121 /dev/nbd0 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:40.121 1+0 records in 00:27:40.121 1+0 records out 00:27:40.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285577 s, 14.3 MB/s 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:27:40.121 15:21:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 128 00:27:40.121 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:27:40.379 512+0 records in 00:27:40.379 512+0 records out 00:27:40.379 67108864 bytes (67 MB, 64 MiB) copied, 0.326948 s, 205 MB/s 00:27:40.379 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:40.379 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:40.379 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:40.379 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:40.379 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:40.379 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:40.379 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:40.637 [2024-07-23 15:21:35.928125] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:40.637 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:40.637 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:40.637 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:40.637 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:40.637 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:40.637 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:40.637 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:40.637 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:40.637 15:21:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:40.895 [2024-07-23 15:21:36.137833] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:40.895 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:40.895 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:40.895 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:40.895 15:21:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:40.895 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:40.895 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:40.895 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:40.895 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:40.895 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:40.895 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:40.895 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.895 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.154 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:41.154 "name": "raid_bdev1", 00:27:41.154 "uuid": "899552de-a799-4483-9363-dfbcf4ebb3f4", 00:27:41.154 "strip_size_kb": 64, 00:27:41.154 "state": "online", 00:27:41.154 "raid_level": "raid5f", 00:27:41.154 "superblock": false, 00:27:41.154 "num_base_bdevs": 3, 00:27:41.154 "num_base_bdevs_discovered": 2, 00:27:41.154 "num_base_bdevs_operational": 2, 00:27:41.154 "base_bdevs_list": [ 00:27:41.154 { 00:27:41.154 "name": null, 00:27:41.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.154 "is_configured": false, 00:27:41.154 "data_offset": 0, 00:27:41.154 "data_size": 65536 00:27:41.154 }, 00:27:41.154 { 00:27:41.154 "name": "BaseBdev2", 00:27:41.154 "uuid": "4f9b0e74-2913-5cb7-97fe-1338960c5bf9", 00:27:41.154 "is_configured": true, 00:27:41.154 "data_offset": 0, 00:27:41.154 "data_size": 65536 00:27:41.154 }, 00:27:41.154 { 00:27:41.154 "name": "BaseBdev3", 00:27:41.154 "uuid": "2f3708a9-3ab4-509a-a5c9-7aa5f208560f", 00:27:41.154 "is_configured": true, 00:27:41.154 "data_offset": 0, 00:27:41.154 "data_size": 65536 00:27:41.154 } 00:27:41.154 ] 00:27:41.154 }' 00:27:41.154 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:41.154 15:21:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.446 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:41.704 [2024-07-23 15:21:36.946008] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:41.704 [2024-07-23 15:21:36.950199] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000278c0 00:27:41.704 [2024-07-23 15:21:36.952983] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:41.704 15:21:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:42.641 15:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:42.641 15:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:42.641 15:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:42.641 15:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:42.641 15:21:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:42.641 15:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.641 15:21:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.900 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:42.900 "name": "raid_bdev1", 00:27:42.900 "uuid": "899552de-a799-4483-9363-dfbcf4ebb3f4", 00:27:42.900 "strip_size_kb": 64, 00:27:42.900 "state": "online", 00:27:42.900 "raid_level": "raid5f", 00:27:42.900 "superblock": false, 00:27:42.900 "num_base_bdevs": 3, 00:27:42.900 "num_base_bdevs_discovered": 3, 00:27:42.900 "num_base_bdevs_operational": 3, 00:27:42.900 "process": { 00:27:42.900 "type": "rebuild", 00:27:42.900 "target": "spare", 00:27:42.900 "progress": { 00:27:42.900 "blocks": 22528, 00:27:42.900 "percent": 17 00:27:42.900 } 00:27:42.900 }, 00:27:42.900 "base_bdevs_list": [ 00:27:42.900 { 00:27:42.900 "name": "spare", 00:27:42.900 "uuid": "dcd7bb16-d6c8-5f8d-9787-145fc9d3781b", 00:27:42.900 "is_configured": true, 00:27:42.900 "data_offset": 0, 00:27:42.900 "data_size": 65536 00:27:42.900 }, 00:27:42.900 { 00:27:42.900 "name": "BaseBdev2", 00:27:42.900 "uuid": "4f9b0e74-2913-5cb7-97fe-1338960c5bf9", 00:27:42.900 "is_configured": true, 00:27:42.900 "data_offset": 0, 00:27:42.900 "data_size": 65536 00:27:42.900 }, 00:27:42.900 { 00:27:42.900 "name": "BaseBdev3", 00:27:42.900 "uuid": "2f3708a9-3ab4-509a-a5c9-7aa5f208560f", 00:27:42.900 "is_configured": true, 00:27:42.900 "data_offset": 0, 00:27:42.900 "data_size": 65536 00:27:42.900 } 00:27:42.900 ] 00:27:42.900 }' 00:27:42.900 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:42.900 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:42.900 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:42.900 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:42.900 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:43.159 [2024-07-23 15:21:38.423449] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:43.159 [2024-07-23 15:21:38.467191] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:43.159 [2024-07-23 15:21:38.467272] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:43.159 [2024-07-23 15:21:38.467291] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:43.159 [2024-07-23 15:21:38.467309] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:43.159 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:43.159 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:43.159 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:43.159 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:43.159 15:21:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:43.159 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:43.159 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:43.159 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:43.159 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:43.159 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:43.159 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.159 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.418 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:43.418 "name": "raid_bdev1", 00:27:43.418 "uuid": "899552de-a799-4483-9363-dfbcf4ebb3f4", 00:27:43.418 "strip_size_kb": 64, 00:27:43.418 "state": "online", 00:27:43.418 "raid_level": "raid5f", 00:27:43.418 "superblock": false, 00:27:43.418 "num_base_bdevs": 3, 00:27:43.418 "num_base_bdevs_discovered": 2, 00:27:43.418 "num_base_bdevs_operational": 2, 00:27:43.418 "base_bdevs_list": [ 00:27:43.418 { 00:27:43.418 "name": null, 00:27:43.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.418 "is_configured": false, 00:27:43.418 "data_offset": 0, 00:27:43.418 "data_size": 65536 00:27:43.418 }, 00:27:43.418 { 00:27:43.418 "name": "BaseBdev2", 00:27:43.419 "uuid": "4f9b0e74-2913-5cb7-97fe-1338960c5bf9", 00:27:43.419 "is_configured": true, 00:27:43.419 "data_offset": 0, 00:27:43.419 "data_size": 65536 00:27:43.419 }, 00:27:43.419 { 00:27:43.419 "name": "BaseBdev3", 00:27:43.419 "uuid": "2f3708a9-3ab4-509a-a5c9-7aa5f208560f", 00:27:43.419 "is_configured": true, 00:27:43.419 "data_offset": 0, 00:27:43.419 "data_size": 65536 00:27:43.419 } 00:27:43.419 ] 00:27:43.419 }' 00:27:43.419 15:21:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:43.419 15:21:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.678 15:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:43.678 15:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:43.678 15:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:43.678 15:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:43.678 15:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:43.678 15:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.678 15:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.937 15:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:43.937 "name": "raid_bdev1", 00:27:43.937 "uuid": "899552de-a799-4483-9363-dfbcf4ebb3f4", 00:27:43.937 "strip_size_kb": 64, 00:27:43.937 "state": "online", 00:27:43.937 "raid_level": "raid5f", 00:27:43.937 "superblock": false, 00:27:43.937 "num_base_bdevs": 3, 00:27:43.937 "num_base_bdevs_discovered": 2, 00:27:43.937 
"num_base_bdevs_operational": 2, 00:27:43.937 "base_bdevs_list": [ 00:27:43.937 { 00:27:43.937 "name": null, 00:27:43.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.937 "is_configured": false, 00:27:43.937 "data_offset": 0, 00:27:43.937 "data_size": 65536 00:27:43.937 }, 00:27:43.937 { 00:27:43.937 "name": "BaseBdev2", 00:27:43.937 "uuid": "4f9b0e74-2913-5cb7-97fe-1338960c5bf9", 00:27:43.937 "is_configured": true, 00:27:43.937 "data_offset": 0, 00:27:43.937 "data_size": 65536 00:27:43.937 }, 00:27:43.937 { 00:27:43.937 "name": "BaseBdev3", 00:27:43.937 "uuid": "2f3708a9-3ab4-509a-a5c9-7aa5f208560f", 00:27:43.937 "is_configured": true, 00:27:43.937 "data_offset": 0, 00:27:43.937 "data_size": 65536 00:27:43.937 } 00:27:43.937 ] 00:27:43.937 }' 00:27:43.937 15:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:43.937 15:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:43.937 15:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:43.937 15:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:43.937 15:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:44.196 [2024-07-23 15:21:39.385702] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:44.196 [2024-07-23 15:21:39.389643] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000027990 00:27:44.196 [2024-07-23 15:21:39.392172] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:44.196 15:21:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:45.134 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:45.134 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:45.134 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:45.134 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:45.134 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:45.134 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.134 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.394 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:45.394 "name": "raid_bdev1", 00:27:45.394 "uuid": "899552de-a799-4483-9363-dfbcf4ebb3f4", 00:27:45.394 "strip_size_kb": 64, 00:27:45.394 "state": "online", 00:27:45.394 "raid_level": "raid5f", 00:27:45.394 "superblock": false, 00:27:45.394 "num_base_bdevs": 3, 00:27:45.394 "num_base_bdevs_discovered": 3, 00:27:45.394 "num_base_bdevs_operational": 3, 00:27:45.394 "process": { 00:27:45.394 "type": "rebuild", 00:27:45.394 "target": "spare", 00:27:45.394 "progress": { 00:27:45.394 "blocks": 24576, 00:27:45.394 "percent": 18 00:27:45.394 } 00:27:45.394 }, 00:27:45.394 "base_bdevs_list": [ 00:27:45.394 { 00:27:45.394 "name": "spare", 00:27:45.394 "uuid": "dcd7bb16-d6c8-5f8d-9787-145fc9d3781b", 00:27:45.394 
"is_configured": true, 00:27:45.394 "data_offset": 0, 00:27:45.394 "data_size": 65536 00:27:45.394 }, 00:27:45.394 { 00:27:45.394 "name": "BaseBdev2", 00:27:45.394 "uuid": "4f9b0e74-2913-5cb7-97fe-1338960c5bf9", 00:27:45.394 "is_configured": true, 00:27:45.394 "data_offset": 0, 00:27:45.394 "data_size": 65536 00:27:45.394 }, 00:27:45.394 { 00:27:45.394 "name": "BaseBdev3", 00:27:45.394 "uuid": "2f3708a9-3ab4-509a-a5c9-7aa5f208560f", 00:27:45.394 "is_configured": true, 00:27:45.394 "data_offset": 0, 00:27:45.394 "data_size": 65536 00:27:45.395 } 00:27:45.395 ] 00:27:45.395 }' 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=820 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.395 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.654 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:45.654 "name": "raid_bdev1", 00:27:45.654 "uuid": "899552de-a799-4483-9363-dfbcf4ebb3f4", 00:27:45.654 "strip_size_kb": 64, 00:27:45.654 "state": "online", 00:27:45.654 "raid_level": "raid5f", 00:27:45.654 "superblock": false, 00:27:45.654 "num_base_bdevs": 3, 00:27:45.654 "num_base_bdevs_discovered": 3, 00:27:45.654 "num_base_bdevs_operational": 3, 00:27:45.654 "process": { 00:27:45.654 "type": "rebuild", 00:27:45.654 "target": "spare", 00:27:45.654 "progress": { 00:27:45.654 "blocks": 28672, 00:27:45.654 "percent": 21 00:27:45.654 } 00:27:45.654 }, 00:27:45.654 "base_bdevs_list": [ 00:27:45.654 { 00:27:45.654 "name": "spare", 00:27:45.654 "uuid": "dcd7bb16-d6c8-5f8d-9787-145fc9d3781b", 00:27:45.654 "is_configured": true, 00:27:45.654 "data_offset": 0, 00:27:45.654 "data_size": 65536 00:27:45.654 }, 00:27:45.654 { 00:27:45.654 "name": "BaseBdev2", 00:27:45.654 "uuid": "4f9b0e74-2913-5cb7-97fe-1338960c5bf9", 00:27:45.654 "is_configured": true, 00:27:45.654 "data_offset": 0, 00:27:45.654 "data_size": 65536 
00:27:45.654 }, 00:27:45.654 { 00:27:45.654 "name": "BaseBdev3", 00:27:45.654 "uuid": "2f3708a9-3ab4-509a-a5c9-7aa5f208560f", 00:27:45.654 "is_configured": true, 00:27:45.654 "data_offset": 0, 00:27:45.654 "data_size": 65536 00:27:45.654 } 00:27:45.654 ] 00:27:45.654 }' 00:27:45.654 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:45.654 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:45.655 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:45.655 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:45.655 15:21:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:46.592 15:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:46.592 15:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:46.592 15:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:46.592 15:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:46.592 15:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:46.592 15:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:46.592 15:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.592 15:21:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:46.850 15:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:46.850 "name": "raid_bdev1", 00:27:46.850 "uuid": "899552de-a799-4483-9363-dfbcf4ebb3f4", 00:27:46.850 "strip_size_kb": 64, 00:27:46.850 "state": "online", 00:27:46.850 "raid_level": "raid5f", 00:27:46.850 "superblock": false, 00:27:46.850 "num_base_bdevs": 3, 00:27:46.850 "num_base_bdevs_discovered": 3, 00:27:46.850 "num_base_bdevs_operational": 3, 00:27:46.850 "process": { 00:27:46.850 "type": "rebuild", 00:27:46.850 "target": "spare", 00:27:46.850 "progress": { 00:27:46.850 "blocks": 53248, 00:27:46.850 "percent": 40 00:27:46.850 } 00:27:46.850 }, 00:27:46.850 "base_bdevs_list": [ 00:27:46.850 { 00:27:46.850 "name": "spare", 00:27:46.850 "uuid": "dcd7bb16-d6c8-5f8d-9787-145fc9d3781b", 00:27:46.850 "is_configured": true, 00:27:46.850 "data_offset": 0, 00:27:46.850 "data_size": 65536 00:27:46.850 }, 00:27:46.850 { 00:27:46.850 "name": "BaseBdev2", 00:27:46.850 "uuid": "4f9b0e74-2913-5cb7-97fe-1338960c5bf9", 00:27:46.850 "is_configured": true, 00:27:46.850 "data_offset": 0, 00:27:46.850 "data_size": 65536 00:27:46.850 }, 00:27:46.850 { 00:27:46.850 "name": "BaseBdev3", 00:27:46.850 "uuid": "2f3708a9-3ab4-509a-a5c9-7aa5f208560f", 00:27:46.850 "is_configured": true, 00:27:46.850 "data_offset": 0, 00:27:46.850 "data_size": 65536 00:27:46.850 } 00:27:46.850 ] 00:27:46.850 }' 00:27:46.850 15:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:46.850 15:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:46.850 15:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:46.850 15:21:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:46.850 15:21:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:47.785 15:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:47.785 15:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:47.785 15:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:47.785 15:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:47.785 15:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:47.785 15:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:47.785 15:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.785 15:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.043 15:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:48.043 "name": "raid_bdev1", 00:27:48.043 "uuid": "899552de-a799-4483-9363-dfbcf4ebb3f4", 00:27:48.043 "strip_size_kb": 64, 00:27:48.043 "state": "online", 00:27:48.043 "raid_level": "raid5f", 00:27:48.043 "superblock": false, 00:27:48.043 "num_base_bdevs": 3, 00:27:48.043 "num_base_bdevs_discovered": 3, 00:27:48.043 "num_base_bdevs_operational": 3, 00:27:48.043 "process": { 00:27:48.043 "type": "rebuild", 00:27:48.043 "target": "spare", 00:27:48.043 "progress": { 00:27:48.043 "blocks": 79872, 00:27:48.043 "percent": 60 00:27:48.043 } 00:27:48.043 }, 00:27:48.043 "base_bdevs_list": [ 00:27:48.043 { 00:27:48.043 "name": "spare", 00:27:48.043 "uuid": "dcd7bb16-d6c8-5f8d-9787-145fc9d3781b", 00:27:48.043 "is_configured": true, 00:27:48.043 "data_offset": 0, 00:27:48.043 "data_size": 65536 00:27:48.043 }, 00:27:48.043 { 00:27:48.043 "name": "BaseBdev2", 00:27:48.043 "uuid": "4f9b0e74-2913-5cb7-97fe-1338960c5bf9", 00:27:48.043 "is_configured": true, 00:27:48.043 "data_offset": 0, 00:27:48.043 "data_size": 65536 00:27:48.043 }, 00:27:48.043 { 00:27:48.043 "name": "BaseBdev3", 00:27:48.043 "uuid": "2f3708a9-3ab4-509a-a5c9-7aa5f208560f", 00:27:48.043 "is_configured": true, 00:27:48.043 "data_offset": 0, 00:27:48.043 "data_size": 65536 00:27:48.043 } 00:27:48.043 ] 00:27:48.043 }' 00:27:48.043 15:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:48.043 15:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:48.044 15:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:48.044 15:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:48.044 15:21:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:49.003 15:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:49.003 15:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:49.003 15:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:49.003 15:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 
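The repeated sleep/verify passes above and below are the harness's rebuild-progress loop: after the spare is attached it re-reads raid_bdev1 once a second and inspects the embedded "process" object until the 820-second bound set earlier expires or the rebuild finishes. A standalone sketch of the same polling, using only the RPC call and jq paths visible in this log (the variable names and loop shape are illustrative, not the harness's own code):

# Sketch: poll rebuild progress for raid_bdev1 over the dedicated RPC socket.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
timeout=820                                   # same bound the harness sets above
while (( SECONDS < timeout )); do
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
    jq -r '.process.progress | "\(.blocks) blocks rebuilt (\(.percent)%)"' <<< "$info"
    sleep 1
done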
00:27:49.003 15:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:49.003 15:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:49.003 15:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.003 15:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.269 15:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:49.269 "name": "raid_bdev1", 00:27:49.269 "uuid": "899552de-a799-4483-9363-dfbcf4ebb3f4", 00:27:49.269 "strip_size_kb": 64, 00:27:49.269 "state": "online", 00:27:49.269 "raid_level": "raid5f", 00:27:49.269 "superblock": false, 00:27:49.269 "num_base_bdevs": 3, 00:27:49.269 "num_base_bdevs_discovered": 3, 00:27:49.269 "num_base_bdevs_operational": 3, 00:27:49.269 "process": { 00:27:49.269 "type": "rebuild", 00:27:49.269 "target": "spare", 00:27:49.269 "progress": { 00:27:49.269 "blocks": 104448, 00:27:49.269 "percent": 79 00:27:49.269 } 00:27:49.269 }, 00:27:49.269 "base_bdevs_list": [ 00:27:49.269 { 00:27:49.269 "name": "spare", 00:27:49.269 "uuid": "dcd7bb16-d6c8-5f8d-9787-145fc9d3781b", 00:27:49.269 "is_configured": true, 00:27:49.269 "data_offset": 0, 00:27:49.269 "data_size": 65536 00:27:49.269 }, 00:27:49.269 { 00:27:49.269 "name": "BaseBdev2", 00:27:49.269 "uuid": "4f9b0e74-2913-5cb7-97fe-1338960c5bf9", 00:27:49.269 "is_configured": true, 00:27:49.269 "data_offset": 0, 00:27:49.269 "data_size": 65536 00:27:49.269 }, 00:27:49.269 { 00:27:49.269 "name": "BaseBdev3", 00:27:49.269 "uuid": "2f3708a9-3ab4-509a-a5c9-7aa5f208560f", 00:27:49.269 "is_configured": true, 00:27:49.269 "data_offset": 0, 00:27:49.269 "data_size": 65536 00:27:49.269 } 00:27:49.269 ] 00:27:49.269 }' 00:27:49.269 15:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:49.269 15:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:49.269 15:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:49.269 15:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:49.269 15:21:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.646 [2024-07-23 15:21:45.849863] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 
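The "process completed on raid_bdev1" message above marks the end of the rebuild; the notices that follow show the process object being torn down, after which the script expects raid_bdev1 to report no active process and all three base bdevs discovered and operational again. A minimal post-rebuild health check along the same lines (the jq field names come from the JSON dumps in this log; the explicit exits are my own convention):

# Sketch: confirm raid_bdev1 left the rebuild and is fully operational again.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r '.process.type // "none"'     <<< "$info") == none   ]] || exit 1
[[ $(jq -r '.state'                      <<< "$info") == online ]] || exit 1
[[ $(jq -r '.num_base_bdevs_discovered'  <<< "$info") == 3      ]] || exit 1
[[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 3      ]] || exit 1
echo "raid_bdev1 is online with 3/3 base bdevs"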
00:27:50.646 [2024-07-23 15:21:45.849961] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:50.646 [2024-07-23 15:21:45.850011] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:50.646 "name": "raid_bdev1", 00:27:50.646 "uuid": "899552de-a799-4483-9363-dfbcf4ebb3f4", 00:27:50.646 "strip_size_kb": 64, 00:27:50.646 "state": "online", 00:27:50.646 "raid_level": "raid5f", 00:27:50.646 "superblock": false, 00:27:50.646 "num_base_bdevs": 3, 00:27:50.646 "num_base_bdevs_discovered": 3, 00:27:50.646 "num_base_bdevs_operational": 3, 00:27:50.646 "base_bdevs_list": [ 00:27:50.646 { 00:27:50.646 "name": "spare", 00:27:50.646 "uuid": "dcd7bb16-d6c8-5f8d-9787-145fc9d3781b", 00:27:50.646 "is_configured": true, 00:27:50.646 "data_offset": 0, 00:27:50.646 "data_size": 65536 00:27:50.646 }, 00:27:50.646 { 00:27:50.646 "name": "BaseBdev2", 00:27:50.646 "uuid": "4f9b0e74-2913-5cb7-97fe-1338960c5bf9", 00:27:50.646 "is_configured": true, 00:27:50.646 "data_offset": 0, 00:27:50.646 "data_size": 65536 00:27:50.646 }, 00:27:50.646 { 00:27:50.646 "name": "BaseBdev3", 00:27:50.646 "uuid": "2f3708a9-3ab4-509a-a5c9-7aa5f208560f", 00:27:50.646 "is_configured": true, 00:27:50.646 "data_offset": 0, 00:27:50.646 "data_size": 65536 00:27:50.646 } 00:27:50.646 ] 00:27:50.646 }' 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.646 15:21:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.904 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:50.904 "name": "raid_bdev1", 00:27:50.904 "uuid": "899552de-a799-4483-9363-dfbcf4ebb3f4", 00:27:50.904 "strip_size_kb": 64, 00:27:50.905 "state": "online", 00:27:50.905 "raid_level": "raid5f", 00:27:50.905 "superblock": false, 00:27:50.905 "num_base_bdevs": 3, 00:27:50.905 "num_base_bdevs_discovered": 3, 00:27:50.905 "num_base_bdevs_operational": 3, 00:27:50.905 "base_bdevs_list": [ 00:27:50.905 { 00:27:50.905 "name": "spare", 00:27:50.905 "uuid": "dcd7bb16-d6c8-5f8d-9787-145fc9d3781b", 00:27:50.905 "is_configured": true, 00:27:50.905 "data_offset": 0, 00:27:50.905 "data_size": 65536 00:27:50.905 }, 
00:27:50.905 { 00:27:50.905 "name": "BaseBdev2", 00:27:50.905 "uuid": "4f9b0e74-2913-5cb7-97fe-1338960c5bf9", 00:27:50.905 "is_configured": true, 00:27:50.905 "data_offset": 0, 00:27:50.905 "data_size": 65536 00:27:50.905 }, 00:27:50.905 { 00:27:50.905 "name": "BaseBdev3", 00:27:50.905 "uuid": "2f3708a9-3ab4-509a-a5c9-7aa5f208560f", 00:27:50.905 "is_configured": true, 00:27:50.905 "data_offset": 0, 00:27:50.905 "data_size": 65536 00:27:50.905 } 00:27:50.905 ] 00:27:50.905 }' 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.905 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.163 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:51.163 "name": "raid_bdev1", 00:27:51.163 "uuid": "899552de-a799-4483-9363-dfbcf4ebb3f4", 00:27:51.163 "strip_size_kb": 64, 00:27:51.163 "state": "online", 00:27:51.163 "raid_level": "raid5f", 00:27:51.163 "superblock": false, 00:27:51.163 "num_base_bdevs": 3, 00:27:51.163 "num_base_bdevs_discovered": 3, 00:27:51.163 "num_base_bdevs_operational": 3, 00:27:51.163 "base_bdevs_list": [ 00:27:51.163 { 00:27:51.163 "name": "spare", 00:27:51.163 "uuid": "dcd7bb16-d6c8-5f8d-9787-145fc9d3781b", 00:27:51.163 "is_configured": true, 00:27:51.163 "data_offset": 0, 00:27:51.163 "data_size": 65536 00:27:51.163 }, 00:27:51.163 { 00:27:51.163 "name": "BaseBdev2", 00:27:51.163 "uuid": "4f9b0e74-2913-5cb7-97fe-1338960c5bf9", 00:27:51.163 "is_configured": true, 00:27:51.163 "data_offset": 0, 00:27:51.163 "data_size": 65536 00:27:51.163 }, 00:27:51.163 { 00:27:51.163 "name": "BaseBdev3", 00:27:51.163 "uuid": "2f3708a9-3ab4-509a-a5c9-7aa5f208560f", 00:27:51.163 "is_configured": true, 00:27:51.163 "data_offset": 0, 00:27:51.163 "data_size": 65536 00:27:51.163 } 00:27:51.163 ] 00:27:51.163 }' 00:27:51.163 15:21:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:51.163 15:21:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.421 15:21:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:51.680 [2024-07-23 15:21:46.969147] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:51.680 [2024-07-23 15:21:46.969196] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:51.680 [2024-07-23 15:21:46.969285] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:51.680 [2024-07-23 15:21:46.969374] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:51.680 [2024-07-23 15:21:46.969386] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state offline 00:27:51.680 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:27:51.680 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.938 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:51.938 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:51.938 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:27:51.938 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:51.938 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:51.938 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:51.938 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:51.938 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:51.938 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:51.938 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:51.938 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:51.938 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:51.938 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:51.938 /dev/nbd0 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:52.197 15:21:47 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:52.197 1+0 records in 00:27:52.197 1+0 records out 00:27:52.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018692 s, 21.9 MB/s 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:52.197 /dev/nbd1 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:52.197 1+0 records in 00:27:52.197 1+0 records out 00:27:52.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289054 s, 14.2 MB/s 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 
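At this point the raid bdev has been deleted and both the untouched BaseBdev1 and the rebuilt spare are being exported through the kernel NBD driver; the cmp on the next line then shows the spare carries the same data as the base bdev it replaced. A condensed sketch of that comparison step, assuming the two bdevs exist and /dev/nbd0 and /dev/nbd1 are free (the readiness grep stands in for the harness's waitfornbd helper, which polls /proc/partitions):

# Sketch: byte-compare the original base bdev against the rebuilt spare via NBD.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc nbd_start_disk BaseBdev1 /dev/nbd0
$rpc nbd_start_disk spare /dev/nbd1
grep -qw nbd0 /proc/partitions && grep -qw nbd1 /proc/partitions   # poll this in real use
cmp -i 0 /dev/nbd0 /dev/nbd1 && echo "rebuilt data matches"        # -i skips data_offset bytes (0 here)
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1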
00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:52.197 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:52.455 15:21:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:52.713 15:21:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:52.713 15:21:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:52.713 15:21:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:52.713 15:21:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:52.713 15:21:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:52.713 15:21:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:52.713 15:21:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:52.713 15:21:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:52.713 15:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:27:52.713 15:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 114846 00:27:52.713 15:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 114846 ']' 00:27:52.713 15:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 114846 00:27:52.713 15:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:27:52.713 15:21:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:52.713 15:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114846 00:27:52.971 killing process with pid 114846 00:27:52.971 Received shutdown signal, test time was about 60.000000 seconds 00:27:52.971 00:27:52.971 Latency(us) 00:27:52.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.971 =================================================================================================================== 00:27:52.971 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:52.971 15:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:52.971 15:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:52.971 15:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114846' 00:27:52.971 15:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@967 -- # kill 114846 00:27:52.971 [2024-07-23 15:21:48.162734] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:52.971 15:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # wait 114846 00:27:52.971 [2024-07-23 15:21:48.203108] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:53.228 15:21:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:27:53.228 00:27:53.228 real 0m17.075s 00:27:53.228 user 0m23.962s 00:27:53.228 sys 0m2.945s 00:27:53.228 15:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:53.228 15:21:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.228 ************************************ 00:27:53.228 END TEST raid5f_rebuild_test 00:27:53.228 ************************************ 00:27:53.228 15:21:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:53.228 15:21:48 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:27:53.228 15:21:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:53.228 15:21:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:53.228 15:21:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:53.228 ************************************ 00:27:53.228 START TEST raid5f_rebuild_test_sb 00:27:53.228 ************************************ 00:27:53.228 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 3 true false true 00:27:53.228 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:27:53.228 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo 
BaseBdev1 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:27:53.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=115318 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 115318 /var/tmp/spdk-raid.sock 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 115318 ']' 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
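The "Waiting for process to start up and listen..." line above comes from the harness launching bdevperf with a private RPC socket and blocking until that socket answers; every bdev in this superblock variant is then created through the same socket, exactly as in the run that just finished. A hedged sketch of that launch step, with the flags copied from the command shown above (the rpc_get_methods readiness probe is my stand-in for the harness's waitforlisten helper):

# Sketch: launch bdevperf against a private RPC socket and wait until it accepts RPCs.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
sock=/var/tmp/spdk-raid.sock
$bdevperf -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; do
    sleep 0.2                     # poll until the application is listening
done
echo "bdevperf (pid $raid_pid) is ready on $sock"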
00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:53.229 15:21:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.229 [2024-07-23 15:21:48.566046] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:27:53.229 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:53.229 Zero copy mechanism will not be used. 00:27:53.229 [2024-07-23 15:21:48.566266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115318 ] 00:27:53.486 [2024-07-23 15:21:48.717890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.486 [2024-07-23 15:21:48.764884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.486 [2024-07-23 15:21:48.809263] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:54.421 15:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:54.421 15:21:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:27:54.421 15:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:54.421 15:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:54.421 BaseBdev1_malloc 00:27:54.421 15:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:54.679 [2024-07-23 15:21:49.904267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:54.679 [2024-07-23 15:21:49.904349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:54.679 [2024-07-23 15:21:49.904382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:27:54.679 [2024-07-23 15:21:49.904395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:54.679 [2024-07-23 15:21:49.907022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:54.679 [2024-07-23 15:21:49.907070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:54.679 BaseBdev1 00:27:54.679 15:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:54.679 15:21:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:54.937 BaseBdev2_malloc 00:27:54.937 15:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:54.937 [2024-07-23 15:21:50.333929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:54.937 [2024-07-23 15:21:50.334010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:54.937 [2024-07-23 15:21:50.334043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x516000006680 00:27:54.937 [2024-07-23 15:21:50.334055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:54.937 [2024-07-23 15:21:50.336537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:54.937 [2024-07-23 15:21:50.336581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:54.937 BaseBdev2 00:27:54.937 15:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:54.938 15:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:55.196 BaseBdev3_malloc 00:27:55.196 15:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:55.454 [2024-07-23 15:21:50.693375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:55.454 [2024-07-23 15:21:50.693451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:55.454 [2024-07-23 15:21:50.693484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:27:55.454 [2024-07-23 15:21:50.693496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:55.454 [2024-07-23 15:21:50.696005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:55.454 [2024-07-23 15:21:50.696044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:55.454 BaseBdev3 00:27:55.454 15:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:55.454 spare_malloc 00:27:55.713 15:21:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:55.713 spare_delay 00:27:55.713 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:55.971 [2024-07-23 15:21:51.211104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:55.971 [2024-07-23 15:21:51.211184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:55.971 [2024-07-23 15:21:51.211238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:27:55.971 [2024-07-23 15:21:51.211256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:55.971 [2024-07-23 15:21:51.213823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:55.971 [2024-07-23 15:21:51.213864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:55.971 spare 00:27:55.971 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:27:55.971 [2024-07-23 15:21:51.375176] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:55.971 [2024-07-23 
15:21:51.377353] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:55.971 [2024-07-23 15:21:51.377423] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:55.971 [2024-07-23 15:21:51.377639] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:27:55.971 [2024-07-23 15:21:51.377665] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:55.971 [2024-07-23 15:21:51.377836] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:27:55.971 [2024-07-23 15:21:51.378503] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:27:55.972 [2024-07-23 15:21:51.378526] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008a80 00:27:55.972 [2024-07-23 15:21:51.378681] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:55.972 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:55.972 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:55.972 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:55.972 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:55.972 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:55.972 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:55.972 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:55.972 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:55.972 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:55.972 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:55.972 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.972 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.231 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:56.231 "name": "raid_bdev1", 00:27:56.231 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:27:56.231 "strip_size_kb": 64, 00:27:56.231 "state": "online", 00:27:56.231 "raid_level": "raid5f", 00:27:56.231 "superblock": true, 00:27:56.231 "num_base_bdevs": 3, 00:27:56.231 "num_base_bdevs_discovered": 3, 00:27:56.231 "num_base_bdevs_operational": 3, 00:27:56.231 "base_bdevs_list": [ 00:27:56.231 { 00:27:56.231 "name": "BaseBdev1", 00:27:56.231 "uuid": "c7f5a0b3-9349-5ea1-b873-bb9641164c89", 00:27:56.231 "is_configured": true, 00:27:56.231 "data_offset": 2048, 00:27:56.231 "data_size": 63488 00:27:56.231 }, 00:27:56.231 { 00:27:56.231 "name": "BaseBdev2", 00:27:56.231 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:27:56.231 "is_configured": true, 00:27:56.231 "data_offset": 2048, 00:27:56.231 "data_size": 63488 00:27:56.231 }, 00:27:56.231 { 00:27:56.231 "name": "BaseBdev3", 00:27:56.231 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:27:56.231 "is_configured": true, 00:27:56.231 
"data_offset": 2048, 00:27:56.231 "data_size": 63488 00:27:56.231 } 00:27:56.231 ] 00:27:56.231 }' 00:27:56.231 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:56.231 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:56.489 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:56.489 15:21:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:56.766 [2024-07-23 15:21:52.040678] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:56.766 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=126976 00:27:56.766 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:56.767 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.046 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:27:57.046 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:27:57.046 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:27:57.046 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:27:57.046 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:57.046 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:57.046 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:57.046 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:57.046 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:57.046 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:57.046 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:57.046 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:57.046 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:57.046 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:57.321 [2024-07-23 15:21:52.484658] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:27:57.321 /dev/nbd0 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:57.321 1+0 records in 00:27:57.321 1+0 records out 00:27:57.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262392 s, 15.6 MB/s 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 128 00:27:57.321 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:27:57.580 496+0 records in 00:27:57.580 496+0 records out 00:27:57.580 65011712 bytes (65 MB, 62 MiB) copied, 0.310953 s, 209 MB/s 00:27:57.580 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:57.580 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:57.580 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:57.580 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:57.580 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:57.580 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:57.580 15:21:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:57.838 [2024-07-23 15:21:53.035988] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:57.838 [2024-07-23 15:21:53.209622] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.838 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.099 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:58.099 "name": "raid_bdev1", 00:27:58.099 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:27:58.099 "strip_size_kb": 64, 00:27:58.099 "state": "online", 00:27:58.099 "raid_level": "raid5f", 00:27:58.099 "superblock": true, 00:27:58.099 "num_base_bdevs": 3, 00:27:58.099 "num_base_bdevs_discovered": 2, 00:27:58.099 "num_base_bdevs_operational": 2, 00:27:58.099 "base_bdevs_list": [ 00:27:58.099 { 00:27:58.099 "name": null, 00:27:58.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.099 "is_configured": false, 00:27:58.099 "data_offset": 2048, 00:27:58.099 "data_size": 63488 00:27:58.099 }, 00:27:58.099 { 00:27:58.099 "name": "BaseBdev2", 00:27:58.099 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:27:58.099 "is_configured": true, 00:27:58.099 "data_offset": 2048, 00:27:58.099 "data_size": 63488 00:27:58.099 }, 00:27:58.099 { 00:27:58.099 "name": "BaseBdev3", 00:27:58.099 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:27:58.099 "is_configured": true, 00:27:58.099 "data_offset": 2048, 00:27:58.099 "data_size": 63488 00:27:58.099 } 00:27:58.099 ] 00:27:58.099 }' 00:27:58.099 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:58.099 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:58.358 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:58.616 [2024-07-23 15:21:53.849816] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:58.616 [2024-07-23 15:21:53.853939] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000251c0 00:27:58.616 [2024-07-23 15:21:53.856643] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:58.616 15:21:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:59.550 15:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:59.550 15:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:59.550 15:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:59.551 15:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:59.551 15:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:59.551 15:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.551 15:21:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.809 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:59.809 "name": "raid_bdev1", 00:27:59.809 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:27:59.809 "strip_size_kb": 64, 00:27:59.809 "state": "online", 00:27:59.809 "raid_level": "raid5f", 00:27:59.809 "superblock": true, 00:27:59.809 "num_base_bdevs": 3, 00:27:59.809 "num_base_bdevs_discovered": 3, 00:27:59.809 "num_base_bdevs_operational": 3, 00:27:59.809 "process": { 00:27:59.809 "type": "rebuild", 00:27:59.809 "target": "spare", 00:27:59.809 "progress": { 00:27:59.809 "blocks": 24576, 00:27:59.809 "percent": 19 00:27:59.809 } 00:27:59.809 }, 00:27:59.809 "base_bdevs_list": [ 00:27:59.809 { 00:27:59.809 "name": "spare", 00:27:59.809 "uuid": "30b36982-7f3d-5186-a012-1eeeab799a48", 00:27:59.809 "is_configured": true, 00:27:59.809 "data_offset": 2048, 00:27:59.809 "data_size": 63488 00:27:59.809 }, 00:27:59.809 { 00:27:59.809 "name": "BaseBdev2", 00:27:59.809 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:27:59.809 "is_configured": true, 00:27:59.809 "data_offset": 2048, 00:27:59.809 "data_size": 63488 00:27:59.809 }, 00:27:59.809 { 00:27:59.809 "name": "BaseBdev3", 00:27:59.809 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:27:59.809 "is_configured": true, 00:27:59.809 "data_offset": 2048, 00:27:59.809 "data_size": 63488 00:27:59.809 } 00:27:59.809 ] 00:27:59.809 }' 00:27:59.809 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:59.809 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:59.809 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:59.809 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:59.809 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:00.067 
[2024-07-23 15:21:55.307535] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:00.067 [2024-07-23 15:21:55.370517] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:00.067 [2024-07-23 15:21:55.370605] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:00.067 [2024-07-23 15:21:55.370625] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:00.067 [2024-07-23 15:21:55.370639] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:00.067 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:28:00.067 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:00.067 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:00.067 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:00.067 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:00.067 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:00.067 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:00.067 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:00.067 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:00.067 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:00.067 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.067 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.325 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:00.325 "name": "raid_bdev1", 00:28:00.325 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:00.325 "strip_size_kb": 64, 00:28:00.325 "state": "online", 00:28:00.325 "raid_level": "raid5f", 00:28:00.325 "superblock": true, 00:28:00.325 "num_base_bdevs": 3, 00:28:00.325 "num_base_bdevs_discovered": 2, 00:28:00.325 "num_base_bdevs_operational": 2, 00:28:00.325 "base_bdevs_list": [ 00:28:00.325 { 00:28:00.325 "name": null, 00:28:00.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.325 "is_configured": false, 00:28:00.325 "data_offset": 2048, 00:28:00.325 "data_size": 63488 00:28:00.325 }, 00:28:00.325 { 00:28:00.325 "name": "BaseBdev2", 00:28:00.325 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:00.325 "is_configured": true, 00:28:00.325 "data_offset": 2048, 00:28:00.325 "data_size": 63488 00:28:00.325 }, 00:28:00.325 { 00:28:00.325 "name": "BaseBdev3", 00:28:00.325 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:00.325 "is_configured": true, 00:28:00.325 "data_offset": 2048, 00:28:00.325 "data_size": 63488 00:28:00.325 } 00:28:00.325 ] 00:28:00.325 }' 00:28:00.325 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:00.325 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.583 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:28:00.583 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:00.583 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:00.583 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:00.583 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:00.583 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.583 15:21:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.841 15:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:00.841 "name": "raid_bdev1", 00:28:00.842 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:00.842 "strip_size_kb": 64, 00:28:00.842 "state": "online", 00:28:00.842 "raid_level": "raid5f", 00:28:00.842 "superblock": true, 00:28:00.842 "num_base_bdevs": 3, 00:28:00.842 "num_base_bdevs_discovered": 2, 00:28:00.842 "num_base_bdevs_operational": 2, 00:28:00.842 "base_bdevs_list": [ 00:28:00.842 { 00:28:00.842 "name": null, 00:28:00.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.842 "is_configured": false, 00:28:00.842 "data_offset": 2048, 00:28:00.842 "data_size": 63488 00:28:00.842 }, 00:28:00.842 { 00:28:00.842 "name": "BaseBdev2", 00:28:00.842 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:00.842 "is_configured": true, 00:28:00.842 "data_offset": 2048, 00:28:00.842 "data_size": 63488 00:28:00.842 }, 00:28:00.842 { 00:28:00.842 "name": "BaseBdev3", 00:28:00.842 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:00.842 "is_configured": true, 00:28:00.842 "data_offset": 2048, 00:28:00.842 "data_size": 63488 00:28:00.842 } 00:28:00.842 ] 00:28:00.842 }' 00:28:00.842 15:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:00.842 15:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:00.842 15:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:00.842 15:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:00.842 15:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:01.100 [2024-07-23 15:21:56.421058] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:01.100 [2024-07-23 15:21:56.425243] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000025290 00:28:01.100 [2024-07-23 15:21:56.427874] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:01.100 15:21:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:02.032 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:02.032 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:02.032 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:02.032 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:28:02.032 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:02.032 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.032 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.289 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:02.289 "name": "raid_bdev1", 00:28:02.289 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:02.289 "strip_size_kb": 64, 00:28:02.289 "state": "online", 00:28:02.289 "raid_level": "raid5f", 00:28:02.289 "superblock": true, 00:28:02.289 "num_base_bdevs": 3, 00:28:02.289 "num_base_bdevs_discovered": 3, 00:28:02.289 "num_base_bdevs_operational": 3, 00:28:02.289 "process": { 00:28:02.289 "type": "rebuild", 00:28:02.289 "target": "spare", 00:28:02.289 "progress": { 00:28:02.289 "blocks": 24576, 00:28:02.289 "percent": 19 00:28:02.289 } 00:28:02.289 }, 00:28:02.289 "base_bdevs_list": [ 00:28:02.289 { 00:28:02.289 "name": "spare", 00:28:02.290 "uuid": "30b36982-7f3d-5186-a012-1eeeab799a48", 00:28:02.290 "is_configured": true, 00:28:02.290 "data_offset": 2048, 00:28:02.290 "data_size": 63488 00:28:02.290 }, 00:28:02.290 { 00:28:02.290 "name": "BaseBdev2", 00:28:02.290 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:02.290 "is_configured": true, 00:28:02.290 "data_offset": 2048, 00:28:02.290 "data_size": 63488 00:28:02.290 }, 00:28:02.290 { 00:28:02.290 "name": "BaseBdev3", 00:28:02.290 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:02.290 "is_configured": true, 00:28:02.290 "data_offset": 2048, 00:28:02.290 "data_size": 63488 00:28:02.290 } 00:28:02.290 ] 00:28:02.290 }' 00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:28:02.290 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=837 00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 
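[annotation] The '[' = false ']' test and the "line 665: [: =: unary operator expected" message above come from an unquoted, empty variable in that comparison; the test evaluates false and the run continues, though quoting the operand ('[' "$var" = false ']') would avoid the noise. From here the harness polls the rebuild once per second via verify_raid_bdev_process, pulling the process block out of bdev_raid_get_bdevs with jq. A rough sketch of that loop, assuming the same socket and bdev name (illustrative, not the script's literal code):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
end=$((SECONDS + 60))

# Keep polling while a rebuild targeting the spare is still reported.
while (( SECONDS < end )); do
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type   // "none"' <<< "$info") == rebuild ]] || break
    [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]] || break
    echo "rebuild progress: $(jq -r '.process.progress.percent' <<< "$info")%"
    sleep 1
done

The progress blocks that follow (24576, 28672, 53248, ... blocks) are successive samples of exactly this query.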
00:28:02.290 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:02.547 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.547 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.547 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:02.547 "name": "raid_bdev1", 00:28:02.547 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:02.547 "strip_size_kb": 64, 00:28:02.547 "state": "online", 00:28:02.547 "raid_level": "raid5f", 00:28:02.547 "superblock": true, 00:28:02.547 "num_base_bdevs": 3, 00:28:02.547 "num_base_bdevs_discovered": 3, 00:28:02.547 "num_base_bdevs_operational": 3, 00:28:02.547 "process": { 00:28:02.547 "type": "rebuild", 00:28:02.547 "target": "spare", 00:28:02.547 "progress": { 00:28:02.547 "blocks": 28672, 00:28:02.547 "percent": 22 00:28:02.547 } 00:28:02.547 }, 00:28:02.547 "base_bdevs_list": [ 00:28:02.547 { 00:28:02.547 "name": "spare", 00:28:02.547 "uuid": "30b36982-7f3d-5186-a012-1eeeab799a48", 00:28:02.547 "is_configured": true, 00:28:02.547 "data_offset": 2048, 00:28:02.547 "data_size": 63488 00:28:02.547 }, 00:28:02.547 { 00:28:02.547 "name": "BaseBdev2", 00:28:02.547 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:02.547 "is_configured": true, 00:28:02.547 "data_offset": 2048, 00:28:02.547 "data_size": 63488 00:28:02.547 }, 00:28:02.547 { 00:28:02.547 "name": "BaseBdev3", 00:28:02.547 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:02.547 "is_configured": true, 00:28:02.547 "data_offset": 2048, 00:28:02.547 "data_size": 63488 00:28:02.547 } 00:28:02.547 ] 00:28:02.547 }' 00:28:02.547 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:02.547 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:02.547 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:02.547 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:02.547 15:21:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:03.921 15:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:03.921 15:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:03.921 15:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:03.921 15:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:03.921 15:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:03.921 15:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:03.921 15:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.921 15:21:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.921 15:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:03.921 "name": "raid_bdev1", 00:28:03.921 "uuid": 
"4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:03.921 "strip_size_kb": 64, 00:28:03.921 "state": "online", 00:28:03.921 "raid_level": "raid5f", 00:28:03.921 "superblock": true, 00:28:03.921 "num_base_bdevs": 3, 00:28:03.921 "num_base_bdevs_discovered": 3, 00:28:03.921 "num_base_bdevs_operational": 3, 00:28:03.921 "process": { 00:28:03.921 "type": "rebuild", 00:28:03.921 "target": "spare", 00:28:03.921 "progress": { 00:28:03.921 "blocks": 53248, 00:28:03.921 "percent": 41 00:28:03.921 } 00:28:03.921 }, 00:28:03.921 "base_bdevs_list": [ 00:28:03.921 { 00:28:03.921 "name": "spare", 00:28:03.921 "uuid": "30b36982-7f3d-5186-a012-1eeeab799a48", 00:28:03.921 "is_configured": true, 00:28:03.921 "data_offset": 2048, 00:28:03.921 "data_size": 63488 00:28:03.921 }, 00:28:03.921 { 00:28:03.921 "name": "BaseBdev2", 00:28:03.921 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:03.921 "is_configured": true, 00:28:03.921 "data_offset": 2048, 00:28:03.921 "data_size": 63488 00:28:03.921 }, 00:28:03.921 { 00:28:03.921 "name": "BaseBdev3", 00:28:03.921 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:03.921 "is_configured": true, 00:28:03.921 "data_offset": 2048, 00:28:03.921 "data_size": 63488 00:28:03.921 } 00:28:03.921 ] 00:28:03.921 }' 00:28:03.921 15:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:03.921 15:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:03.921 15:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:03.921 15:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:03.921 15:21:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:04.854 15:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:04.854 15:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:04.854 15:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:04.854 15:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:04.854 15:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:04.854 15:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:04.854 15:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.854 15:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.143 15:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:05.143 "name": "raid_bdev1", 00:28:05.143 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:05.143 "strip_size_kb": 64, 00:28:05.144 "state": "online", 00:28:05.144 "raid_level": "raid5f", 00:28:05.144 "superblock": true, 00:28:05.144 "num_base_bdevs": 3, 00:28:05.144 "num_base_bdevs_discovered": 3, 00:28:05.144 "num_base_bdevs_operational": 3, 00:28:05.144 "process": { 00:28:05.144 "type": "rebuild", 00:28:05.144 "target": "spare", 00:28:05.144 "progress": { 00:28:05.144 "blocks": 77824, 00:28:05.144 "percent": 61 00:28:05.144 } 00:28:05.144 }, 00:28:05.144 "base_bdevs_list": [ 00:28:05.144 { 00:28:05.144 "name": "spare", 
00:28:05.144 "uuid": "30b36982-7f3d-5186-a012-1eeeab799a48", 00:28:05.144 "is_configured": true, 00:28:05.144 "data_offset": 2048, 00:28:05.144 "data_size": 63488 00:28:05.144 }, 00:28:05.144 { 00:28:05.144 "name": "BaseBdev2", 00:28:05.144 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:05.144 "is_configured": true, 00:28:05.144 "data_offset": 2048, 00:28:05.144 "data_size": 63488 00:28:05.144 }, 00:28:05.144 { 00:28:05.144 "name": "BaseBdev3", 00:28:05.144 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:05.144 "is_configured": true, 00:28:05.144 "data_offset": 2048, 00:28:05.144 "data_size": 63488 00:28:05.144 } 00:28:05.144 ] 00:28:05.144 }' 00:28:05.144 15:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:05.144 15:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:05.144 15:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:05.144 15:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:05.144 15:22:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:06.081 15:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:06.081 15:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:06.081 15:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:06.081 15:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:06.081 15:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:06.081 15:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:06.081 15:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.081 15:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.339 15:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:06.339 "name": "raid_bdev1", 00:28:06.339 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:06.339 "strip_size_kb": 64, 00:28:06.339 "state": "online", 00:28:06.339 "raid_level": "raid5f", 00:28:06.339 "superblock": true, 00:28:06.339 "num_base_bdevs": 3, 00:28:06.339 "num_base_bdevs_discovered": 3, 00:28:06.339 "num_base_bdevs_operational": 3, 00:28:06.339 "process": { 00:28:06.339 "type": "rebuild", 00:28:06.339 "target": "spare", 00:28:06.339 "progress": { 00:28:06.339 "blocks": 104448, 00:28:06.339 "percent": 82 00:28:06.339 } 00:28:06.339 }, 00:28:06.339 "base_bdevs_list": [ 00:28:06.339 { 00:28:06.339 "name": "spare", 00:28:06.339 "uuid": "30b36982-7f3d-5186-a012-1eeeab799a48", 00:28:06.339 "is_configured": true, 00:28:06.339 "data_offset": 2048, 00:28:06.339 "data_size": 63488 00:28:06.339 }, 00:28:06.339 { 00:28:06.339 "name": "BaseBdev2", 00:28:06.339 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:06.339 "is_configured": true, 00:28:06.339 "data_offset": 2048, 00:28:06.339 "data_size": 63488 00:28:06.339 }, 00:28:06.339 { 00:28:06.339 "name": "BaseBdev3", 00:28:06.339 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:06.339 "is_configured": true, 00:28:06.339 "data_offset": 2048, 
00:28:06.339 "data_size": 63488 00:28:06.339 } 00:28:06.339 ] 00:28:06.339 }' 00:28:06.339 15:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:06.339 15:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:06.339 15:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:06.339 15:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:06.339 15:22:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:07.274 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:07.274 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:07.274 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:07.274 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:07.274 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:07.274 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:07.274 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.274 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.274 [2024-07-23 15:22:02.683232] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:07.274 [2024-07-23 15:22:02.683325] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:07.274 [2024-07-23 15:22:02.683470] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:07.532 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:07.532 "name": "raid_bdev1", 00:28:07.532 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:07.532 "strip_size_kb": 64, 00:28:07.532 "state": "online", 00:28:07.532 "raid_level": "raid5f", 00:28:07.532 "superblock": true, 00:28:07.532 "num_base_bdevs": 3, 00:28:07.532 "num_base_bdevs_discovered": 3, 00:28:07.532 "num_base_bdevs_operational": 3, 00:28:07.532 "base_bdevs_list": [ 00:28:07.532 { 00:28:07.532 "name": "spare", 00:28:07.532 "uuid": "30b36982-7f3d-5186-a012-1eeeab799a48", 00:28:07.532 "is_configured": true, 00:28:07.532 "data_offset": 2048, 00:28:07.532 "data_size": 63488 00:28:07.532 }, 00:28:07.532 { 00:28:07.532 "name": "BaseBdev2", 00:28:07.532 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:07.532 "is_configured": true, 00:28:07.532 "data_offset": 2048, 00:28:07.532 "data_size": 63488 00:28:07.532 }, 00:28:07.532 { 00:28:07.532 "name": "BaseBdev3", 00:28:07.532 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:07.532 "is_configured": true, 00:28:07.532 "data_offset": 2048, 00:28:07.532 "data_size": 63488 00:28:07.532 } 00:28:07.532 ] 00:28:07.532 }' 00:28:07.532 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:07.532 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:07.532 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:28:07.532 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:07.532 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:28:07.532 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:07.532 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:07.532 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:07.532 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:07.532 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:07.532 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.532 15:22:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.790 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:07.790 "name": "raid_bdev1", 00:28:07.790 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:07.790 "strip_size_kb": 64, 00:28:07.790 "state": "online", 00:28:07.790 "raid_level": "raid5f", 00:28:07.790 "superblock": true, 00:28:07.790 "num_base_bdevs": 3, 00:28:07.790 "num_base_bdevs_discovered": 3, 00:28:07.790 "num_base_bdevs_operational": 3, 00:28:07.790 "base_bdevs_list": [ 00:28:07.790 { 00:28:07.790 "name": "spare", 00:28:07.790 "uuid": "30b36982-7f3d-5186-a012-1eeeab799a48", 00:28:07.790 "is_configured": true, 00:28:07.790 "data_offset": 2048, 00:28:07.790 "data_size": 63488 00:28:07.790 }, 00:28:07.790 { 00:28:07.790 "name": "BaseBdev2", 00:28:07.790 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:07.790 "is_configured": true, 00:28:07.790 "data_offset": 2048, 00:28:07.790 "data_size": 63488 00:28:07.790 }, 00:28:07.790 { 00:28:07.790 "name": "BaseBdev3", 00:28:07.790 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:07.790 "is_configured": true, 00:28:07.790 "data_offset": 2048, 00:28:07.790 "data_size": 63488 00:28:07.790 } 00:28:07.790 ] 00:28:07.790 }' 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.791 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.051 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:08.051 "name": "raid_bdev1", 00:28:08.051 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:08.051 "strip_size_kb": 64, 00:28:08.051 "state": "online", 00:28:08.051 "raid_level": "raid5f", 00:28:08.051 "superblock": true, 00:28:08.051 "num_base_bdevs": 3, 00:28:08.051 "num_base_bdevs_discovered": 3, 00:28:08.051 "num_base_bdevs_operational": 3, 00:28:08.051 "base_bdevs_list": [ 00:28:08.051 { 00:28:08.051 "name": "spare", 00:28:08.051 "uuid": "30b36982-7f3d-5186-a012-1eeeab799a48", 00:28:08.051 "is_configured": true, 00:28:08.051 "data_offset": 2048, 00:28:08.051 "data_size": 63488 00:28:08.051 }, 00:28:08.051 { 00:28:08.051 "name": "BaseBdev2", 00:28:08.051 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:08.051 "is_configured": true, 00:28:08.051 "data_offset": 2048, 00:28:08.051 "data_size": 63488 00:28:08.051 }, 00:28:08.051 { 00:28:08.051 "name": "BaseBdev3", 00:28:08.051 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:08.051 "is_configured": true, 00:28:08.051 "data_offset": 2048, 00:28:08.051 "data_size": 63488 00:28:08.051 } 00:28:08.051 ] 00:28:08.051 }' 00:28:08.051 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:08.051 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:08.616 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:08.616 [2024-07-23 15:22:03.965895] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:08.616 [2024-07-23 15:22:03.966154] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:08.616 [2024-07-23 15:22:03.966366] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:08.616 [2024-07-23 15:22:03.966548] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:08.616 [2024-07-23 15:22:03.966568] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state offline 00:28:08.616 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.616 15:22:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:28:08.874 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:08.874 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:08.874 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:08.874 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- 
# nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:08.874 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:08.874 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:08.874 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:08.874 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:08.874 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:08.874 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:08.874 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:08.874 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:08.874 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:09.133 /dev/nbd0 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:09.133 1+0 records in 00:28:09.133 1+0 records out 00:28:09.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216011 s, 19.0 MB/s 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:09.133 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:09.391 /dev/nbd1 
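[annotation] With BaseBdev1 and the rebuilt spare now both exported over NBD (/dev/nbd0 and /dev/nbd1), the final data check compares the two devices from byte 1048576 onward, i.e. past the data_offset of 2048 512-byte blocks reported for every base bdev, so only the user data area has to match. A sketch of that comparison, assuming both NBD devices are connected as above:

# 2048 blocks * 512 bytes = 1 MiB of superblock/metadata at the front
# of each base bdev; compare only the data region behind it.
offset=$((2048 * 512))

if cmp -i "$offset" /dev/nbd0 /dev/nbd1; then
    echo "rebuilt spare matches the original BaseBdev1 data"
else
    echo "data mismatch after rebuild" >&2
    exit 1
fi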
00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:09.391 1+0 records in 00:28:09.391 1+0 records out 00:28:09.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268716 s, 15.2 MB/s 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:09.391 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:09.648 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:09.648 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:09.648 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:09.648 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:09.648 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:09.648 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:09.648 15:22:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:09.907 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:09.907 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:09.907 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:09.907 15:22:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:09.907 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:09.907 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:09.907 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:09.907 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:09.907 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:09.907 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:10.287 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:10.287 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:10.287 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:10.287 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:10.287 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:10.287 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:10.287 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:10.287 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:10.287 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:28:10.287 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:10.287 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:10.545 [2024-07-23 15:22:05.790847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:10.545 [2024-07-23 15:22:05.790927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:10.545 [2024-07-23 15:22:05.790956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:28:10.545 [2024-07-23 15:22:05.790969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.545 [2024-07-23 15:22:05.793501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.545 [2024-07-23 15:22:05.793551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:10.545 [2024-07-23 15:22:05.793636] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:10.545 [2024-07-23 15:22:05.793684] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:10.545 [2024-07-23 15:22:05.793972] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:10.545 [2024-07-23 15:22:05.794180] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:10.545 spare 00:28:10.545 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:10.545 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:28:10.545 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:10.545 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:10.545 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:10.545 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:10.545 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:10.545 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:10.545 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:10.545 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:10.545 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.545 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.545 [2024-07-23 15:22:05.894383] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009f80 00:28:10.545 [2024-07-23 15:22:05.894634] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:28:10.545 [2024-07-23 15:22:05.894835] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000043940 00:28:10.545 [2024-07-23 15:22:05.895583] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009f80 00:28:10.545 [2024-07-23 15:22:05.895707] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009f80 00:28:10.545 [2024-07-23 15:22:05.895894] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:10.803 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:10.803 "name": "raid_bdev1", 00:28:10.803 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:10.803 "strip_size_kb": 64, 00:28:10.803 "state": "online", 00:28:10.803 "raid_level": "raid5f", 00:28:10.803 "superblock": true, 00:28:10.803 "num_base_bdevs": 3, 00:28:10.803 "num_base_bdevs_discovered": 3, 00:28:10.803 "num_base_bdevs_operational": 3, 00:28:10.803 "base_bdevs_list": [ 00:28:10.803 { 00:28:10.803 "name": "spare", 00:28:10.803 "uuid": "30b36982-7f3d-5186-a012-1eeeab799a48", 00:28:10.803 "is_configured": true, 00:28:10.803 "data_offset": 2048, 00:28:10.803 "data_size": 63488 00:28:10.803 }, 00:28:10.803 { 00:28:10.803 "name": "BaseBdev2", 00:28:10.803 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:10.803 "is_configured": true, 00:28:10.803 "data_offset": 2048, 00:28:10.803 "data_size": 63488 00:28:10.803 }, 00:28:10.803 { 00:28:10.803 "name": "BaseBdev3", 00:28:10.803 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:10.803 "is_configured": true, 00:28:10.803 "data_offset": 2048, 00:28:10.803 "data_size": 63488 00:28:10.803 } 00:28:10.803 ] 00:28:10.803 }' 00:28:10.803 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:10.803 15:22:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:11.061 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:11.061 15:22:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:11.061 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:11.061 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:11.061 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:11.061 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.061 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:11.320 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:11.320 "name": "raid_bdev1", 00:28:11.320 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:11.320 "strip_size_kb": 64, 00:28:11.320 "state": "online", 00:28:11.320 "raid_level": "raid5f", 00:28:11.320 "superblock": true, 00:28:11.320 "num_base_bdevs": 3, 00:28:11.320 "num_base_bdevs_discovered": 3, 00:28:11.320 "num_base_bdevs_operational": 3, 00:28:11.320 "base_bdevs_list": [ 00:28:11.320 { 00:28:11.320 "name": "spare", 00:28:11.320 "uuid": "30b36982-7f3d-5186-a012-1eeeab799a48", 00:28:11.320 "is_configured": true, 00:28:11.320 "data_offset": 2048, 00:28:11.320 "data_size": 63488 00:28:11.320 }, 00:28:11.320 { 00:28:11.320 "name": "BaseBdev2", 00:28:11.320 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:11.320 "is_configured": true, 00:28:11.320 "data_offset": 2048, 00:28:11.320 "data_size": 63488 00:28:11.320 }, 00:28:11.320 { 00:28:11.320 "name": "BaseBdev3", 00:28:11.320 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:11.320 "is_configured": true, 00:28:11.320 "data_offset": 2048, 00:28:11.320 "data_size": 63488 00:28:11.320 } 00:28:11.320 ] 00:28:11.320 }' 00:28:11.320 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:11.320 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:11.320 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:11.320 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:11.320 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:11.320 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.320 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:28:11.320 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:11.578 [2024-07-23 15:22:06.952216] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:11.578 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:28:11.578 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:11.578 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:11.578 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid5f 00:28:11.578 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:11.578 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:11.578 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:11.578 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:11.578 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:11.578 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:11.578 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.578 15:22:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:11.837 15:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:11.837 "name": "raid_bdev1", 00:28:11.837 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:11.837 "strip_size_kb": 64, 00:28:11.837 "state": "online", 00:28:11.837 "raid_level": "raid5f", 00:28:11.837 "superblock": true, 00:28:11.837 "num_base_bdevs": 3, 00:28:11.837 "num_base_bdevs_discovered": 2, 00:28:11.837 "num_base_bdevs_operational": 2, 00:28:11.837 "base_bdevs_list": [ 00:28:11.837 { 00:28:11.837 "name": null, 00:28:11.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:11.837 "is_configured": false, 00:28:11.837 "data_offset": 2048, 00:28:11.837 "data_size": 63488 00:28:11.837 }, 00:28:11.837 { 00:28:11.837 "name": "BaseBdev2", 00:28:11.837 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:11.837 "is_configured": true, 00:28:11.837 "data_offset": 2048, 00:28:11.837 "data_size": 63488 00:28:11.837 }, 00:28:11.837 { 00:28:11.837 "name": "BaseBdev3", 00:28:11.837 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:11.837 "is_configured": true, 00:28:11.837 "data_offset": 2048, 00:28:11.837 "data_size": 63488 00:28:11.837 } 00:28:11.837 ] 00:28:11.837 }' 00:28:11.837 15:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:11.837 15:22:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:12.095 15:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:12.354 [2024-07-23 15:22:07.664391] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:12.354 [2024-07-23 15:22:07.664814] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:12.354 [2024-07-23 15:22:07.664960] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
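The rebuild verified by the checks that follow is triggered by the remove/re-add cycle traced above: the 'spare' passthru bdev is torn down and re-created over spare_delay, and when the raid module re-examines it the on-disk superblock carries a stale sequence number (4 against the array's 5), so the bdev is re-added and a raid5f rebuild is started with 'spare' as the target. Condensed into the RPC calls shown in this trace (socket path, bdev names and jq filter taken from the log; the sleep/poll step is omitted), the sequence is approximately:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # drop one member; the array stays online but degraded (2 of 3 base bdevs)
    $rpc -s $sock bdev_raid_remove_base_bdev spare
    # re-add it; the stale superblock makes the raid module kick off a rebuild
    $rpc -s $sock bdev_raid_add_base_bdev raid_bdev1 spare
    # the same query verify_raid_bdev_process uses to confirm the rebuild is running
    $rpc -s $sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'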
00:28:12.354 [2024-07-23 15:22:07.665026] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:12.354 [2024-07-23 15:22:07.668827] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000043a10 00:28:12.354 [2024-07-23 15:22:07.671370] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:12.354 15:22:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:28:13.289 15:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:13.289 15:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:13.289 15:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:13.289 15:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:13.289 15:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:13.289 15:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.289 15:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.548 15:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:13.548 "name": "raid_bdev1", 00:28:13.548 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:13.548 "strip_size_kb": 64, 00:28:13.548 "state": "online", 00:28:13.548 "raid_level": "raid5f", 00:28:13.548 "superblock": true, 00:28:13.548 "num_base_bdevs": 3, 00:28:13.548 "num_base_bdevs_discovered": 3, 00:28:13.548 "num_base_bdevs_operational": 3, 00:28:13.548 "process": { 00:28:13.548 "type": "rebuild", 00:28:13.548 "target": "spare", 00:28:13.548 "progress": { 00:28:13.548 "blocks": 24576, 00:28:13.548 "percent": 19 00:28:13.548 } 00:28:13.548 }, 00:28:13.548 "base_bdevs_list": [ 00:28:13.548 { 00:28:13.548 "name": "spare", 00:28:13.548 "uuid": "30b36982-7f3d-5186-a012-1eeeab799a48", 00:28:13.548 "is_configured": true, 00:28:13.548 "data_offset": 2048, 00:28:13.548 "data_size": 63488 00:28:13.548 }, 00:28:13.548 { 00:28:13.548 "name": "BaseBdev2", 00:28:13.548 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:13.549 "is_configured": true, 00:28:13.549 "data_offset": 2048, 00:28:13.549 "data_size": 63488 00:28:13.549 }, 00:28:13.549 { 00:28:13.549 "name": "BaseBdev3", 00:28:13.549 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:13.549 "is_configured": true, 00:28:13.549 "data_offset": 2048, 00:28:13.549 "data_size": 63488 00:28:13.549 } 00:28:13.549 ] 00:28:13.549 }' 00:28:13.549 15:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:13.549 15:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:13.549 15:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:13.549 15:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:13.549 15:22:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:13.807 [2024-07-23 15:22:09.114183] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:13.807 [2024-07-23 
15:22:09.184813] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:13.807 [2024-07-23 15:22:09.184886] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:13.807 [2024-07-23 15:22:09.184908] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:13.807 [2024-07-23 15:22:09.184917] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:13.807 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:28:13.807 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:13.807 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:13.807 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:13.807 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:13.807 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:13.807 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:13.807 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:13.807 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:13.807 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:13.807 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.807 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.065 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:14.065 "name": "raid_bdev1", 00:28:14.066 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:14.066 "strip_size_kb": 64, 00:28:14.066 "state": "online", 00:28:14.066 "raid_level": "raid5f", 00:28:14.066 "superblock": true, 00:28:14.066 "num_base_bdevs": 3, 00:28:14.066 "num_base_bdevs_discovered": 2, 00:28:14.066 "num_base_bdevs_operational": 2, 00:28:14.066 "base_bdevs_list": [ 00:28:14.066 { 00:28:14.066 "name": null, 00:28:14.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.066 "is_configured": false, 00:28:14.066 "data_offset": 2048, 00:28:14.066 "data_size": 63488 00:28:14.066 }, 00:28:14.066 { 00:28:14.066 "name": "BaseBdev2", 00:28:14.066 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:14.066 "is_configured": true, 00:28:14.066 "data_offset": 2048, 00:28:14.066 "data_size": 63488 00:28:14.066 }, 00:28:14.066 { 00:28:14.066 "name": "BaseBdev3", 00:28:14.066 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:14.066 "is_configured": true, 00:28:14.066 "data_offset": 2048, 00:28:14.066 "data_size": 63488 00:28:14.066 } 00:28:14.066 ] 00:28:14.066 }' 00:28:14.066 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:14.066 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.632 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:14.632 
[2024-07-23 15:22:09.946817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:14.632 [2024-07-23 15:22:09.947086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.632 [2024-07-23 15:22:09.947156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:28:14.632 [2024-07-23 15:22:09.947279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:14.632 [2024-07-23 15:22:09.947756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.632 [2024-07-23 15:22:09.947904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:14.632 [2024-07-23 15:22:09.948009] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:14.633 [2024-07-23 15:22:09.948023] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:14.633 [2024-07-23 15:22:09.948039] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:14.633 [2024-07-23 15:22:09.948070] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:14.633 [2024-07-23 15:22:09.952004] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000043ae0 00:28:14.633 spare 00:28:14.633 [2024-07-23 15:22:09.954568] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:14.633 15:22:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:28:15.565 15:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:15.565 15:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:15.565 15:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:15.565 15:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:15.565 15:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:15.565 15:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.565 15:22:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.823 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:15.823 "name": "raid_bdev1", 00:28:15.823 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:15.823 "strip_size_kb": 64, 00:28:15.823 "state": "online", 00:28:15.823 "raid_level": "raid5f", 00:28:15.823 "superblock": true, 00:28:15.823 "num_base_bdevs": 3, 00:28:15.823 "num_base_bdevs_discovered": 3, 00:28:15.823 "num_base_bdevs_operational": 3, 00:28:15.823 "process": { 00:28:15.823 "type": "rebuild", 00:28:15.823 "target": "spare", 00:28:15.823 "progress": { 00:28:15.823 "blocks": 24576, 00:28:15.823 "percent": 19 00:28:15.823 } 00:28:15.823 }, 00:28:15.823 "base_bdevs_list": [ 00:28:15.823 { 00:28:15.823 "name": "spare", 00:28:15.823 "uuid": "30b36982-7f3d-5186-a012-1eeeab799a48", 00:28:15.823 "is_configured": true, 00:28:15.823 "data_offset": 2048, 00:28:15.823 "data_size": 63488 00:28:15.823 }, 00:28:15.823 { 00:28:15.823 "name": "BaseBdev2", 00:28:15.823 "uuid": 
"ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:15.823 "is_configured": true, 00:28:15.823 "data_offset": 2048, 00:28:15.823 "data_size": 63488 00:28:15.823 }, 00:28:15.823 { 00:28:15.823 "name": "BaseBdev3", 00:28:15.823 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:15.823 "is_configured": true, 00:28:15.823 "data_offset": 2048, 00:28:15.823 "data_size": 63488 00:28:15.823 } 00:28:15.823 ] 00:28:15.823 }' 00:28:15.823 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:15.823 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:15.823 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:15.823 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:15.823 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:16.080 [2024-07-23 15:22:11.481105] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:16.338 [2024-07-23 15:22:11.569237] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:16.338 [2024-07-23 15:22:11.569448] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.338 [2024-07-23 15:22:11.569543] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:16.338 [2024-07-23 15:22:11.569586] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:16.338 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:28:16.338 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:16.338 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:16.338 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:16.338 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:16.338 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:16.338 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:16.338 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:16.338 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:16.338 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:16.338 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.338 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.596 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:16.596 "name": "raid_bdev1", 00:28:16.596 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:16.596 "strip_size_kb": 64, 00:28:16.596 "state": "online", 00:28:16.596 "raid_level": "raid5f", 00:28:16.596 "superblock": true, 00:28:16.596 "num_base_bdevs": 3, 00:28:16.596 "num_base_bdevs_discovered": 2, 00:28:16.596 
"num_base_bdevs_operational": 2, 00:28:16.596 "base_bdevs_list": [ 00:28:16.596 { 00:28:16.596 "name": null, 00:28:16.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.596 "is_configured": false, 00:28:16.596 "data_offset": 2048, 00:28:16.596 "data_size": 63488 00:28:16.596 }, 00:28:16.596 { 00:28:16.596 "name": "BaseBdev2", 00:28:16.596 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:16.596 "is_configured": true, 00:28:16.596 "data_offset": 2048, 00:28:16.596 "data_size": 63488 00:28:16.596 }, 00:28:16.596 { 00:28:16.596 "name": "BaseBdev3", 00:28:16.596 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:16.596 "is_configured": true, 00:28:16.596 "data_offset": 2048, 00:28:16.596 "data_size": 63488 00:28:16.596 } 00:28:16.596 ] 00:28:16.596 }' 00:28:16.596 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:16.596 15:22:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.854 15:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:16.854 15:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:16.854 15:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:16.854 15:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:16.854 15:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:16.854 15:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.854 15:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.112 15:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:17.112 "name": "raid_bdev1", 00:28:17.112 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:17.112 "strip_size_kb": 64, 00:28:17.112 "state": "online", 00:28:17.112 "raid_level": "raid5f", 00:28:17.112 "superblock": true, 00:28:17.112 "num_base_bdevs": 3, 00:28:17.112 "num_base_bdevs_discovered": 2, 00:28:17.112 "num_base_bdevs_operational": 2, 00:28:17.112 "base_bdevs_list": [ 00:28:17.112 { 00:28:17.112 "name": null, 00:28:17.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.112 "is_configured": false, 00:28:17.112 "data_offset": 2048, 00:28:17.112 "data_size": 63488 00:28:17.112 }, 00:28:17.112 { 00:28:17.112 "name": "BaseBdev2", 00:28:17.112 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:17.112 "is_configured": true, 00:28:17.112 "data_offset": 2048, 00:28:17.112 "data_size": 63488 00:28:17.112 }, 00:28:17.112 { 00:28:17.112 "name": "BaseBdev3", 00:28:17.112 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:17.112 "is_configured": true, 00:28:17.112 "data_offset": 2048, 00:28:17.112 "data_size": 63488 00:28:17.112 } 00:28:17.112 ] 00:28:17.112 }' 00:28:17.112 15:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:17.112 15:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:17.112 15:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:17.112 15:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:17.112 15:22:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:17.371 15:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:17.371 [2024-07-23 15:22:12.783700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:17.371 [2024-07-23 15:22:12.784005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:17.371 [2024-07-23 15:22:12.784050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:28:17.371 [2024-07-23 15:22:12.784068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:17.371 [2024-07-23 15:22:12.784473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:17.371 [2024-07-23 15:22:12.784496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:17.371 [2024-07-23 15:22:12.784571] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:17.371 [2024-07-23 15:22:12.784588] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:17.371 [2024-07-23 15:22:12.784598] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:17.371 BaseBdev1 00:28:17.629 15:22:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:28:18.562 15:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:28:18.562 15:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:18.562 15:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:18.562 15:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:18.562 15:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:18.562 15:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:18.562 15:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:18.562 15:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:18.562 15:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:18.562 15:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:18.562 15:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.562 15:22:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.819 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:18.819 "name": "raid_bdev1", 00:28:18.819 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:18.819 "strip_size_kb": 64, 00:28:18.819 "state": "online", 00:28:18.819 "raid_level": "raid5f", 00:28:18.819 "superblock": true, 00:28:18.819 "num_base_bdevs": 3, 00:28:18.819 "num_base_bdevs_discovered": 2, 00:28:18.819 
"num_base_bdevs_operational": 2, 00:28:18.820 "base_bdevs_list": [ 00:28:18.820 { 00:28:18.820 "name": null, 00:28:18.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:18.820 "is_configured": false, 00:28:18.820 "data_offset": 2048, 00:28:18.820 "data_size": 63488 00:28:18.820 }, 00:28:18.820 { 00:28:18.820 "name": "BaseBdev2", 00:28:18.820 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:18.820 "is_configured": true, 00:28:18.820 "data_offset": 2048, 00:28:18.820 "data_size": 63488 00:28:18.820 }, 00:28:18.820 { 00:28:18.820 "name": "BaseBdev3", 00:28:18.820 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:18.820 "is_configured": true, 00:28:18.820 "data_offset": 2048, 00:28:18.820 "data_size": 63488 00:28:18.820 } 00:28:18.820 ] 00:28:18.820 }' 00:28:18.820 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:18.820 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.076 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:19.076 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:19.076 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:19.076 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:19.076 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:19.076 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.076 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.076 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:19.076 "name": "raid_bdev1", 00:28:19.076 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:19.076 "strip_size_kb": 64, 00:28:19.076 "state": "online", 00:28:19.076 "raid_level": "raid5f", 00:28:19.076 "superblock": true, 00:28:19.076 "num_base_bdevs": 3, 00:28:19.076 "num_base_bdevs_discovered": 2, 00:28:19.076 "num_base_bdevs_operational": 2, 00:28:19.076 "base_bdevs_list": [ 00:28:19.076 { 00:28:19.076 "name": null, 00:28:19.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.076 "is_configured": false, 00:28:19.076 "data_offset": 2048, 00:28:19.076 "data_size": 63488 00:28:19.076 }, 00:28:19.076 { 00:28:19.076 "name": "BaseBdev2", 00:28:19.076 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:19.076 "is_configured": true, 00:28:19.076 "data_offset": 2048, 00:28:19.076 "data_size": 63488 00:28:19.076 }, 00:28:19.076 { 00:28:19.076 "name": "BaseBdev3", 00:28:19.076 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:19.076 "is_configured": true, 00:28:19.076 "data_offset": 2048, 00:28:19.076 "data_size": 63488 00:28:19.076 } 00:28:19.076 ] 00:28:19.076 }' 00:28:19.076 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:19.076 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:19.076 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:19.336 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:19.336 15:22:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:19.336 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:28:19.336 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:19.336 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:19.336 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:19.336 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:19.336 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:19.336 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:19.336 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:19.336 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:19.336 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:19.336 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:19.634 [2024-07-23 15:22:14.768223] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:19.634 [2024-07-23 15:22:14.768651] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:19.634 [2024-07-23 15:22:14.768691] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:19.634 request: 00:28:19.634 { 00:28:19.634 "base_bdev": "BaseBdev1", 00:28:19.634 "raid_bdev": "raid_bdev1", 00:28:19.634 "method": "bdev_raid_add_base_bdev", 00:28:19.634 "req_id": 1 00:28:19.634 } 00:28:19.634 Got JSON-RPC error response 00:28:19.634 response: 00:28:19.634 { 00:28:19.634 "code": -22, 00:28:19.634 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:19.634 } 00:28:19.634 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:28:19.634 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:19.634 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:19.634 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:19.634 15:22:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:28:20.568 15:22:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:28:20.568 15:22:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:20.568 15:22:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:28:20.568 15:22:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:20.568 15:22:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:20.568 15:22:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:20.568 15:22:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:20.568 15:22:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:20.568 15:22:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:20.568 15:22:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:20.568 15:22:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.568 15:22:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.826 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:20.826 "name": "raid_bdev1", 00:28:20.826 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:20.826 "strip_size_kb": 64, 00:28:20.826 "state": "online", 00:28:20.826 "raid_level": "raid5f", 00:28:20.826 "superblock": true, 00:28:20.826 "num_base_bdevs": 3, 00:28:20.826 "num_base_bdevs_discovered": 2, 00:28:20.826 "num_base_bdevs_operational": 2, 00:28:20.826 "base_bdevs_list": [ 00:28:20.826 { 00:28:20.826 "name": null, 00:28:20.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:20.826 "is_configured": false, 00:28:20.826 "data_offset": 2048, 00:28:20.826 "data_size": 63488 00:28:20.826 }, 00:28:20.826 { 00:28:20.826 "name": "BaseBdev2", 00:28:20.826 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:20.826 "is_configured": true, 00:28:20.826 "data_offset": 2048, 00:28:20.826 "data_size": 63488 00:28:20.826 }, 00:28:20.826 { 00:28:20.826 "name": "BaseBdev3", 00:28:20.826 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:20.826 "is_configured": true, 00:28:20.826 "data_offset": 2048, 00:28:20.826 "data_size": 63488 00:28:20.826 } 00:28:20.826 ] 00:28:20.826 }' 00:28:20.826 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:20.826 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.083 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:21.083 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:21.083 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:21.083 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:21.083 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:21.083 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.083 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:21.341 "name": "raid_bdev1", 00:28:21.341 "uuid": "4cbc462c-00fa-4df2-b593-fedc477822d7", 00:28:21.341 
"strip_size_kb": 64, 00:28:21.341 "state": "online", 00:28:21.341 "raid_level": "raid5f", 00:28:21.341 "superblock": true, 00:28:21.341 "num_base_bdevs": 3, 00:28:21.341 "num_base_bdevs_discovered": 2, 00:28:21.341 "num_base_bdevs_operational": 2, 00:28:21.341 "base_bdevs_list": [ 00:28:21.341 { 00:28:21.341 "name": null, 00:28:21.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.341 "is_configured": false, 00:28:21.341 "data_offset": 2048, 00:28:21.341 "data_size": 63488 00:28:21.341 }, 00:28:21.341 { 00:28:21.341 "name": "BaseBdev2", 00:28:21.341 "uuid": "ecb8e7eb-e79c-5daa-a479-0a8e6e11e86f", 00:28:21.341 "is_configured": true, 00:28:21.341 "data_offset": 2048, 00:28:21.341 "data_size": 63488 00:28:21.341 }, 00:28:21.341 { 00:28:21.341 "name": "BaseBdev3", 00:28:21.341 "uuid": "56971c64-7d52-5ecc-a69e-26d40a277399", 00:28:21.341 "is_configured": true, 00:28:21.341 "data_offset": 2048, 00:28:21.341 "data_size": 63488 00:28:21.341 } 00:28:21.341 ] 00:28:21.341 }' 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 115318 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 115318 ']' 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 115318 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115318 00:28:21.341 killing process with pid 115318 00:28:21.341 Received shutdown signal, test time was about 60.000000 seconds 00:28:21.341 00:28:21.341 Latency(us) 00:28:21.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.341 =================================================================================================================== 00:28:21.341 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115318' 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 115318 00:28:21.341 [2024-07-23 15:22:16.715976] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:21.341 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 115318 00:28:21.341 [2024-07-23 15:22:16.716111] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:21.341 [2024-07-23 15:22:16.716180] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:21.341 [2024-07-23 15:22:16.716195] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009f80 name raid_bdev1, state offline 00:28:21.341 [2024-07-23 15:22:16.757988] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:21.599 ************************************ 00:28:21.599 END TEST raid5f_rebuild_test_sb 00:28:21.599 ************************************ 00:28:21.599 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:28:21.599 00:28:21.599 real 0m28.501s 00:28:21.599 user 0m41.109s 00:28:21.599 sys 0m4.628s 00:28:21.599 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:21.599 15:22:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.856 15:22:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:21.856 15:22:17 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:28:21.856 15:22:17 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:28:21.856 15:22:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:28:21.856 15:22:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:21.856 15:22:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:21.857 ************************************ 00:28:21.857 START TEST raid5f_state_function_test 00:28:21.857 ************************************ 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 4 false 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:21.857 15:22:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=116150 00:28:21.857 Process raid pid: 116150 00:28:21.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 116150' 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 116150 /var/tmp/spdk-raid.sock 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 116150 ']' 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:21.857 15:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.857 [2024-07-23 15:22:17.118915] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
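The raid5f_state_function_test starting here runs against a fresh bdev_svc application listening on /var/tmp/spdk-raid.sock and walks a raid5f bdev through its state machine over RPC. Its first assertion, visible in the output below, is that creating the raid volume before any of its members exist leaves it in the "configuring" state with zero discovered base bdevs; creating the first malloc member then lets the raid claim it. A condensed sketch of that opening sequence (commands, names and sizes are the ones in the trace; extracting .state with jq is a simplification of the test's select() filter):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # declare the array before its members exist: it registers but stays "configuring"
    $rpc -s $sock bdev_raid_create -z 64 -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    $rpc -s $sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # -> configuring
    # create the first member; the raid module claims it during examine
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1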
00:28:21.857 [2024-07-23 15:22:17.119061] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.857 [2024-07-23 15:22:17.262436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.115 [2024-07-23 15:22:17.313365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.115 [2024-07-23 15:22:17.358818] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:22.680 15:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:22.680 15:22:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:28:22.680 15:22:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:22.938 [2024-07-23 15:22:18.220481] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:22.938 [2024-07-23 15:22:18.220552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:22.938 [2024-07-23 15:22:18.220564] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:22.938 [2024-07-23 15:22:18.220578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:22.938 [2024-07-23 15:22:18.220589] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:22.938 [2024-07-23 15:22:18.220602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:22.938 [2024-07-23 15:22:18.220610] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:22.938 [2024-07-23 15:22:18.220638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:22.938 15:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:22.938 15:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:22.938 15:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:22.938 15:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:22.938 15:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:22.938 15:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:22.938 15:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:22.938 15:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:22.938 15:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:22.938 15:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:22.938 15:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.938 15:22:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:23.196 15:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:23.196 "name": "Existed_Raid", 00:28:23.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.196 "strip_size_kb": 64, 00:28:23.196 "state": "configuring", 00:28:23.196 "raid_level": "raid5f", 00:28:23.196 "superblock": false, 00:28:23.196 "num_base_bdevs": 4, 00:28:23.196 "num_base_bdevs_discovered": 0, 00:28:23.196 "num_base_bdevs_operational": 4, 00:28:23.196 "base_bdevs_list": [ 00:28:23.196 { 00:28:23.196 "name": "BaseBdev1", 00:28:23.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.196 "is_configured": false, 00:28:23.196 "data_offset": 0, 00:28:23.196 "data_size": 0 00:28:23.196 }, 00:28:23.196 { 00:28:23.196 "name": "BaseBdev2", 00:28:23.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.196 "is_configured": false, 00:28:23.196 "data_offset": 0, 00:28:23.196 "data_size": 0 00:28:23.196 }, 00:28:23.196 { 00:28:23.196 "name": "BaseBdev3", 00:28:23.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.196 "is_configured": false, 00:28:23.196 "data_offset": 0, 00:28:23.196 "data_size": 0 00:28:23.196 }, 00:28:23.196 { 00:28:23.196 "name": "BaseBdev4", 00:28:23.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.196 "is_configured": false, 00:28:23.196 "data_offset": 0, 00:28:23.196 "data_size": 0 00:28:23.196 } 00:28:23.196 ] 00:28:23.196 }' 00:28:23.196 15:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:23.196 15:22:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.454 15:22:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:23.711 [2024-07-23 15:22:18.980510] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:23.711 [2024-07-23 15:22:18.980801] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:28:23.711 15:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:23.969 [2024-07-23 15:22:19.248611] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:23.969 [2024-07-23 15:22:19.248678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:23.969 [2024-07-23 15:22:19.248690] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:23.969 [2024-07-23 15:22:19.248704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:23.969 [2024-07-23 15:22:19.248712] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:23.969 [2024-07-23 15:22:19.248725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:23.969 [2024-07-23 15:22:19.248733] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:23.969 [2024-07-23 15:22:19.248745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:23.969 15:22:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:24.226 [2024-07-23 15:22:19.438147] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:24.227 BaseBdev1 00:28:24.227 15:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:24.227 15:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:28:24.227 15:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:24.227 15:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:28:24.227 15:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:24.227 15:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:24.227 15:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:24.227 15:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:24.484 [ 00:28:24.484 { 00:28:24.484 "name": "BaseBdev1", 00:28:24.484 "aliases": [ 00:28:24.484 "df9d06cf-1c43-41f9-99fb-f439088cde43" 00:28:24.484 ], 00:28:24.484 "product_name": "Malloc disk", 00:28:24.484 "block_size": 512, 00:28:24.484 "num_blocks": 65536, 00:28:24.484 "uuid": "df9d06cf-1c43-41f9-99fb-f439088cde43", 00:28:24.484 "assigned_rate_limits": { 00:28:24.484 "rw_ios_per_sec": 0, 00:28:24.484 "rw_mbytes_per_sec": 0, 00:28:24.484 "r_mbytes_per_sec": 0, 00:28:24.484 "w_mbytes_per_sec": 0 00:28:24.484 }, 00:28:24.484 "claimed": true, 00:28:24.484 "claim_type": "exclusive_write", 00:28:24.484 "zoned": false, 00:28:24.484 "supported_io_types": { 00:28:24.484 "read": true, 00:28:24.484 "write": true, 00:28:24.484 "unmap": true, 00:28:24.484 "flush": true, 00:28:24.484 "reset": true, 00:28:24.484 "nvme_admin": false, 00:28:24.484 "nvme_io": false, 00:28:24.484 "nvme_io_md": false, 00:28:24.484 "write_zeroes": true, 00:28:24.484 "zcopy": true, 00:28:24.484 "get_zone_info": false, 00:28:24.484 "zone_management": false, 00:28:24.484 "zone_append": false, 00:28:24.484 "compare": false, 00:28:24.484 "compare_and_write": false, 00:28:24.484 "abort": true, 00:28:24.484 "seek_hole": false, 00:28:24.484 "seek_data": false, 00:28:24.484 "copy": true, 00:28:24.484 "nvme_iov_md": false 00:28:24.484 }, 00:28:24.484 "memory_domains": [ 00:28:24.484 { 00:28:24.484 "dma_device_id": "system", 00:28:24.484 "dma_device_type": 1 00:28:24.484 }, 00:28:24.484 { 00:28:24.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:24.484 "dma_device_type": 2 00:28:24.484 } 00:28:24.484 ], 00:28:24.484 "driver_specific": {} 00:28:24.484 } 00:28:24.484 ] 00:28:24.484 15:22:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:28:24.484 15:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:24.484 15:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:24.484 15:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:28:24.484 15:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:24.484 15:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:24.484 15:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:24.484 15:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:24.484 15:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:24.484 15:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:24.484 15:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:24.484 15:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.484 15:22:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:24.742 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:24.742 "name": "Existed_Raid", 00:28:24.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.742 "strip_size_kb": 64, 00:28:24.742 "state": "configuring", 00:28:24.742 "raid_level": "raid5f", 00:28:24.742 "superblock": false, 00:28:24.742 "num_base_bdevs": 4, 00:28:24.742 "num_base_bdevs_discovered": 1, 00:28:24.742 "num_base_bdevs_operational": 4, 00:28:24.742 "base_bdevs_list": [ 00:28:24.742 { 00:28:24.742 "name": "BaseBdev1", 00:28:24.742 "uuid": "df9d06cf-1c43-41f9-99fb-f439088cde43", 00:28:24.742 "is_configured": true, 00:28:24.742 "data_offset": 0, 00:28:24.742 "data_size": 65536 00:28:24.742 }, 00:28:24.742 { 00:28:24.742 "name": "BaseBdev2", 00:28:24.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.742 "is_configured": false, 00:28:24.742 "data_offset": 0, 00:28:24.742 "data_size": 0 00:28:24.742 }, 00:28:24.742 { 00:28:24.742 "name": "BaseBdev3", 00:28:24.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.742 "is_configured": false, 00:28:24.742 "data_offset": 0, 00:28:24.742 "data_size": 0 00:28:24.742 }, 00:28:24.742 { 00:28:24.742 "name": "BaseBdev4", 00:28:24.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.742 "is_configured": false, 00:28:24.742 "data_offset": 0, 00:28:24.742 "data_size": 0 00:28:24.742 } 00:28:24.742 ] 00:28:24.742 }' 00:28:24.742 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:24.742 15:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.014 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:25.293 [2024-07-23 15:22:20.502469] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:25.293 [2024-07-23 15:22:20.502546] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:25.293 [2024-07-23 15:22:20.674602] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:25.293 [2024-07-23 15:22:20.676891] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:25.293 [2024-07-23 15:22:20.676960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:25.293 [2024-07-23 15:22:20.676971] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:25.293 [2024-07-23 15:22:20.676985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:25.293 [2024-07-23 15:22:20.676993] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:25.293 [2024-07-23 15:22:20.677006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.293 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:25.551 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:25.551 "name": "Existed_Raid", 00:28:25.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:25.551 "strip_size_kb": 64, 00:28:25.551 "state": "configuring", 00:28:25.551 "raid_level": "raid5f", 00:28:25.551 "superblock": false, 00:28:25.551 "num_base_bdevs": 4, 00:28:25.551 "num_base_bdevs_discovered": 1, 00:28:25.551 "num_base_bdevs_operational": 4, 00:28:25.551 "base_bdevs_list": [ 00:28:25.551 { 00:28:25.551 "name": "BaseBdev1", 00:28:25.551 "uuid": "df9d06cf-1c43-41f9-99fb-f439088cde43", 00:28:25.551 "is_configured": true, 00:28:25.551 "data_offset": 0, 00:28:25.551 "data_size": 65536 00:28:25.551 }, 00:28:25.551 { 00:28:25.551 "name": "BaseBdev2", 00:28:25.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:25.551 "is_configured": false, 00:28:25.551 "data_offset": 0, 00:28:25.551 "data_size": 0 00:28:25.551 }, 00:28:25.551 { 
00:28:25.551 "name": "BaseBdev3", 00:28:25.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:25.551 "is_configured": false, 00:28:25.551 "data_offset": 0, 00:28:25.551 "data_size": 0 00:28:25.551 }, 00:28:25.551 { 00:28:25.551 "name": "BaseBdev4", 00:28:25.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:25.551 "is_configured": false, 00:28:25.551 "data_offset": 0, 00:28:25.551 "data_size": 0 00:28:25.551 } 00:28:25.551 ] 00:28:25.551 }' 00:28:25.551 15:22:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:25.551 15:22:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.117 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:26.376 [2024-07-23 15:22:21.561380] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:26.376 BaseBdev2 00:28:26.376 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:26.376 15:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:28:26.376 15:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:26.376 15:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:28:26.376 15:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:26.376 15:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:26.376 15:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:26.376 15:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:26.635 [ 00:28:26.635 { 00:28:26.635 "name": "BaseBdev2", 00:28:26.635 "aliases": [ 00:28:26.635 "02877be3-f520-4ca1-aa54-8e79a22eae5d" 00:28:26.635 ], 00:28:26.635 "product_name": "Malloc disk", 00:28:26.635 "block_size": 512, 00:28:26.635 "num_blocks": 65536, 00:28:26.635 "uuid": "02877be3-f520-4ca1-aa54-8e79a22eae5d", 00:28:26.635 "assigned_rate_limits": { 00:28:26.635 "rw_ios_per_sec": 0, 00:28:26.635 "rw_mbytes_per_sec": 0, 00:28:26.635 "r_mbytes_per_sec": 0, 00:28:26.635 "w_mbytes_per_sec": 0 00:28:26.635 }, 00:28:26.635 "claimed": true, 00:28:26.635 "claim_type": "exclusive_write", 00:28:26.635 "zoned": false, 00:28:26.635 "supported_io_types": { 00:28:26.635 "read": true, 00:28:26.635 "write": true, 00:28:26.635 "unmap": true, 00:28:26.635 "flush": true, 00:28:26.635 "reset": true, 00:28:26.635 "nvme_admin": false, 00:28:26.635 "nvme_io": false, 00:28:26.635 "nvme_io_md": false, 00:28:26.635 "write_zeroes": true, 00:28:26.635 "zcopy": true, 00:28:26.635 "get_zone_info": false, 00:28:26.635 "zone_management": false, 00:28:26.635 "zone_append": false, 00:28:26.635 "compare": false, 00:28:26.635 "compare_and_write": false, 00:28:26.635 "abort": true, 00:28:26.635 "seek_hole": false, 00:28:26.635 "seek_data": false, 00:28:26.635 "copy": true, 00:28:26.635 "nvme_iov_md": false 00:28:26.635 }, 00:28:26.635 "memory_domains": [ 00:28:26.635 { 00:28:26.635 "dma_device_id": "system", 00:28:26.635 "dma_device_type": 1 00:28:26.635 }, 
00:28:26.635 { 00:28:26.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.636 "dma_device_type": 2 00:28:26.636 } 00:28:26.636 ], 00:28:26.636 "driver_specific": {} 00:28:26.636 } 00:28:26.636 ] 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:26.636 15:22:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.894 15:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:26.894 "name": "Existed_Raid", 00:28:26.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.894 "strip_size_kb": 64, 00:28:26.894 "state": "configuring", 00:28:26.894 "raid_level": "raid5f", 00:28:26.894 "superblock": false, 00:28:26.894 "num_base_bdevs": 4, 00:28:26.894 "num_base_bdevs_discovered": 2, 00:28:26.894 "num_base_bdevs_operational": 4, 00:28:26.894 "base_bdevs_list": [ 00:28:26.894 { 00:28:26.894 "name": "BaseBdev1", 00:28:26.894 "uuid": "df9d06cf-1c43-41f9-99fb-f439088cde43", 00:28:26.894 "is_configured": true, 00:28:26.894 "data_offset": 0, 00:28:26.894 "data_size": 65536 00:28:26.894 }, 00:28:26.894 { 00:28:26.894 "name": "BaseBdev2", 00:28:26.894 "uuid": "02877be3-f520-4ca1-aa54-8e79a22eae5d", 00:28:26.894 "is_configured": true, 00:28:26.894 "data_offset": 0, 00:28:26.894 "data_size": 65536 00:28:26.894 }, 00:28:26.894 { 00:28:26.894 "name": "BaseBdev3", 00:28:26.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.894 "is_configured": false, 00:28:26.894 "data_offset": 0, 00:28:26.894 "data_size": 0 00:28:26.894 }, 00:28:26.894 { 00:28:26.894 "name": "BaseBdev4", 00:28:26.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.894 "is_configured": false, 00:28:26.894 "data_offset": 0, 00:28:26.894 "data_size": 0 00:28:26.894 } 00:28:26.894 ] 00:28:26.894 }' 00:28:26.894 15:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
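(Reading aid, same assumptions as the sketch earlier in the trace: the per-base-bdev step the test repeats from here on. bdev_malloc_create 32 512 creates a 32 MiB malloc disk with a 512-byte block size, and the -t 2000 timeout mirrors what the waitforbdev helper passes to bdev_get_bdevs in the xtrace lines.)

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for bdev in BaseBdev3 BaseBdev4; do
    # Create the malloc disk; the raid module claims it right away because
    # its name matches one of the configured base bdevs.
    $RPC bdev_malloc_create 32 512 -b "$bdev"
    $RPC bdev_wait_for_examine
    $RPC bdev_get_bdevs -b "$bdev" -t 2000 > /dev/null

    # num_base_bdevs_discovered grows by one per iteration; the raid switches
    # from "configuring" to "online" once all four base bdevs are present.
    $RPC bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'
done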
00:28:26.894 15:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.153 15:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:27.411 [2024-07-23 15:22:22.689009] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:27.411 BaseBdev3 00:28:27.411 15:22:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:28:27.411 15:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:28:27.411 15:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:27.411 15:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:28:27.411 15:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:27.411 15:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:27.411 15:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:27.669 15:22:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:27.927 [ 00:28:27.927 { 00:28:27.927 "name": "BaseBdev3", 00:28:27.927 "aliases": [ 00:28:27.927 "bed763c6-722a-4939-9bbb-c3f87aab8113" 00:28:27.927 ], 00:28:27.927 "product_name": "Malloc disk", 00:28:27.927 "block_size": 512, 00:28:27.927 "num_blocks": 65536, 00:28:27.927 "uuid": "bed763c6-722a-4939-9bbb-c3f87aab8113", 00:28:27.927 "assigned_rate_limits": { 00:28:27.927 "rw_ios_per_sec": 0, 00:28:27.927 "rw_mbytes_per_sec": 0, 00:28:27.927 "r_mbytes_per_sec": 0, 00:28:27.927 "w_mbytes_per_sec": 0 00:28:27.927 }, 00:28:27.927 "claimed": true, 00:28:27.927 "claim_type": "exclusive_write", 00:28:27.927 "zoned": false, 00:28:27.927 "supported_io_types": { 00:28:27.927 "read": true, 00:28:27.927 "write": true, 00:28:27.927 "unmap": true, 00:28:27.927 "flush": true, 00:28:27.927 "reset": true, 00:28:27.927 "nvme_admin": false, 00:28:27.927 "nvme_io": false, 00:28:27.927 "nvme_io_md": false, 00:28:27.927 "write_zeroes": true, 00:28:27.927 "zcopy": true, 00:28:27.927 "get_zone_info": false, 00:28:27.927 "zone_management": false, 00:28:27.927 "zone_append": false, 00:28:27.927 "compare": false, 00:28:27.927 "compare_and_write": false, 00:28:27.927 "abort": true, 00:28:27.927 "seek_hole": false, 00:28:27.927 "seek_data": false, 00:28:27.927 "copy": true, 00:28:27.927 "nvme_iov_md": false 00:28:27.927 }, 00:28:27.927 "memory_domains": [ 00:28:27.927 { 00:28:27.927 "dma_device_id": "system", 00:28:27.927 "dma_device_type": 1 00:28:27.927 }, 00:28:27.927 { 00:28:27.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:27.927 "dma_device_type": 2 00:28:27.927 } 00:28:27.927 ], 00:28:27.927 "driver_specific": {} 00:28:27.927 } 00:28:27.927 ] 00:28:27.927 15:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:28:27.927 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:27.927 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:27.927 15:22:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:27.927 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:27.927 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:27.927 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:27.927 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:27.927 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:27.927 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:27.927 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:27.927 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:27.927 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:27.927 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.927 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:28.185 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:28.185 "name": "Existed_Raid", 00:28:28.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.185 "strip_size_kb": 64, 00:28:28.185 "state": "configuring", 00:28:28.185 "raid_level": "raid5f", 00:28:28.185 "superblock": false, 00:28:28.185 "num_base_bdevs": 4, 00:28:28.185 "num_base_bdevs_discovered": 3, 00:28:28.185 "num_base_bdevs_operational": 4, 00:28:28.185 "base_bdevs_list": [ 00:28:28.185 { 00:28:28.185 "name": "BaseBdev1", 00:28:28.185 "uuid": "df9d06cf-1c43-41f9-99fb-f439088cde43", 00:28:28.185 "is_configured": true, 00:28:28.185 "data_offset": 0, 00:28:28.185 "data_size": 65536 00:28:28.185 }, 00:28:28.185 { 00:28:28.185 "name": "BaseBdev2", 00:28:28.185 "uuid": "02877be3-f520-4ca1-aa54-8e79a22eae5d", 00:28:28.185 "is_configured": true, 00:28:28.185 "data_offset": 0, 00:28:28.185 "data_size": 65536 00:28:28.185 }, 00:28:28.185 { 00:28:28.185 "name": "BaseBdev3", 00:28:28.185 "uuid": "bed763c6-722a-4939-9bbb-c3f87aab8113", 00:28:28.185 "is_configured": true, 00:28:28.185 "data_offset": 0, 00:28:28.185 "data_size": 65536 00:28:28.185 }, 00:28:28.185 { 00:28:28.185 "name": "BaseBdev4", 00:28:28.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.185 "is_configured": false, 00:28:28.185 "data_offset": 0, 00:28:28.185 "data_size": 0 00:28:28.185 } 00:28:28.185 ] 00:28:28.185 }' 00:28:28.185 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:28.185 15:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.443 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:28.443 [2024-07-23 15:22:23.840601] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:28.443 [2024-07-23 15:22:23.840933] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 
0x516000006080 00:28:28.443 [2024-07-23 15:22:23.841034] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:28:28.443 [2024-07-23 15:22:23.841207] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:28:28.443 [2024-07-23 15:22:23.841948] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:28:28.443 [2024-07-23 15:22:23.842085] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:28:28.443 [2024-07-23 15:22:23.842401] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:28.443 BaseBdev4 00:28:28.443 15:22:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:28:28.443 15:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:28:28.443 15:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:28.443 15:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:28:28.443 15:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:28.443 15:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:28.443 15:22:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:28.701 15:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:28.959 [ 00:28:28.959 { 00:28:28.959 "name": "BaseBdev4", 00:28:28.959 "aliases": [ 00:28:28.959 "65fbcf5f-cca8-4184-b2a1-a15890407824" 00:28:28.959 ], 00:28:28.959 "product_name": "Malloc disk", 00:28:28.959 "block_size": 512, 00:28:28.959 "num_blocks": 65536, 00:28:28.959 "uuid": "65fbcf5f-cca8-4184-b2a1-a15890407824", 00:28:28.959 "assigned_rate_limits": { 00:28:28.959 "rw_ios_per_sec": 0, 00:28:28.959 "rw_mbytes_per_sec": 0, 00:28:28.959 "r_mbytes_per_sec": 0, 00:28:28.959 "w_mbytes_per_sec": 0 00:28:28.959 }, 00:28:28.959 "claimed": true, 00:28:28.959 "claim_type": "exclusive_write", 00:28:28.959 "zoned": false, 00:28:28.959 "supported_io_types": { 00:28:28.959 "read": true, 00:28:28.959 "write": true, 00:28:28.959 "unmap": true, 00:28:28.959 "flush": true, 00:28:28.959 "reset": true, 00:28:28.959 "nvme_admin": false, 00:28:28.959 "nvme_io": false, 00:28:28.959 "nvme_io_md": false, 00:28:28.959 "write_zeroes": true, 00:28:28.959 "zcopy": true, 00:28:28.959 "get_zone_info": false, 00:28:28.959 "zone_management": false, 00:28:28.959 "zone_append": false, 00:28:28.959 "compare": false, 00:28:28.959 "compare_and_write": false, 00:28:28.959 "abort": true, 00:28:28.959 "seek_hole": false, 00:28:28.959 "seek_data": false, 00:28:28.959 "copy": true, 00:28:28.959 "nvme_iov_md": false 00:28:28.959 }, 00:28:28.959 "memory_domains": [ 00:28:28.959 { 00:28:28.959 "dma_device_id": "system", 00:28:28.959 "dma_device_type": 1 00:28:28.959 }, 00:28:28.959 { 00:28:28.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:28.959 "dma_device_type": 2 00:28:28.959 } 00:28:28.959 ], 00:28:28.959 "driver_specific": {} 00:28:28.959 } 00:28:28.959 ] 00:28:28.959 15:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:28:28.959 15:22:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:28.959 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:28.959 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:28:28.959 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:28.959 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:28.959 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:28.959 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:28.959 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:28.959 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:28.959 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:28.959 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:28.959 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:28.959 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:28.959 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.217 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:29.217 "name": "Existed_Raid", 00:28:29.217 "uuid": "a359101f-0e4f-463c-bc13-99bf4de4561d", 00:28:29.217 "strip_size_kb": 64, 00:28:29.217 "state": "online", 00:28:29.217 "raid_level": "raid5f", 00:28:29.217 "superblock": false, 00:28:29.217 "num_base_bdevs": 4, 00:28:29.217 "num_base_bdevs_discovered": 4, 00:28:29.217 "num_base_bdevs_operational": 4, 00:28:29.217 "base_bdevs_list": [ 00:28:29.217 { 00:28:29.217 "name": "BaseBdev1", 00:28:29.217 "uuid": "df9d06cf-1c43-41f9-99fb-f439088cde43", 00:28:29.217 "is_configured": true, 00:28:29.217 "data_offset": 0, 00:28:29.217 "data_size": 65536 00:28:29.217 }, 00:28:29.217 { 00:28:29.217 "name": "BaseBdev2", 00:28:29.217 "uuid": "02877be3-f520-4ca1-aa54-8e79a22eae5d", 00:28:29.217 "is_configured": true, 00:28:29.217 "data_offset": 0, 00:28:29.217 "data_size": 65536 00:28:29.217 }, 00:28:29.217 { 00:28:29.217 "name": "BaseBdev3", 00:28:29.217 "uuid": "bed763c6-722a-4939-9bbb-c3f87aab8113", 00:28:29.217 "is_configured": true, 00:28:29.217 "data_offset": 0, 00:28:29.217 "data_size": 65536 00:28:29.217 }, 00:28:29.217 { 00:28:29.217 "name": "BaseBdev4", 00:28:29.217 "uuid": "65fbcf5f-cca8-4184-b2a1-a15890407824", 00:28:29.217 "is_configured": true, 00:28:29.217 "data_offset": 0, 00:28:29.217 "data_size": 65536 00:28:29.217 } 00:28:29.217 ] 00:28:29.217 }' 00:28:29.217 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:29.217 15:22:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:29.494 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:29.495 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=Existed_Raid 00:28:29.495 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:29.495 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:29.495 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:29.495 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:28:29.495 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:29.495 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:29.495 [2024-07-23 15:22:24.921171] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:29.753 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:29.753 "name": "Existed_Raid", 00:28:29.753 "aliases": [ 00:28:29.753 "a359101f-0e4f-463c-bc13-99bf4de4561d" 00:28:29.753 ], 00:28:29.753 "product_name": "Raid Volume", 00:28:29.753 "block_size": 512, 00:28:29.753 "num_blocks": 196608, 00:28:29.753 "uuid": "a359101f-0e4f-463c-bc13-99bf4de4561d", 00:28:29.753 "assigned_rate_limits": { 00:28:29.753 "rw_ios_per_sec": 0, 00:28:29.753 "rw_mbytes_per_sec": 0, 00:28:29.753 "r_mbytes_per_sec": 0, 00:28:29.753 "w_mbytes_per_sec": 0 00:28:29.753 }, 00:28:29.753 "claimed": false, 00:28:29.753 "zoned": false, 00:28:29.753 "supported_io_types": { 00:28:29.753 "read": true, 00:28:29.753 "write": true, 00:28:29.753 "unmap": false, 00:28:29.753 "flush": false, 00:28:29.753 "reset": true, 00:28:29.753 "nvme_admin": false, 00:28:29.753 "nvme_io": false, 00:28:29.753 "nvme_io_md": false, 00:28:29.753 "write_zeroes": true, 00:28:29.753 "zcopy": false, 00:28:29.753 "get_zone_info": false, 00:28:29.753 "zone_management": false, 00:28:29.753 "zone_append": false, 00:28:29.753 "compare": false, 00:28:29.753 "compare_and_write": false, 00:28:29.753 "abort": false, 00:28:29.753 "seek_hole": false, 00:28:29.753 "seek_data": false, 00:28:29.753 "copy": false, 00:28:29.753 "nvme_iov_md": false 00:28:29.753 }, 00:28:29.753 "driver_specific": { 00:28:29.753 "raid": { 00:28:29.753 "uuid": "a359101f-0e4f-463c-bc13-99bf4de4561d", 00:28:29.753 "strip_size_kb": 64, 00:28:29.753 "state": "online", 00:28:29.753 "raid_level": "raid5f", 00:28:29.753 "superblock": false, 00:28:29.753 "num_base_bdevs": 4, 00:28:29.753 "num_base_bdevs_discovered": 4, 00:28:29.753 "num_base_bdevs_operational": 4, 00:28:29.753 "base_bdevs_list": [ 00:28:29.753 { 00:28:29.753 "name": "BaseBdev1", 00:28:29.753 "uuid": "df9d06cf-1c43-41f9-99fb-f439088cde43", 00:28:29.753 "is_configured": true, 00:28:29.753 "data_offset": 0, 00:28:29.753 "data_size": 65536 00:28:29.753 }, 00:28:29.753 { 00:28:29.753 "name": "BaseBdev2", 00:28:29.753 "uuid": "02877be3-f520-4ca1-aa54-8e79a22eae5d", 00:28:29.753 "is_configured": true, 00:28:29.753 "data_offset": 0, 00:28:29.753 "data_size": 65536 00:28:29.753 }, 00:28:29.753 { 00:28:29.753 "name": "BaseBdev3", 00:28:29.753 "uuid": "bed763c6-722a-4939-9bbb-c3f87aab8113", 00:28:29.753 "is_configured": true, 00:28:29.753 "data_offset": 0, 00:28:29.753 "data_size": 65536 00:28:29.753 }, 00:28:29.753 { 00:28:29.753 "name": "BaseBdev4", 00:28:29.753 "uuid": "65fbcf5f-cca8-4184-b2a1-a15890407824", 00:28:29.753 "is_configured": true, 00:28:29.753 "data_offset": 0, 00:28:29.753 "data_size": 65536 00:28:29.753 } 
00:28:29.753 ] 00:28:29.753 } 00:28:29.753 } 00:28:29.753 }' 00:28:29.753 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:29.753 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:29.753 BaseBdev2 00:28:29.753 BaseBdev3 00:28:29.753 BaseBdev4' 00:28:29.753 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:29.753 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:29.753 15:22:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:30.012 "name": "BaseBdev1", 00:28:30.012 "aliases": [ 00:28:30.012 "df9d06cf-1c43-41f9-99fb-f439088cde43" 00:28:30.012 ], 00:28:30.012 "product_name": "Malloc disk", 00:28:30.012 "block_size": 512, 00:28:30.012 "num_blocks": 65536, 00:28:30.012 "uuid": "df9d06cf-1c43-41f9-99fb-f439088cde43", 00:28:30.012 "assigned_rate_limits": { 00:28:30.012 "rw_ios_per_sec": 0, 00:28:30.012 "rw_mbytes_per_sec": 0, 00:28:30.012 "r_mbytes_per_sec": 0, 00:28:30.012 "w_mbytes_per_sec": 0 00:28:30.012 }, 00:28:30.012 "claimed": true, 00:28:30.012 "claim_type": "exclusive_write", 00:28:30.012 "zoned": false, 00:28:30.012 "supported_io_types": { 00:28:30.012 "read": true, 00:28:30.012 "write": true, 00:28:30.012 "unmap": true, 00:28:30.012 "flush": true, 00:28:30.012 "reset": true, 00:28:30.012 "nvme_admin": false, 00:28:30.012 "nvme_io": false, 00:28:30.012 "nvme_io_md": false, 00:28:30.012 "write_zeroes": true, 00:28:30.012 "zcopy": true, 00:28:30.012 "get_zone_info": false, 00:28:30.012 "zone_management": false, 00:28:30.012 "zone_append": false, 00:28:30.012 "compare": false, 00:28:30.012 "compare_and_write": false, 00:28:30.012 "abort": true, 00:28:30.012 "seek_hole": false, 00:28:30.012 "seek_data": false, 00:28:30.012 "copy": true, 00:28:30.012 "nvme_iov_md": false 00:28:30.012 }, 00:28:30.012 "memory_domains": [ 00:28:30.012 { 00:28:30.012 "dma_device_id": "system", 00:28:30.012 "dma_device_type": 1 00:28:30.012 }, 00:28:30.012 { 00:28:30.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:30.012 "dma_device_type": 2 00:28:30.012 } 00:28:30.012 ], 00:28:30.012 "driver_specific": {} 00:28:30.012 }' 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:30.012 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:30.271 "name": "BaseBdev2", 00:28:30.271 "aliases": [ 00:28:30.271 "02877be3-f520-4ca1-aa54-8e79a22eae5d" 00:28:30.271 ], 00:28:30.271 "product_name": "Malloc disk", 00:28:30.271 "block_size": 512, 00:28:30.271 "num_blocks": 65536, 00:28:30.271 "uuid": "02877be3-f520-4ca1-aa54-8e79a22eae5d", 00:28:30.271 "assigned_rate_limits": { 00:28:30.271 "rw_ios_per_sec": 0, 00:28:30.271 "rw_mbytes_per_sec": 0, 00:28:30.271 "r_mbytes_per_sec": 0, 00:28:30.271 "w_mbytes_per_sec": 0 00:28:30.271 }, 00:28:30.271 "claimed": true, 00:28:30.271 "claim_type": "exclusive_write", 00:28:30.271 "zoned": false, 00:28:30.271 "supported_io_types": { 00:28:30.271 "read": true, 00:28:30.271 "write": true, 00:28:30.271 "unmap": true, 00:28:30.271 "flush": true, 00:28:30.271 "reset": true, 00:28:30.271 "nvme_admin": false, 00:28:30.271 "nvme_io": false, 00:28:30.271 "nvme_io_md": false, 00:28:30.271 "write_zeroes": true, 00:28:30.271 "zcopy": true, 00:28:30.271 "get_zone_info": false, 00:28:30.271 "zone_management": false, 00:28:30.271 "zone_append": false, 00:28:30.271 "compare": false, 00:28:30.271 "compare_and_write": false, 00:28:30.271 "abort": true, 00:28:30.271 "seek_hole": false, 00:28:30.271 "seek_data": false, 00:28:30.271 "copy": true, 00:28:30.271 "nvme_iov_md": false 00:28:30.271 }, 00:28:30.271 "memory_domains": [ 00:28:30.271 { 00:28:30.271 "dma_device_id": "system", 00:28:30.271 "dma_device_type": 1 00:28:30.271 }, 00:28:30.271 { 00:28:30.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:30.271 "dma_device_type": 2 00:28:30.271 } 00:28:30.271 ], 00:28:30.271 "driver_specific": {} 00:28:30.271 }' 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:30.271 15:22:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:30.271 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:30.529 "name": "BaseBdev3", 00:28:30.529 "aliases": [ 00:28:30.529 "bed763c6-722a-4939-9bbb-c3f87aab8113" 00:28:30.529 ], 00:28:30.529 "product_name": "Malloc disk", 00:28:30.529 "block_size": 512, 00:28:30.529 "num_blocks": 65536, 00:28:30.529 "uuid": "bed763c6-722a-4939-9bbb-c3f87aab8113", 00:28:30.529 "assigned_rate_limits": { 00:28:30.529 "rw_ios_per_sec": 0, 00:28:30.529 "rw_mbytes_per_sec": 0, 00:28:30.529 "r_mbytes_per_sec": 0, 00:28:30.529 "w_mbytes_per_sec": 0 00:28:30.529 }, 00:28:30.529 "claimed": true, 00:28:30.529 "claim_type": "exclusive_write", 00:28:30.529 "zoned": false, 00:28:30.529 "supported_io_types": { 00:28:30.529 "read": true, 00:28:30.529 "write": true, 00:28:30.529 "unmap": true, 00:28:30.529 "flush": true, 00:28:30.529 "reset": true, 00:28:30.529 "nvme_admin": false, 00:28:30.529 "nvme_io": false, 00:28:30.529 "nvme_io_md": false, 00:28:30.529 "write_zeroes": true, 00:28:30.529 "zcopy": true, 00:28:30.529 "get_zone_info": false, 00:28:30.529 "zone_management": false, 00:28:30.529 "zone_append": false, 00:28:30.529 "compare": false, 00:28:30.529 "compare_and_write": false, 00:28:30.529 "abort": true, 00:28:30.529 "seek_hole": false, 00:28:30.529 "seek_data": false, 00:28:30.529 "copy": true, 00:28:30.529 "nvme_iov_md": false 00:28:30.529 }, 00:28:30.529 "memory_domains": [ 00:28:30.529 { 00:28:30.529 "dma_device_id": "system", 00:28:30.529 "dma_device_type": 1 00:28:30.529 }, 00:28:30.529 { 00:28:30.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:30.529 "dma_device_type": 2 00:28:30.529 } 00:28:30.529 ], 00:28:30.529 "driver_specific": {} 00:28:30.529 }' 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:30.529 15:22:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:30.787 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:30.787 "name": "BaseBdev4", 00:28:30.787 "aliases": [ 00:28:30.787 "65fbcf5f-cca8-4184-b2a1-a15890407824" 00:28:30.787 ], 00:28:30.787 "product_name": "Malloc disk", 00:28:30.787 "block_size": 512, 00:28:30.787 "num_blocks": 65536, 00:28:30.787 "uuid": "65fbcf5f-cca8-4184-b2a1-a15890407824", 00:28:30.787 "assigned_rate_limits": { 00:28:30.787 "rw_ios_per_sec": 0, 00:28:30.787 "rw_mbytes_per_sec": 0, 00:28:30.787 "r_mbytes_per_sec": 0, 00:28:30.787 "w_mbytes_per_sec": 0 00:28:30.787 }, 00:28:30.787 "claimed": true, 00:28:30.787 "claim_type": "exclusive_write", 00:28:30.787 "zoned": false, 00:28:30.787 "supported_io_types": { 00:28:30.787 "read": true, 00:28:30.787 "write": true, 00:28:30.787 "unmap": true, 00:28:30.787 "flush": true, 00:28:30.787 "reset": true, 00:28:30.787 "nvme_admin": false, 00:28:30.787 "nvme_io": false, 00:28:30.787 "nvme_io_md": false, 00:28:30.787 "write_zeroes": true, 00:28:30.787 "zcopy": true, 00:28:30.787 "get_zone_info": false, 00:28:30.787 "zone_management": false, 00:28:30.787 "zone_append": false, 00:28:30.787 "compare": false, 00:28:30.787 "compare_and_write": false, 00:28:30.787 "abort": true, 00:28:30.787 "seek_hole": false, 00:28:30.787 "seek_data": false, 00:28:30.787 "copy": true, 00:28:30.787 "nvme_iov_md": false 00:28:30.787 }, 00:28:30.787 "memory_domains": [ 00:28:30.787 { 00:28:30.787 "dma_device_id": "system", 00:28:30.787 "dma_device_type": 1 00:28:30.787 }, 00:28:30.787 { 00:28:30.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:30.787 "dma_device_type": 2 00:28:30.787 } 00:28:30.787 ], 00:28:30.787 "driver_specific": {} 00:28:30.787 }' 00:28:30.787 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:30.787 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:31.045 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:31.045 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:31.045 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:31.045 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:31.045 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:31.045 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:31.045 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:31.045 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:31.045 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:31.045 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:31.045 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:31.303 [2024-07-23 15:22:26.537416] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.303 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:31.562 15:22:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:31.562 "name": "Existed_Raid", 00:28:31.562 "uuid": "a359101f-0e4f-463c-bc13-99bf4de4561d", 00:28:31.562 "strip_size_kb": 64, 00:28:31.562 "state": "online", 00:28:31.562 "raid_level": "raid5f", 00:28:31.562 "superblock": false, 00:28:31.562 "num_base_bdevs": 4, 00:28:31.562 "num_base_bdevs_discovered": 3, 00:28:31.562 "num_base_bdevs_operational": 3, 00:28:31.562 "base_bdevs_list": [ 00:28:31.562 { 00:28:31.562 "name": null, 00:28:31.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.562 "is_configured": false, 00:28:31.562 "data_offset": 0, 00:28:31.562 "data_size": 65536 00:28:31.562 }, 00:28:31.562 { 00:28:31.562 "name": "BaseBdev2", 00:28:31.562 "uuid": "02877be3-f520-4ca1-aa54-8e79a22eae5d", 00:28:31.562 "is_configured": true, 00:28:31.562 "data_offset": 0, 00:28:31.562 "data_size": 65536 00:28:31.562 }, 00:28:31.562 { 00:28:31.562 "name": "BaseBdev3", 00:28:31.562 "uuid": "bed763c6-722a-4939-9bbb-c3f87aab8113", 00:28:31.562 "is_configured": true, 00:28:31.562 "data_offset": 0, 00:28:31.562 "data_size": 65536 00:28:31.562 }, 00:28:31.562 { 00:28:31.562 "name": "BaseBdev4", 00:28:31.562 "uuid": "65fbcf5f-cca8-4184-b2a1-a15890407824", 00:28:31.562 "is_configured": true, 00:28:31.562 "data_offset": 0, 00:28:31.562 "data_size": 65536 00:28:31.562 } 00:28:31.562 ] 00:28:31.562 }' 00:28:31.562 15:22:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:31.562 15:22:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.820 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:31.820 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:31.820 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.820 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:32.078 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:32.078 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:32.078 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:32.336 [2024-07-23 15:22:27.522043] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:32.336 [2024-07-23 15:22:27.522163] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:32.336 [2024-07-23 15:22:27.534340] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:32.336 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:32.336 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:32.336 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:32.336 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:32.619 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:32.619 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:32.619 15:22:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:28:32.619 [2024-07-23 15:22:27.978533] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:32.619 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:32.619 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:32.619 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:32.619 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:32.878 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:32.878 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:32.878 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:28:33.136 [2024-07-23 15:22:28.355173] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev4 00:28:33.136 [2024-07-23 15:22:28.355252] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:28:33.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:33.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:33.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.136 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:33.394 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:33.394 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:33.394 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:28:33.394 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:28:33.394 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:33.394 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:33.394 BaseBdev2 00:28:33.394 15:22:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:28:33.394 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:28:33.394 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:33.394 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:28:33.394 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:33.394 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:33.394 15:22:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:33.652 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:33.911 [ 00:28:33.911 { 00:28:33.911 "name": "BaseBdev2", 00:28:33.911 "aliases": [ 00:28:33.911 "3188bd68-b855-46f4-973f-75d83471d76a" 00:28:33.911 ], 00:28:33.911 "product_name": "Malloc disk", 00:28:33.911 "block_size": 512, 00:28:33.911 "num_blocks": 65536, 00:28:33.911 "uuid": "3188bd68-b855-46f4-973f-75d83471d76a", 00:28:33.911 "assigned_rate_limits": { 00:28:33.911 "rw_ios_per_sec": 0, 00:28:33.911 "rw_mbytes_per_sec": 0, 00:28:33.911 "r_mbytes_per_sec": 0, 00:28:33.911 "w_mbytes_per_sec": 0 00:28:33.911 }, 00:28:33.911 "claimed": false, 00:28:33.911 "zoned": false, 00:28:33.911 "supported_io_types": { 00:28:33.911 "read": true, 00:28:33.911 "write": true, 00:28:33.911 "unmap": true, 00:28:33.911 "flush": true, 00:28:33.911 "reset": true, 00:28:33.911 "nvme_admin": false, 00:28:33.911 "nvme_io": false, 00:28:33.911 "nvme_io_md": false, 00:28:33.911 "write_zeroes": true, 00:28:33.911 "zcopy": true, 00:28:33.911 "get_zone_info": false, 00:28:33.911 "zone_management": false, 00:28:33.911 "zone_append": false, 00:28:33.911 
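[editor's note] The waitforbdev helper used for BaseBdev2 above is a create-then-poll pattern. A sketch of the equivalent raw calls, with the 32 MiB size and 512-byte block size copied from the trace:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Create the malloc bdev, let examine callbacks finish, then poll for up to 2000 ms.
  $rpc bdev_malloc_create 32 512 -b BaseBdev2
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b BaseBdev2 -t 2000 > /dev/null && echo "BaseBdev2 is up"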
"compare": false, 00:28:33.911 "compare_and_write": false, 00:28:33.911 "abort": true, 00:28:33.911 "seek_hole": false, 00:28:33.911 "seek_data": false, 00:28:33.911 "copy": true, 00:28:33.911 "nvme_iov_md": false 00:28:33.911 }, 00:28:33.911 "memory_domains": [ 00:28:33.911 { 00:28:33.911 "dma_device_id": "system", 00:28:33.911 "dma_device_type": 1 00:28:33.911 }, 00:28:33.911 { 00:28:33.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:33.911 "dma_device_type": 2 00:28:33.911 } 00:28:33.911 ], 00:28:33.911 "driver_specific": {} 00:28:33.911 } 00:28:33.911 ] 00:28:33.911 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:28:33.911 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:33.911 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:33.911 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:34.168 BaseBdev3 00:28:34.168 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:28:34.168 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:28:34.168 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:34.168 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:28:34.168 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:34.168 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:34.168 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:34.425 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:34.425 [ 00:28:34.425 { 00:28:34.425 "name": "BaseBdev3", 00:28:34.425 "aliases": [ 00:28:34.425 "dd229584-1f75-41ce-b0b3-1037cc36fcff" 00:28:34.425 ], 00:28:34.425 "product_name": "Malloc disk", 00:28:34.425 "block_size": 512, 00:28:34.425 "num_blocks": 65536, 00:28:34.426 "uuid": "dd229584-1f75-41ce-b0b3-1037cc36fcff", 00:28:34.426 "assigned_rate_limits": { 00:28:34.426 "rw_ios_per_sec": 0, 00:28:34.426 "rw_mbytes_per_sec": 0, 00:28:34.426 "r_mbytes_per_sec": 0, 00:28:34.426 "w_mbytes_per_sec": 0 00:28:34.426 }, 00:28:34.426 "claimed": false, 00:28:34.426 "zoned": false, 00:28:34.426 "supported_io_types": { 00:28:34.426 "read": true, 00:28:34.426 "write": true, 00:28:34.426 "unmap": true, 00:28:34.426 "flush": true, 00:28:34.426 "reset": true, 00:28:34.426 "nvme_admin": false, 00:28:34.426 "nvme_io": false, 00:28:34.426 "nvme_io_md": false, 00:28:34.426 "write_zeroes": true, 00:28:34.426 "zcopy": true, 00:28:34.426 "get_zone_info": false, 00:28:34.426 "zone_management": false, 00:28:34.426 "zone_append": false, 00:28:34.426 "compare": false, 00:28:34.426 "compare_and_write": false, 00:28:34.426 "abort": true, 00:28:34.426 "seek_hole": false, 00:28:34.426 "seek_data": false, 00:28:34.426 "copy": true, 00:28:34.426 "nvme_iov_md": false 00:28:34.426 }, 00:28:34.426 "memory_domains": [ 00:28:34.426 { 00:28:34.426 "dma_device_id": "system", 
00:28:34.426 "dma_device_type": 1 00:28:34.426 }, 00:28:34.426 { 00:28:34.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:34.426 "dma_device_type": 2 00:28:34.426 } 00:28:34.426 ], 00:28:34.426 "driver_specific": {} 00:28:34.426 } 00:28:34.426 ] 00:28:34.426 15:22:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:28:34.426 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:34.426 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:34.426 15:22:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:34.683 BaseBdev4 00:28:34.683 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:28:34.683 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:28:34.683 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:34.683 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:28:34.683 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:34.683 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:34.683 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:34.940 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:35.198 [ 00:28:35.198 { 00:28:35.198 "name": "BaseBdev4", 00:28:35.198 "aliases": [ 00:28:35.198 "f9faa62d-7226-4066-962f-778b467aa818" 00:28:35.198 ], 00:28:35.198 "product_name": "Malloc disk", 00:28:35.198 "block_size": 512, 00:28:35.198 "num_blocks": 65536, 00:28:35.198 "uuid": "f9faa62d-7226-4066-962f-778b467aa818", 00:28:35.198 "assigned_rate_limits": { 00:28:35.198 "rw_ios_per_sec": 0, 00:28:35.198 "rw_mbytes_per_sec": 0, 00:28:35.198 "r_mbytes_per_sec": 0, 00:28:35.198 "w_mbytes_per_sec": 0 00:28:35.198 }, 00:28:35.198 "claimed": false, 00:28:35.198 "zoned": false, 00:28:35.198 "supported_io_types": { 00:28:35.198 "read": true, 00:28:35.198 "write": true, 00:28:35.198 "unmap": true, 00:28:35.198 "flush": true, 00:28:35.198 "reset": true, 00:28:35.198 "nvme_admin": false, 00:28:35.198 "nvme_io": false, 00:28:35.198 "nvme_io_md": false, 00:28:35.198 "write_zeroes": true, 00:28:35.198 "zcopy": true, 00:28:35.198 "get_zone_info": false, 00:28:35.198 "zone_management": false, 00:28:35.198 "zone_append": false, 00:28:35.198 "compare": false, 00:28:35.198 "compare_and_write": false, 00:28:35.198 "abort": true, 00:28:35.198 "seek_hole": false, 00:28:35.198 "seek_data": false, 00:28:35.198 "copy": true, 00:28:35.198 "nvme_iov_md": false 00:28:35.198 }, 00:28:35.198 "memory_domains": [ 00:28:35.198 { 00:28:35.198 "dma_device_id": "system", 00:28:35.198 "dma_device_type": 1 00:28:35.198 }, 00:28:35.198 { 00:28:35.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:35.198 "dma_device_type": 2 00:28:35.198 } 00:28:35.198 ], 00:28:35.198 "driver_specific": {} 00:28:35.198 } 00:28:35.198 ] 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@905 -- # return 0 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:35.198 [2024-07-23 15:22:30.585024] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:35.198 [2024-07-23 15:22:30.585087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:35.198 [2024-07-23 15:22:30.585123] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:35.198 [2024-07-23 15:22:30.587219] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:35.198 [2024-07-23 15:22:30.587275] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:35.198 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.456 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:35.456 "name": "Existed_Raid", 00:28:35.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.456 "strip_size_kb": 64, 00:28:35.456 "state": "configuring", 00:28:35.456 "raid_level": "raid5f", 00:28:35.456 "superblock": false, 00:28:35.456 "num_base_bdevs": 4, 00:28:35.456 "num_base_bdevs_discovered": 3, 00:28:35.456 "num_base_bdevs_operational": 4, 00:28:35.456 "base_bdevs_list": [ 00:28:35.456 { 00:28:35.456 "name": "BaseBdev1", 00:28:35.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.456 "is_configured": false, 00:28:35.456 "data_offset": 0, 00:28:35.456 "data_size": 0 00:28:35.456 }, 00:28:35.456 { 00:28:35.456 "name": "BaseBdev2", 00:28:35.456 "uuid": "3188bd68-b855-46f4-973f-75d83471d76a", 00:28:35.456 "is_configured": true, 00:28:35.456 "data_offset": 0, 
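[editor's note] The bdev_raid_create call above is issued while BaseBdev1 does not exist yet, which is why the array lands in the configuring state. A sketch of the same construction:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # 64 KiB strip size, raid5f level, four named slots; missing members may join later.
  $rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # With one slot empty the array reports "configuring" rather than "online".
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'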
00:28:35.456 "data_size": 65536 00:28:35.456 }, 00:28:35.456 { 00:28:35.456 "name": "BaseBdev3", 00:28:35.456 "uuid": "dd229584-1f75-41ce-b0b3-1037cc36fcff", 00:28:35.456 "is_configured": true, 00:28:35.456 "data_offset": 0, 00:28:35.456 "data_size": 65536 00:28:35.456 }, 00:28:35.456 { 00:28:35.456 "name": "BaseBdev4", 00:28:35.456 "uuid": "f9faa62d-7226-4066-962f-778b467aa818", 00:28:35.456 "is_configured": true, 00:28:35.456 "data_offset": 0, 00:28:35.456 "data_size": 65536 00:28:35.456 } 00:28:35.456 ] 00:28:35.456 }' 00:28:35.456 15:22:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:35.456 15:22:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.712 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:35.970 [2024-07-23 15:22:31.297180] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:35.970 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:35.970 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:35.970 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:35.970 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:35.970 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:35.970 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:35.970 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:35.970 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:35.970 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:35.970 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:35.970 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.970 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:36.228 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:36.228 "name": "Existed_Raid", 00:28:36.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.228 "strip_size_kb": 64, 00:28:36.228 "state": "configuring", 00:28:36.228 "raid_level": "raid5f", 00:28:36.228 "superblock": false, 00:28:36.228 "num_base_bdevs": 4, 00:28:36.228 "num_base_bdevs_discovered": 2, 00:28:36.228 "num_base_bdevs_operational": 4, 00:28:36.228 "base_bdevs_list": [ 00:28:36.228 { 00:28:36.228 "name": "BaseBdev1", 00:28:36.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.228 "is_configured": false, 00:28:36.228 "data_offset": 0, 00:28:36.228 "data_size": 0 00:28:36.228 }, 00:28:36.228 { 00:28:36.228 "name": null, 00:28:36.228 "uuid": "3188bd68-b855-46f4-973f-75d83471d76a", 00:28:36.228 "is_configured": false, 00:28:36.228 "data_offset": 0, 00:28:36.228 "data_size": 65536 00:28:36.228 }, 00:28:36.228 { 00:28:36.228 "name": "BaseBdev3", 00:28:36.228 "uuid": 
"dd229584-1f75-41ce-b0b3-1037cc36fcff", 00:28:36.228 "is_configured": true, 00:28:36.228 "data_offset": 0, 00:28:36.228 "data_size": 65536 00:28:36.228 }, 00:28:36.228 { 00:28:36.228 "name": "BaseBdev4", 00:28:36.228 "uuid": "f9faa62d-7226-4066-962f-778b467aa818", 00:28:36.228 "is_configured": true, 00:28:36.228 "data_offset": 0, 00:28:36.228 "data_size": 65536 00:28:36.228 } 00:28:36.228 ] 00:28:36.228 }' 00:28:36.228 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:36.228 15:22:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.486 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.486 15:22:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:36.745 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:28:36.745 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:37.003 [2024-07-23 15:22:32.344610] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:37.003 BaseBdev1 00:28:37.003 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:28:37.003 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:28:37.003 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:37.003 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:28:37.003 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:37.003 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:37.003 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:37.262 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:37.262 [ 00:28:37.262 { 00:28:37.262 "name": "BaseBdev1", 00:28:37.262 "aliases": [ 00:28:37.262 "17c581ca-993c-4c08-bf88-d8843d435742" 00:28:37.262 ], 00:28:37.263 "product_name": "Malloc disk", 00:28:37.263 "block_size": 512, 00:28:37.263 "num_blocks": 65536, 00:28:37.263 "uuid": "17c581ca-993c-4c08-bf88-d8843d435742", 00:28:37.263 "assigned_rate_limits": { 00:28:37.263 "rw_ios_per_sec": 0, 00:28:37.263 "rw_mbytes_per_sec": 0, 00:28:37.263 "r_mbytes_per_sec": 0, 00:28:37.263 "w_mbytes_per_sec": 0 00:28:37.263 }, 00:28:37.263 "claimed": true, 00:28:37.263 "claim_type": "exclusive_write", 00:28:37.263 "zoned": false, 00:28:37.263 "supported_io_types": { 00:28:37.263 "read": true, 00:28:37.263 "write": true, 00:28:37.263 "unmap": true, 00:28:37.263 "flush": true, 00:28:37.263 "reset": true, 00:28:37.263 "nvme_admin": false, 00:28:37.263 "nvme_io": false, 00:28:37.263 "nvme_io_md": false, 00:28:37.263 "write_zeroes": true, 00:28:37.263 "zcopy": true, 00:28:37.263 "get_zone_info": false, 00:28:37.263 "zone_management": false, 00:28:37.263 "zone_append": false, 
00:28:37.263 "compare": false, 00:28:37.263 "compare_and_write": false, 00:28:37.263 "abort": true, 00:28:37.263 "seek_hole": false, 00:28:37.263 "seek_data": false, 00:28:37.263 "copy": true, 00:28:37.263 "nvme_iov_md": false 00:28:37.263 }, 00:28:37.263 "memory_domains": [ 00:28:37.263 { 00:28:37.263 "dma_device_id": "system", 00:28:37.263 "dma_device_type": 1 00:28:37.263 }, 00:28:37.263 { 00:28:37.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:37.263 "dma_device_type": 2 00:28:37.263 } 00:28:37.263 ], 00:28:37.263 "driver_specific": {} 00:28:37.263 } 00:28:37.263 ] 00:28:37.521 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:28:37.521 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:37.521 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:37.521 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:37.521 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:37.522 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:37.522 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:37.522 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:37.522 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:37.522 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:37.522 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:37.522 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.522 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:37.522 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:37.522 "name": "Existed_Raid", 00:28:37.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.522 "strip_size_kb": 64, 00:28:37.522 "state": "configuring", 00:28:37.522 "raid_level": "raid5f", 00:28:37.522 "superblock": false, 00:28:37.522 "num_base_bdevs": 4, 00:28:37.522 "num_base_bdevs_discovered": 3, 00:28:37.522 "num_base_bdevs_operational": 4, 00:28:37.522 "base_bdevs_list": [ 00:28:37.522 { 00:28:37.522 "name": "BaseBdev1", 00:28:37.522 "uuid": "17c581ca-993c-4c08-bf88-d8843d435742", 00:28:37.522 "is_configured": true, 00:28:37.522 "data_offset": 0, 00:28:37.522 "data_size": 65536 00:28:37.522 }, 00:28:37.522 { 00:28:37.522 "name": null, 00:28:37.522 "uuid": "3188bd68-b855-46f4-973f-75d83471d76a", 00:28:37.522 "is_configured": false, 00:28:37.522 "data_offset": 0, 00:28:37.522 "data_size": 65536 00:28:37.522 }, 00:28:37.522 { 00:28:37.522 "name": "BaseBdev3", 00:28:37.522 "uuid": "dd229584-1f75-41ce-b0b3-1037cc36fcff", 00:28:37.522 "is_configured": true, 00:28:37.522 "data_offset": 0, 00:28:37.522 "data_size": 65536 00:28:37.522 }, 00:28:37.522 { 00:28:37.522 "name": "BaseBdev4", 00:28:37.522 "uuid": "f9faa62d-7226-4066-962f-778b467aa818", 00:28:37.522 "is_configured": true, 00:28:37.522 "data_offset": 0, 00:28:37.522 
"data_size": 65536 00:28:37.522 } 00:28:37.522 ] 00:28:37.522 }' 00:28:37.522 15:22:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:37.522 15:22:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.780 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.780 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:38.039 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:28:38.039 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:28:38.296 [2024-07-23 15:22:33.605015] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:38.296 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:38.296 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:38.296 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:38.296 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:38.296 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:38.296 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:38.296 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:38.296 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:38.296 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:38.296 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:38.296 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.296 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:38.554 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:38.554 "name": "Existed_Raid", 00:28:38.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.554 "strip_size_kb": 64, 00:28:38.554 "state": "configuring", 00:28:38.554 "raid_level": "raid5f", 00:28:38.554 "superblock": false, 00:28:38.554 "num_base_bdevs": 4, 00:28:38.554 "num_base_bdevs_discovered": 2, 00:28:38.554 "num_base_bdevs_operational": 4, 00:28:38.554 "base_bdevs_list": [ 00:28:38.554 { 00:28:38.554 "name": "BaseBdev1", 00:28:38.554 "uuid": "17c581ca-993c-4c08-bf88-d8843d435742", 00:28:38.555 "is_configured": true, 00:28:38.555 "data_offset": 0, 00:28:38.555 "data_size": 65536 00:28:38.555 }, 00:28:38.555 { 00:28:38.555 "name": null, 00:28:38.555 "uuid": "3188bd68-b855-46f4-973f-75d83471d76a", 00:28:38.555 "is_configured": false, 00:28:38.555 "data_offset": 0, 00:28:38.555 "data_size": 65536 00:28:38.555 }, 00:28:38.555 { 00:28:38.555 "name": null, 00:28:38.555 "uuid": 
"dd229584-1f75-41ce-b0b3-1037cc36fcff", 00:28:38.555 "is_configured": false, 00:28:38.555 "data_offset": 0, 00:28:38.555 "data_size": 65536 00:28:38.555 }, 00:28:38.555 { 00:28:38.555 "name": "BaseBdev4", 00:28:38.555 "uuid": "f9faa62d-7226-4066-962f-778b467aa818", 00:28:38.555 "is_configured": true, 00:28:38.555 "data_offset": 0, 00:28:38.555 "data_size": 65536 00:28:38.555 } 00:28:38.555 ] 00:28:38.555 }' 00:28:38.555 15:22:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:38.555 15:22:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.813 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:38.813 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.085 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:28:39.085 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:39.344 [2024-07-23 15:22:34.597304] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:39.344 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:39.344 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:39.344 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:39.344 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:39.344 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:39.344 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:39.344 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:39.344 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:39.344 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:39.344 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:39.344 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.344 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:39.603 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:39.603 "name": "Existed_Raid", 00:28:39.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.603 "strip_size_kb": 64, 00:28:39.603 "state": "configuring", 00:28:39.603 "raid_level": "raid5f", 00:28:39.603 "superblock": false, 00:28:39.603 "num_base_bdevs": 4, 00:28:39.603 "num_base_bdevs_discovered": 3, 00:28:39.603 "num_base_bdevs_operational": 4, 00:28:39.603 "base_bdevs_list": [ 00:28:39.603 { 00:28:39.603 "name": "BaseBdev1", 00:28:39.603 "uuid": "17c581ca-993c-4c08-bf88-d8843d435742", 00:28:39.603 "is_configured": true, 00:28:39.603 
"data_offset": 0, 00:28:39.603 "data_size": 65536 00:28:39.603 }, 00:28:39.603 { 00:28:39.603 "name": null, 00:28:39.603 "uuid": "3188bd68-b855-46f4-973f-75d83471d76a", 00:28:39.603 "is_configured": false, 00:28:39.603 "data_offset": 0, 00:28:39.603 "data_size": 65536 00:28:39.603 }, 00:28:39.603 { 00:28:39.603 "name": "BaseBdev3", 00:28:39.603 "uuid": "dd229584-1f75-41ce-b0b3-1037cc36fcff", 00:28:39.603 "is_configured": true, 00:28:39.603 "data_offset": 0, 00:28:39.603 "data_size": 65536 00:28:39.603 }, 00:28:39.603 { 00:28:39.603 "name": "BaseBdev4", 00:28:39.603 "uuid": "f9faa62d-7226-4066-962f-778b467aa818", 00:28:39.603 "is_configured": true, 00:28:39.603 "data_offset": 0, 00:28:39.603 "data_size": 65536 00:28:39.603 } 00:28:39.603 ] 00:28:39.603 }' 00:28:39.603 15:22:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:39.603 15:22:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.863 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.863 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:40.122 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:28:40.122 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:40.381 [2024-07-23 15:22:35.633637] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:40.381 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:40.381 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:40.381 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:40.381 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:40.381 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:40.381 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:40.381 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:40.381 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:40.381 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:40.381 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:40.381 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.381 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:40.640 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:40.640 "name": "Existed_Raid", 00:28:40.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:40.640 "strip_size_kb": 64, 00:28:40.640 "state": "configuring", 00:28:40.640 "raid_level": "raid5f", 00:28:40.640 "superblock": false, 00:28:40.640 
"num_base_bdevs": 4, 00:28:40.640 "num_base_bdevs_discovered": 2, 00:28:40.640 "num_base_bdevs_operational": 4, 00:28:40.640 "base_bdevs_list": [ 00:28:40.640 { 00:28:40.640 "name": null, 00:28:40.640 "uuid": "17c581ca-993c-4c08-bf88-d8843d435742", 00:28:40.640 "is_configured": false, 00:28:40.640 "data_offset": 0, 00:28:40.640 "data_size": 65536 00:28:40.640 }, 00:28:40.640 { 00:28:40.640 "name": null, 00:28:40.640 "uuid": "3188bd68-b855-46f4-973f-75d83471d76a", 00:28:40.640 "is_configured": false, 00:28:40.640 "data_offset": 0, 00:28:40.640 "data_size": 65536 00:28:40.640 }, 00:28:40.640 { 00:28:40.640 "name": "BaseBdev3", 00:28:40.640 "uuid": "dd229584-1f75-41ce-b0b3-1037cc36fcff", 00:28:40.640 "is_configured": true, 00:28:40.640 "data_offset": 0, 00:28:40.640 "data_size": 65536 00:28:40.640 }, 00:28:40.640 { 00:28:40.640 "name": "BaseBdev4", 00:28:40.640 "uuid": "f9faa62d-7226-4066-962f-778b467aa818", 00:28:40.640 "is_configured": true, 00:28:40.640 "data_offset": 0, 00:28:40.640 "data_size": 65536 00:28:40.640 } 00:28:40.640 ] 00:28:40.640 }' 00:28:40.640 15:22:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:40.640 15:22:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.900 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:40.900 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.159 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:28:41.159 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:41.419 [2024-07-23 15:22:36.623106] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:41.419 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:41.419 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:41.419 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:41.419 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:41.419 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:41.419 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:41.419 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:41.419 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:41.419 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:41.419 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:41.419 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:41.419 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.711 15:22:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:41.711 "name": "Existed_Raid", 00:28:41.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.711 "strip_size_kb": 64, 00:28:41.711 "state": "configuring", 00:28:41.711 "raid_level": "raid5f", 00:28:41.711 "superblock": false, 00:28:41.711 "num_base_bdevs": 4, 00:28:41.711 "num_base_bdevs_discovered": 3, 00:28:41.711 "num_base_bdevs_operational": 4, 00:28:41.711 "base_bdevs_list": [ 00:28:41.711 { 00:28:41.711 "name": null, 00:28:41.711 "uuid": "17c581ca-993c-4c08-bf88-d8843d435742", 00:28:41.711 "is_configured": false, 00:28:41.711 "data_offset": 0, 00:28:41.711 "data_size": 65536 00:28:41.711 }, 00:28:41.711 { 00:28:41.711 "name": "BaseBdev2", 00:28:41.711 "uuid": "3188bd68-b855-46f4-973f-75d83471d76a", 00:28:41.711 "is_configured": true, 00:28:41.711 "data_offset": 0, 00:28:41.711 "data_size": 65536 00:28:41.711 }, 00:28:41.711 { 00:28:41.711 "name": "BaseBdev3", 00:28:41.711 "uuid": "dd229584-1f75-41ce-b0b3-1037cc36fcff", 00:28:41.711 "is_configured": true, 00:28:41.711 "data_offset": 0, 00:28:41.711 "data_size": 65536 00:28:41.711 }, 00:28:41.711 { 00:28:41.711 "name": "BaseBdev4", 00:28:41.711 "uuid": "f9faa62d-7226-4066-962f-778b467aa818", 00:28:41.711 "is_configured": true, 00:28:41.711 "data_offset": 0, 00:28:41.711 "data_size": 65536 00:28:41.711 } 00:28:41.711 ] 00:28:41.711 }' 00:28:41.711 15:22:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:41.711 15:22:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.970 15:22:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.970 15:22:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:42.229 15:22:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:28:42.229 15:22:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:42.229 15:22:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.490 15:22:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 17c581ca-993c-4c08-bf88-d8843d435742 00:28:42.748 [2024-07-23 15:22:37.952915] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:42.749 [2024-07-23 15:22:37.952982] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:28:42.749 [2024-07-23 15:22:37.952992] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:28:42.749 [2024-07-23 15:22:37.953096] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002600 00:28:42.749 [2024-07-23 15:22:37.953861] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:28:42.749 [2024-07-23 15:22:37.953889] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008180 00:28:42.749 [2024-07-23 15:22:37.954091] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:42.749 NewBaseBdev 00:28:42.749 15:22:37 
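[editor's note] The NewBaseBdev step is the interesting one: a malloc bdev created under a different name but with the original member UUID is claimed automatically and the array completes. A sketch, reading the UUID back the same way bdev_raid.sh@333 does (variable names are illustrative):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Recover slot 0's UUID, recreate the member under a new name, and the raid goes online.
  uuid=$($rpc bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
  $rpc bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
  $rpc bdev_raid_get_bdevs all | jq -r '.[0].state'   # expect: online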
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:28:42.749 15:22:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:28:42.749 15:22:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:42.749 15:22:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:28:42.749 15:22:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:42.749 15:22:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:42.749 15:22:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:43.007 15:22:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:43.007 [ 00:28:43.007 { 00:28:43.007 "name": "NewBaseBdev", 00:28:43.007 "aliases": [ 00:28:43.007 "17c581ca-993c-4c08-bf88-d8843d435742" 00:28:43.007 ], 00:28:43.007 "product_name": "Malloc disk", 00:28:43.007 "block_size": 512, 00:28:43.007 "num_blocks": 65536, 00:28:43.007 "uuid": "17c581ca-993c-4c08-bf88-d8843d435742", 00:28:43.007 "assigned_rate_limits": { 00:28:43.007 "rw_ios_per_sec": 0, 00:28:43.007 "rw_mbytes_per_sec": 0, 00:28:43.007 "r_mbytes_per_sec": 0, 00:28:43.007 "w_mbytes_per_sec": 0 00:28:43.007 }, 00:28:43.007 "claimed": true, 00:28:43.007 "claim_type": "exclusive_write", 00:28:43.007 "zoned": false, 00:28:43.007 "supported_io_types": { 00:28:43.007 "read": true, 00:28:43.007 "write": true, 00:28:43.007 "unmap": true, 00:28:43.007 "flush": true, 00:28:43.007 "reset": true, 00:28:43.007 "nvme_admin": false, 00:28:43.007 "nvme_io": false, 00:28:43.007 "nvme_io_md": false, 00:28:43.007 "write_zeroes": true, 00:28:43.007 "zcopy": true, 00:28:43.007 "get_zone_info": false, 00:28:43.007 "zone_management": false, 00:28:43.007 "zone_append": false, 00:28:43.007 "compare": false, 00:28:43.007 "compare_and_write": false, 00:28:43.007 "abort": true, 00:28:43.007 "seek_hole": false, 00:28:43.007 "seek_data": false, 00:28:43.007 "copy": true, 00:28:43.007 "nvme_iov_md": false 00:28:43.007 }, 00:28:43.007 "memory_domains": [ 00:28:43.007 { 00:28:43.007 "dma_device_id": "system", 00:28:43.007 "dma_device_type": 1 00:28:43.007 }, 00:28:43.007 { 00:28:43.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:43.007 "dma_device_type": 2 00:28:43.007 } 00:28:43.007 ], 00:28:43.007 "driver_specific": {} 00:28:43.007 } 00:28:43.007 ] 00:28:43.007 15:22:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:28:43.007 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:28:43.007 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:43.007 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:43.007 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:43.007 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:43.007 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:28:43.007 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:43.007 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:43.007 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:43.007 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:43.007 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.266 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:43.266 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:43.266 "name": "Existed_Raid", 00:28:43.266 "uuid": "51e3c37b-7bdc-44d9-8786-db06d1102c06", 00:28:43.266 "strip_size_kb": 64, 00:28:43.266 "state": "online", 00:28:43.266 "raid_level": "raid5f", 00:28:43.266 "superblock": false, 00:28:43.266 "num_base_bdevs": 4, 00:28:43.266 "num_base_bdevs_discovered": 4, 00:28:43.266 "num_base_bdevs_operational": 4, 00:28:43.266 "base_bdevs_list": [ 00:28:43.266 { 00:28:43.266 "name": "NewBaseBdev", 00:28:43.266 "uuid": "17c581ca-993c-4c08-bf88-d8843d435742", 00:28:43.266 "is_configured": true, 00:28:43.266 "data_offset": 0, 00:28:43.266 "data_size": 65536 00:28:43.266 }, 00:28:43.266 { 00:28:43.266 "name": "BaseBdev2", 00:28:43.266 "uuid": "3188bd68-b855-46f4-973f-75d83471d76a", 00:28:43.266 "is_configured": true, 00:28:43.266 "data_offset": 0, 00:28:43.266 "data_size": 65536 00:28:43.266 }, 00:28:43.266 { 00:28:43.266 "name": "BaseBdev3", 00:28:43.266 "uuid": "dd229584-1f75-41ce-b0b3-1037cc36fcff", 00:28:43.266 "is_configured": true, 00:28:43.266 "data_offset": 0, 00:28:43.266 "data_size": 65536 00:28:43.266 }, 00:28:43.266 { 00:28:43.266 "name": "BaseBdev4", 00:28:43.266 "uuid": "f9faa62d-7226-4066-962f-778b467aa818", 00:28:43.266 "is_configured": true, 00:28:43.266 "data_offset": 0, 00:28:43.266 "data_size": 65536 00:28:43.266 } 00:28:43.266 ] 00:28:43.266 }' 00:28:43.266 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:43.266 15:22:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.524 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:28:43.524 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:43.524 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:43.524 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:43.524 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:43.524 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:28:43.524 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:43.524 15:22:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:43.783 [2024-07-23 15:22:39.061496] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:43.783 15:22:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:43.783 "name": "Existed_Raid", 00:28:43.783 "aliases": [ 00:28:43.783 "51e3c37b-7bdc-44d9-8786-db06d1102c06" 00:28:43.783 ], 00:28:43.783 "product_name": "Raid Volume", 00:28:43.783 "block_size": 512, 00:28:43.783 "num_blocks": 196608, 00:28:43.783 "uuid": "51e3c37b-7bdc-44d9-8786-db06d1102c06", 00:28:43.783 "assigned_rate_limits": { 00:28:43.783 "rw_ios_per_sec": 0, 00:28:43.783 "rw_mbytes_per_sec": 0, 00:28:43.783 "r_mbytes_per_sec": 0, 00:28:43.783 "w_mbytes_per_sec": 0 00:28:43.783 }, 00:28:43.783 "claimed": false, 00:28:43.783 "zoned": false, 00:28:43.783 "supported_io_types": { 00:28:43.783 "read": true, 00:28:43.783 "write": true, 00:28:43.783 "unmap": false, 00:28:43.783 "flush": false, 00:28:43.783 "reset": true, 00:28:43.783 "nvme_admin": false, 00:28:43.783 "nvme_io": false, 00:28:43.783 "nvme_io_md": false, 00:28:43.783 "write_zeroes": true, 00:28:43.783 "zcopy": false, 00:28:43.783 "get_zone_info": false, 00:28:43.783 "zone_management": false, 00:28:43.783 "zone_append": false, 00:28:43.783 "compare": false, 00:28:43.783 "compare_and_write": false, 00:28:43.783 "abort": false, 00:28:43.783 "seek_hole": false, 00:28:43.783 "seek_data": false, 00:28:43.783 "copy": false, 00:28:43.783 "nvme_iov_md": false 00:28:43.783 }, 00:28:43.783 "driver_specific": { 00:28:43.783 "raid": { 00:28:43.783 "uuid": "51e3c37b-7bdc-44d9-8786-db06d1102c06", 00:28:43.783 "strip_size_kb": 64, 00:28:43.783 "state": "online", 00:28:43.783 "raid_level": "raid5f", 00:28:43.783 "superblock": false, 00:28:43.783 "num_base_bdevs": 4, 00:28:43.783 "num_base_bdevs_discovered": 4, 00:28:43.783 "num_base_bdevs_operational": 4, 00:28:43.783 "base_bdevs_list": [ 00:28:43.783 { 00:28:43.783 "name": "NewBaseBdev", 00:28:43.783 "uuid": "17c581ca-993c-4c08-bf88-d8843d435742", 00:28:43.783 "is_configured": true, 00:28:43.783 "data_offset": 0, 00:28:43.783 "data_size": 65536 00:28:43.783 }, 00:28:43.783 { 00:28:43.783 "name": "BaseBdev2", 00:28:43.783 "uuid": "3188bd68-b855-46f4-973f-75d83471d76a", 00:28:43.783 "is_configured": true, 00:28:43.783 "data_offset": 0, 00:28:43.783 "data_size": 65536 00:28:43.783 }, 00:28:43.783 { 00:28:43.783 "name": "BaseBdev3", 00:28:43.783 "uuid": "dd229584-1f75-41ce-b0b3-1037cc36fcff", 00:28:43.783 "is_configured": true, 00:28:43.783 "data_offset": 0, 00:28:43.783 "data_size": 65536 00:28:43.783 }, 00:28:43.783 { 00:28:43.783 "name": "BaseBdev4", 00:28:43.783 "uuid": "f9faa62d-7226-4066-962f-778b467aa818", 00:28:43.783 "is_configured": true, 00:28:43.783 "data_offset": 0, 00:28:43.783 "data_size": 65536 00:28:43.783 } 00:28:43.783 ] 00:28:43.783 } 00:28:43.783 } 00:28:43.783 }' 00:28:43.783 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:43.783 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:28:43.783 BaseBdev2 00:28:43.783 BaseBdev3 00:28:43.783 BaseBdev4' 00:28:43.783 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:43.783 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:28:43.783 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- 
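[editor's note] verify_raid_bdev_properties, traced across the lines around here, cross-checks layout fields between the raid volume and every configured member. A compact sketch of that comparison (only block_size shown; the script also checks md_size, md_interleave and dif_type):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
  for name in $(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid"); do
      base=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      # Each member must use the same 512-byte block size the volume exposes.
      [ "$(jq .block_size <<< "$raid")" = "$(jq .block_size <<< "$base")" ] || echo "block_size mismatch: $name"
  done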
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:44.041 "name": "NewBaseBdev", 00:28:44.041 "aliases": [ 00:28:44.041 "17c581ca-993c-4c08-bf88-d8843d435742" 00:28:44.041 ], 00:28:44.041 "product_name": "Malloc disk", 00:28:44.041 "block_size": 512, 00:28:44.041 "num_blocks": 65536, 00:28:44.041 "uuid": "17c581ca-993c-4c08-bf88-d8843d435742", 00:28:44.041 "assigned_rate_limits": { 00:28:44.041 "rw_ios_per_sec": 0, 00:28:44.041 "rw_mbytes_per_sec": 0, 00:28:44.041 "r_mbytes_per_sec": 0, 00:28:44.041 "w_mbytes_per_sec": 0 00:28:44.041 }, 00:28:44.041 "claimed": true, 00:28:44.041 "claim_type": "exclusive_write", 00:28:44.041 "zoned": false, 00:28:44.041 "supported_io_types": { 00:28:44.041 "read": true, 00:28:44.041 "write": true, 00:28:44.041 "unmap": true, 00:28:44.041 "flush": true, 00:28:44.041 "reset": true, 00:28:44.041 "nvme_admin": false, 00:28:44.041 "nvme_io": false, 00:28:44.041 "nvme_io_md": false, 00:28:44.041 "write_zeroes": true, 00:28:44.041 "zcopy": true, 00:28:44.041 "get_zone_info": false, 00:28:44.041 "zone_management": false, 00:28:44.041 "zone_append": false, 00:28:44.041 "compare": false, 00:28:44.041 "compare_and_write": false, 00:28:44.041 "abort": true, 00:28:44.041 "seek_hole": false, 00:28:44.041 "seek_data": false, 00:28:44.041 "copy": true, 00:28:44.041 "nvme_iov_md": false 00:28:44.041 }, 00:28:44.041 "memory_domains": [ 00:28:44.041 { 00:28:44.041 "dma_device_id": "system", 00:28:44.041 "dma_device_type": 1 00:28:44.041 }, 00:28:44.041 { 00:28:44.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:44.041 "dma_device_type": 2 00:28:44.041 } 00:28:44.041 ], 00:28:44.041 "driver_specific": {} 00:28:44.041 }' 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:44.041 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:44.299 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:44.299 "name": "BaseBdev2", 00:28:44.299 "aliases": [ 00:28:44.299 "3188bd68-b855-46f4-973f-75d83471d76a" 
00:28:44.299 ], 00:28:44.299 "product_name": "Malloc disk", 00:28:44.299 "block_size": 512, 00:28:44.299 "num_blocks": 65536, 00:28:44.299 "uuid": "3188bd68-b855-46f4-973f-75d83471d76a", 00:28:44.299 "assigned_rate_limits": { 00:28:44.299 "rw_ios_per_sec": 0, 00:28:44.299 "rw_mbytes_per_sec": 0, 00:28:44.299 "r_mbytes_per_sec": 0, 00:28:44.299 "w_mbytes_per_sec": 0 00:28:44.299 }, 00:28:44.299 "claimed": true, 00:28:44.299 "claim_type": "exclusive_write", 00:28:44.299 "zoned": false, 00:28:44.299 "supported_io_types": { 00:28:44.299 "read": true, 00:28:44.299 "write": true, 00:28:44.299 "unmap": true, 00:28:44.299 "flush": true, 00:28:44.299 "reset": true, 00:28:44.299 "nvme_admin": false, 00:28:44.299 "nvme_io": false, 00:28:44.299 "nvme_io_md": false, 00:28:44.299 "write_zeroes": true, 00:28:44.299 "zcopy": true, 00:28:44.299 "get_zone_info": false, 00:28:44.299 "zone_management": false, 00:28:44.299 "zone_append": false, 00:28:44.299 "compare": false, 00:28:44.299 "compare_and_write": false, 00:28:44.299 "abort": true, 00:28:44.299 "seek_hole": false, 00:28:44.299 "seek_data": false, 00:28:44.299 "copy": true, 00:28:44.299 "nvme_iov_md": false 00:28:44.299 }, 00:28:44.299 "memory_domains": [ 00:28:44.299 { 00:28:44.299 "dma_device_id": "system", 00:28:44.299 "dma_device_type": 1 00:28:44.299 }, 00:28:44.299 { 00:28:44.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:44.299 "dma_device_type": 2 00:28:44.299 } 00:28:44.299 ], 00:28:44.299 "driver_specific": {} 00:28:44.299 }' 00:28:44.299 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:44.557 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:44.557 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:44.557 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:44.557 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:44.557 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:44.557 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:44.557 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:44.557 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:44.557 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:44.557 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:44.557 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:44.557 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:44.557 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:44.557 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:44.815 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:44.815 "name": "BaseBdev3", 00:28:44.815 "aliases": [ 00:28:44.815 "dd229584-1f75-41ce-b0b3-1037cc36fcff" 00:28:44.815 ], 00:28:44.815 "product_name": "Malloc disk", 00:28:44.815 "block_size": 512, 00:28:44.815 "num_blocks": 65536, 00:28:44.815 "uuid": 
"dd229584-1f75-41ce-b0b3-1037cc36fcff", 00:28:44.815 "assigned_rate_limits": { 00:28:44.815 "rw_ios_per_sec": 0, 00:28:44.815 "rw_mbytes_per_sec": 0, 00:28:44.815 "r_mbytes_per_sec": 0, 00:28:44.815 "w_mbytes_per_sec": 0 00:28:44.815 }, 00:28:44.815 "claimed": true, 00:28:44.815 "claim_type": "exclusive_write", 00:28:44.815 "zoned": false, 00:28:44.815 "supported_io_types": { 00:28:44.815 "read": true, 00:28:44.815 "write": true, 00:28:44.815 "unmap": true, 00:28:44.815 "flush": true, 00:28:44.815 "reset": true, 00:28:44.815 "nvme_admin": false, 00:28:44.815 "nvme_io": false, 00:28:44.815 "nvme_io_md": false, 00:28:44.815 "write_zeroes": true, 00:28:44.815 "zcopy": true, 00:28:44.815 "get_zone_info": false, 00:28:44.815 "zone_management": false, 00:28:44.815 "zone_append": false, 00:28:44.815 "compare": false, 00:28:44.815 "compare_and_write": false, 00:28:44.815 "abort": true, 00:28:44.815 "seek_hole": false, 00:28:44.815 "seek_data": false, 00:28:44.815 "copy": true, 00:28:44.815 "nvme_iov_md": false 00:28:44.815 }, 00:28:44.815 "memory_domains": [ 00:28:44.815 { 00:28:44.815 "dma_device_id": "system", 00:28:44.815 "dma_device_type": 1 00:28:44.815 }, 00:28:44.815 { 00:28:44.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:44.815 "dma_device_type": 2 00:28:44.815 } 00:28:44.815 ], 00:28:44.815 "driver_specific": {} 00:28:44.815 }' 00:28:44.816 15:22:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:44.816 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:44.816 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:44.816 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:44.816 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:44.816 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:44.816 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:44.816 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:44.816 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:44.816 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:44.816 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:44.816 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:44.816 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:44.816 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:44.816 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:45.074 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:45.074 "name": "BaseBdev4", 00:28:45.074 "aliases": [ 00:28:45.074 "f9faa62d-7226-4066-962f-778b467aa818" 00:28:45.074 ], 00:28:45.074 "product_name": "Malloc disk", 00:28:45.074 "block_size": 512, 00:28:45.074 "num_blocks": 65536, 00:28:45.074 "uuid": "f9faa62d-7226-4066-962f-778b467aa818", 00:28:45.074 "assigned_rate_limits": { 00:28:45.074 "rw_ios_per_sec": 0, 00:28:45.074 "rw_mbytes_per_sec": 0, 00:28:45.074 
"r_mbytes_per_sec": 0, 00:28:45.074 "w_mbytes_per_sec": 0 00:28:45.074 }, 00:28:45.074 "claimed": true, 00:28:45.074 "claim_type": "exclusive_write", 00:28:45.074 "zoned": false, 00:28:45.074 "supported_io_types": { 00:28:45.074 "read": true, 00:28:45.074 "write": true, 00:28:45.074 "unmap": true, 00:28:45.074 "flush": true, 00:28:45.074 "reset": true, 00:28:45.074 "nvme_admin": false, 00:28:45.074 "nvme_io": false, 00:28:45.074 "nvme_io_md": false, 00:28:45.074 "write_zeroes": true, 00:28:45.074 "zcopy": true, 00:28:45.074 "get_zone_info": false, 00:28:45.074 "zone_management": false, 00:28:45.074 "zone_append": false, 00:28:45.074 "compare": false, 00:28:45.074 "compare_and_write": false, 00:28:45.074 "abort": true, 00:28:45.074 "seek_hole": false, 00:28:45.074 "seek_data": false, 00:28:45.074 "copy": true, 00:28:45.074 "nvme_iov_md": false 00:28:45.074 }, 00:28:45.074 "memory_domains": [ 00:28:45.074 { 00:28:45.074 "dma_device_id": "system", 00:28:45.074 "dma_device_type": 1 00:28:45.074 }, 00:28:45.074 { 00:28:45.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:45.074 "dma_device_type": 2 00:28:45.074 } 00:28:45.074 ], 00:28:45.074 "driver_specific": {} 00:28:45.074 }' 00:28:45.074 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:45.074 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:45.074 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:45.074 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:45.074 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:45.074 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:45.074 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:45.074 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:45.074 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:45.074 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:45.074 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:45.074 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:45.074 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:45.332 [2024-07-23 15:22:40.633642] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:45.332 [2024-07-23 15:22:40.633687] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:45.332 [2024-07-23 15:22:40.633791] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:45.333 [2024-07-23 15:22:40.634091] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:45.333 [2024-07-23 15:22:40.634115] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name Existed_Raid, state offline 00:28:45.333 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 116150 00:28:45.333 15:22:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 116150 ']' 
00:28:45.333 15:22:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # kill -0 116150 00:28:45.333 15:22:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # uname 00:28:45.333 15:22:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:45.333 15:22:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116150 00:28:45.333 killing process with pid 116150 00:28:45.333 15:22:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:45.333 15:22:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:45.333 15:22:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116150' 00:28:45.333 15:22:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@967 -- # kill 116150 00:28:45.333 15:22:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # wait 116150 00:28:45.333 [2024-07-23 15:22:40.692431] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:45.333 [2024-07-23 15:22:40.740307] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:45.590 ************************************ 00:28:45.591 END TEST raid5f_state_function_test 00:28:45.591 ************************************ 00:28:45.591 15:22:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:28:45.591 00:28:45.591 real 0m23.930s 00:28:45.591 user 0m41.729s 00:28:45.591 sys 0m5.309s 00:28:45.591 15:22:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:45.591 15:22:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:45.848 15:22:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:45.848 15:22:41 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:28:45.848 15:22:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:28:45.848 15:22:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:45.848 15:22:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:45.848 ************************************ 00:28:45.848 START TEST raid5f_state_function_test_sb 00:28:45.848 ************************************ 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 4 true 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:45.848 15:22:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:28:45.848 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=117106 00:28:45.849 Process raid pid: 117106 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 117106' 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 117106 /var/tmp/spdk-raid.sock 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 117106 ']' 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:45.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
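The @243-@246 trace above boils down to launching a bare bdev_svc application with bdev_raid debug logging enabled and pointing every later RPC at its private UNIX socket. A stand-alone sketch of that setup, assuming the same SPDK checkout path used in this run; the polling loop is a simplification of the waitforlisten helper shown above:

    # start the bdev service app on a dedicated RPC socket with bdev_raid debug logs
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # wait until the app answers RPCs on the UNIX domain socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done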
00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:45.849 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:45.849 [2024-07-23 15:22:41.121254] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:28:45.849 [2024-07-23 15:22:41.121397] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.849 [2024-07-23 15:22:41.262443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.106 [2024-07-23 15:22:41.337908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.106 [2024-07-23 15:22:41.419623] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:46.674 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:46.674 15:22:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:28:46.674 15:22:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:46.932 [2024-07-23 15:22:42.123622] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:46.932 [2024-07-23 15:22:42.123706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:46.932 [2024-07-23 15:22:42.123727] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:46.932 [2024-07-23 15:22:42.123745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:46.932 [2024-07-23 15:22:42.123758] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:46.932 [2024-07-23 15:22:42.123773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:46.932 [2024-07-23 15:22:42.123781] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:46.932 [2024-07-23 15:22:42.123819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:46.932 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:46.932 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:46.932 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:46.932 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:46.933 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:46.933 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:46.933 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:28:46.933 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:46.933 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:46.933 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:46.933 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:46.933 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.191 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:47.191 "name": "Existed_Raid", 00:28:47.191 "uuid": "56bd26d8-19fe-4191-aad2-72bdf921dcfe", 00:28:47.191 "strip_size_kb": 64, 00:28:47.191 "state": "configuring", 00:28:47.191 "raid_level": "raid5f", 00:28:47.191 "superblock": true, 00:28:47.191 "num_base_bdevs": 4, 00:28:47.191 "num_base_bdevs_discovered": 0, 00:28:47.191 "num_base_bdevs_operational": 4, 00:28:47.191 "base_bdevs_list": [ 00:28:47.191 { 00:28:47.191 "name": "BaseBdev1", 00:28:47.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:47.191 "is_configured": false, 00:28:47.191 "data_offset": 0, 00:28:47.191 "data_size": 0 00:28:47.191 }, 00:28:47.191 { 00:28:47.191 "name": "BaseBdev2", 00:28:47.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:47.191 "is_configured": false, 00:28:47.191 "data_offset": 0, 00:28:47.191 "data_size": 0 00:28:47.191 }, 00:28:47.191 { 00:28:47.191 "name": "BaseBdev3", 00:28:47.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:47.191 "is_configured": false, 00:28:47.191 "data_offset": 0, 00:28:47.191 "data_size": 0 00:28:47.191 }, 00:28:47.191 { 00:28:47.191 "name": "BaseBdev4", 00:28:47.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:47.192 "is_configured": false, 00:28:47.192 "data_offset": 0, 00:28:47.192 "data_size": 0 00:28:47.192 } 00:28:47.192 ] 00:28:47.192 }' 00:28:47.192 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:47.192 15:22:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:47.450 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:47.708 [2024-07-23 15:22:42.955628] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:47.708 [2024-07-23 15:22:42.955694] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:28:47.708 15:22:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:47.967 [2024-07-23 15:22:43.143721] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:47.967 [2024-07-23 15:22:43.143806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:47.967 [2024-07-23 15:22:43.143818] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:47.967 [2024-07-23 15:22:43.143832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:28:47.967 [2024-07-23 15:22:43.143840] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:47.967 [2024-07-23 15:22:43.143853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:47.967 [2024-07-23 15:22:43.143861] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:47.967 [2024-07-23 15:22:43.143875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:47.967 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:47.967 [2024-07-23 15:22:43.331592] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:47.967 BaseBdev1 00:28:47.967 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:47.967 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:28:47.967 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:47.967 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:47.967 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:47.967 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:47.967 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:48.225 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:48.484 [ 00:28:48.484 { 00:28:48.484 "name": "BaseBdev1", 00:28:48.484 "aliases": [ 00:28:48.484 "cf726f9e-dd7f-4732-81fc-fc7ad1d32b67" 00:28:48.484 ], 00:28:48.484 "product_name": "Malloc disk", 00:28:48.484 "block_size": 512, 00:28:48.484 "num_blocks": 65536, 00:28:48.484 "uuid": "cf726f9e-dd7f-4732-81fc-fc7ad1d32b67", 00:28:48.484 "assigned_rate_limits": { 00:28:48.484 "rw_ios_per_sec": 0, 00:28:48.484 "rw_mbytes_per_sec": 0, 00:28:48.484 "r_mbytes_per_sec": 0, 00:28:48.484 "w_mbytes_per_sec": 0 00:28:48.484 }, 00:28:48.484 "claimed": true, 00:28:48.484 "claim_type": "exclusive_write", 00:28:48.484 "zoned": false, 00:28:48.484 "supported_io_types": { 00:28:48.484 "read": true, 00:28:48.484 "write": true, 00:28:48.484 "unmap": true, 00:28:48.484 "flush": true, 00:28:48.484 "reset": true, 00:28:48.484 "nvme_admin": false, 00:28:48.484 "nvme_io": false, 00:28:48.484 "nvme_io_md": false, 00:28:48.484 "write_zeroes": true, 00:28:48.484 "zcopy": true, 00:28:48.484 "get_zone_info": false, 00:28:48.484 "zone_management": false, 00:28:48.484 "zone_append": false, 00:28:48.484 "compare": false, 00:28:48.484 "compare_and_write": false, 00:28:48.484 "abort": true, 00:28:48.484 "seek_hole": false, 00:28:48.484 "seek_data": false, 00:28:48.484 "copy": true, 00:28:48.484 "nvme_iov_md": false 00:28:48.484 }, 00:28:48.484 "memory_domains": [ 00:28:48.484 { 00:28:48.484 "dma_device_id": "system", 00:28:48.484 "dma_device_type": 1 00:28:48.484 }, 00:28:48.484 { 00:28:48.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:48.484 "dma_device_type": 2 
00:28:48.484 } 00:28:48.484 ], 00:28:48.484 "driver_specific": {} 00:28:48.484 } 00:28:48.484 ] 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:48.484 "name": "Existed_Raid", 00:28:48.484 "uuid": "f539d77a-732c-4105-bd44-d9be84d0d563", 00:28:48.484 "strip_size_kb": 64, 00:28:48.484 "state": "configuring", 00:28:48.484 "raid_level": "raid5f", 00:28:48.484 "superblock": true, 00:28:48.484 "num_base_bdevs": 4, 00:28:48.484 "num_base_bdevs_discovered": 1, 00:28:48.484 "num_base_bdevs_operational": 4, 00:28:48.484 "base_bdevs_list": [ 00:28:48.484 { 00:28:48.484 "name": "BaseBdev1", 00:28:48.484 "uuid": "cf726f9e-dd7f-4732-81fc-fc7ad1d32b67", 00:28:48.484 "is_configured": true, 00:28:48.484 "data_offset": 2048, 00:28:48.484 "data_size": 63488 00:28:48.484 }, 00:28:48.484 { 00:28:48.484 "name": "BaseBdev2", 00:28:48.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.484 "is_configured": false, 00:28:48.484 "data_offset": 0, 00:28:48.484 "data_size": 0 00:28:48.484 }, 00:28:48.484 { 00:28:48.484 "name": "BaseBdev3", 00:28:48.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.484 "is_configured": false, 00:28:48.484 "data_offset": 0, 00:28:48.484 "data_size": 0 00:28:48.484 }, 00:28:48.484 { 00:28:48.484 "name": "BaseBdev4", 00:28:48.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.484 "is_configured": false, 00:28:48.484 "data_offset": 0, 00:28:48.484 "data_size": 0 00:28:48.484 } 00:28:48.484 ] 00:28:48.484 }' 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:48.484 15:22:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.051 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete Existed_Raid 00:28:49.051 [2024-07-23 15:22:44.339963] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:49.051 [2024-07-23 15:22:44.340045] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:28:49.051 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:49.310 [2024-07-23 15:22:44.520084] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:49.310 [2024-07-23 15:22:44.522587] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:49.310 [2024-07-23 15:22:44.522653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:49.310 [2024-07-23 15:22:44.522665] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:49.310 [2024-07-23 15:22:44.522678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:49.310 [2024-07-23 15:22:44.522686] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:49.310 [2024-07-23 15:22:44.522699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:49.310 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:49.310 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:49.310 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:49.310 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:49.310 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:49.310 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:49.310 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:49.310 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:49.310 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:49.310 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:49.310 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:49.310 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:49.310 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:49.310 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.568 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:49.568 "name": "Existed_Raid", 00:28:49.568 "uuid": "96f3aa7d-1a4d-4a13-8c8b-5e6ca6ef24f2", 00:28:49.568 "strip_size_kb": 64, 00:28:49.568 "state": "configuring", 00:28:49.568 "raid_level": "raid5f", 
00:28:49.568 "superblock": true, 00:28:49.568 "num_base_bdevs": 4, 00:28:49.568 "num_base_bdevs_discovered": 1, 00:28:49.568 "num_base_bdevs_operational": 4, 00:28:49.568 "base_bdevs_list": [ 00:28:49.568 { 00:28:49.568 "name": "BaseBdev1", 00:28:49.568 "uuid": "cf726f9e-dd7f-4732-81fc-fc7ad1d32b67", 00:28:49.568 "is_configured": true, 00:28:49.568 "data_offset": 2048, 00:28:49.568 "data_size": 63488 00:28:49.568 }, 00:28:49.568 { 00:28:49.568 "name": "BaseBdev2", 00:28:49.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:49.568 "is_configured": false, 00:28:49.568 "data_offset": 0, 00:28:49.568 "data_size": 0 00:28:49.568 }, 00:28:49.568 { 00:28:49.568 "name": "BaseBdev3", 00:28:49.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:49.568 "is_configured": false, 00:28:49.568 "data_offset": 0, 00:28:49.568 "data_size": 0 00:28:49.568 }, 00:28:49.568 { 00:28:49.568 "name": "BaseBdev4", 00:28:49.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:49.568 "is_configured": false, 00:28:49.568 "data_offset": 0, 00:28:49.568 "data_size": 0 00:28:49.568 } 00:28:49.568 ] 00:28:49.568 }' 00:28:49.568 15:22:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:49.568 15:22:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.826 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:49.826 [2024-07-23 15:22:45.233252] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:49.826 BaseBdev2 00:28:49.826 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:49.826 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:28:49.826 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:49.826 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:49.826 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:49.826 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:49.826 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:50.084 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:50.343 [ 00:28:50.343 { 00:28:50.343 "name": "BaseBdev2", 00:28:50.343 "aliases": [ 00:28:50.343 "6ae14915-83ac-4b95-9b8c-58b03c9c7bae" 00:28:50.343 ], 00:28:50.343 "product_name": "Malloc disk", 00:28:50.343 "block_size": 512, 00:28:50.343 "num_blocks": 65536, 00:28:50.343 "uuid": "6ae14915-83ac-4b95-9b8c-58b03c9c7bae", 00:28:50.343 "assigned_rate_limits": { 00:28:50.343 "rw_ios_per_sec": 0, 00:28:50.343 "rw_mbytes_per_sec": 0, 00:28:50.343 "r_mbytes_per_sec": 0, 00:28:50.343 "w_mbytes_per_sec": 0 00:28:50.343 }, 00:28:50.343 "claimed": true, 00:28:50.343 "claim_type": "exclusive_write", 00:28:50.343 "zoned": false, 00:28:50.343 "supported_io_types": { 00:28:50.343 "read": true, 00:28:50.343 "write": true, 00:28:50.343 "unmap": true, 00:28:50.343 "flush": 
true, 00:28:50.343 "reset": true, 00:28:50.343 "nvme_admin": false, 00:28:50.343 "nvme_io": false, 00:28:50.343 "nvme_io_md": false, 00:28:50.343 "write_zeroes": true, 00:28:50.343 "zcopy": true, 00:28:50.343 "get_zone_info": false, 00:28:50.343 "zone_management": false, 00:28:50.343 "zone_append": false, 00:28:50.343 "compare": false, 00:28:50.343 "compare_and_write": false, 00:28:50.343 "abort": true, 00:28:50.343 "seek_hole": false, 00:28:50.343 "seek_data": false, 00:28:50.343 "copy": true, 00:28:50.343 "nvme_iov_md": false 00:28:50.343 }, 00:28:50.343 "memory_domains": [ 00:28:50.343 { 00:28:50.343 "dma_device_id": "system", 00:28:50.343 "dma_device_type": 1 00:28:50.343 }, 00:28:50.343 { 00:28:50.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:50.343 "dma_device_type": 2 00:28:50.343 } 00:28:50.343 ], 00:28:50.343 "driver_specific": {} 00:28:50.343 } 00:28:50.343 ] 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.343 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:50.602 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:50.602 "name": "Existed_Raid", 00:28:50.602 "uuid": "96f3aa7d-1a4d-4a13-8c8b-5e6ca6ef24f2", 00:28:50.602 "strip_size_kb": 64, 00:28:50.602 "state": "configuring", 00:28:50.602 "raid_level": "raid5f", 00:28:50.602 "superblock": true, 00:28:50.602 "num_base_bdevs": 4, 00:28:50.602 "num_base_bdevs_discovered": 2, 00:28:50.602 "num_base_bdevs_operational": 4, 00:28:50.602 "base_bdevs_list": [ 00:28:50.602 { 00:28:50.602 "name": "BaseBdev1", 00:28:50.602 "uuid": "cf726f9e-dd7f-4732-81fc-fc7ad1d32b67", 00:28:50.602 "is_configured": true, 00:28:50.602 "data_offset": 2048, 00:28:50.602 "data_size": 63488 00:28:50.602 }, 00:28:50.602 { 00:28:50.602 "name": "BaseBdev2", 00:28:50.602 
"uuid": "6ae14915-83ac-4b95-9b8c-58b03c9c7bae", 00:28:50.602 "is_configured": true, 00:28:50.602 "data_offset": 2048, 00:28:50.602 "data_size": 63488 00:28:50.602 }, 00:28:50.602 { 00:28:50.602 "name": "BaseBdev3", 00:28:50.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.602 "is_configured": false, 00:28:50.602 "data_offset": 0, 00:28:50.602 "data_size": 0 00:28:50.602 }, 00:28:50.602 { 00:28:50.602 "name": "BaseBdev4", 00:28:50.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.602 "is_configured": false, 00:28:50.602 "data_offset": 0, 00:28:50.602 "data_size": 0 00:28:50.602 } 00:28:50.602 ] 00:28:50.602 }' 00:28:50.602 15:22:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:50.602 15:22:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:50.861 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:51.119 [2024-07-23 15:22:46.394209] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:51.119 BaseBdev3 00:28:51.119 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:28:51.119 15:22:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:28:51.119 15:22:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:51.119 15:22:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:51.119 15:22:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:51.119 15:22:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:51.119 15:22:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:51.378 15:22:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:51.636 [ 00:28:51.636 { 00:28:51.636 "name": "BaseBdev3", 00:28:51.636 "aliases": [ 00:28:51.636 "ecf39537-3d8c-405f-a4a3-2569e27578eb" 00:28:51.636 ], 00:28:51.636 "product_name": "Malloc disk", 00:28:51.636 "block_size": 512, 00:28:51.636 "num_blocks": 65536, 00:28:51.636 "uuid": "ecf39537-3d8c-405f-a4a3-2569e27578eb", 00:28:51.636 "assigned_rate_limits": { 00:28:51.636 "rw_ios_per_sec": 0, 00:28:51.636 "rw_mbytes_per_sec": 0, 00:28:51.636 "r_mbytes_per_sec": 0, 00:28:51.636 "w_mbytes_per_sec": 0 00:28:51.636 }, 00:28:51.636 "claimed": true, 00:28:51.636 "claim_type": "exclusive_write", 00:28:51.636 "zoned": false, 00:28:51.637 "supported_io_types": { 00:28:51.637 "read": true, 00:28:51.637 "write": true, 00:28:51.637 "unmap": true, 00:28:51.637 "flush": true, 00:28:51.637 "reset": true, 00:28:51.637 "nvme_admin": false, 00:28:51.637 "nvme_io": false, 00:28:51.637 "nvme_io_md": false, 00:28:51.637 "write_zeroes": true, 00:28:51.637 "zcopy": true, 00:28:51.637 "get_zone_info": false, 00:28:51.637 "zone_management": false, 00:28:51.637 "zone_append": false, 00:28:51.637 "compare": false, 00:28:51.637 "compare_and_write": false, 00:28:51.637 "abort": true, 00:28:51.637 "seek_hole": false, 00:28:51.637 "seek_data": false, 
00:28:51.637 "copy": true, 00:28:51.637 "nvme_iov_md": false 00:28:51.637 }, 00:28:51.637 "memory_domains": [ 00:28:51.637 { 00:28:51.637 "dma_device_id": "system", 00:28:51.637 "dma_device_type": 1 00:28:51.637 }, 00:28:51.637 { 00:28:51.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:51.637 "dma_device_type": 2 00:28:51.637 } 00:28:51.637 ], 00:28:51.637 "driver_specific": {} 00:28:51.637 } 00:28:51.637 ] 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.637 15:22:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:51.895 15:22:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:51.895 "name": "Existed_Raid", 00:28:51.895 "uuid": "96f3aa7d-1a4d-4a13-8c8b-5e6ca6ef24f2", 00:28:51.895 "strip_size_kb": 64, 00:28:51.895 "state": "configuring", 00:28:51.895 "raid_level": "raid5f", 00:28:51.895 "superblock": true, 00:28:51.895 "num_base_bdevs": 4, 00:28:51.895 "num_base_bdevs_discovered": 3, 00:28:51.895 "num_base_bdevs_operational": 4, 00:28:51.895 "base_bdevs_list": [ 00:28:51.895 { 00:28:51.895 "name": "BaseBdev1", 00:28:51.895 "uuid": "cf726f9e-dd7f-4732-81fc-fc7ad1d32b67", 00:28:51.895 "is_configured": true, 00:28:51.895 "data_offset": 2048, 00:28:51.895 "data_size": 63488 00:28:51.895 }, 00:28:51.895 { 00:28:51.895 "name": "BaseBdev2", 00:28:51.895 "uuid": "6ae14915-83ac-4b95-9b8c-58b03c9c7bae", 00:28:51.895 "is_configured": true, 00:28:51.895 "data_offset": 2048, 00:28:51.895 "data_size": 63488 00:28:51.895 }, 00:28:51.895 { 00:28:51.895 "name": "BaseBdev3", 00:28:51.895 "uuid": "ecf39537-3d8c-405f-a4a3-2569e27578eb", 00:28:51.895 "is_configured": true, 00:28:51.895 "data_offset": 2048, 00:28:51.895 "data_size": 63488 00:28:51.895 }, 00:28:51.895 { 00:28:51.895 "name": "BaseBdev4", 00:28:51.895 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:51.895 "is_configured": false, 00:28:51.895 "data_offset": 0, 00:28:51.895 "data_size": 0 00:28:51.895 } 00:28:51.895 ] 00:28:51.895 }' 00:28:51.895 15:22:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:51.895 15:22:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.162 15:22:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:52.162 [2024-07-23 15:22:47.582148] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:52.162 [2024-07-23 15:22:47.582415] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:28:52.162 [2024-07-23 15:22:47.582444] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:52.163 [2024-07-23 15:22:47.582559] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:28:52.163 [2024-07-23 15:22:47.583385] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:28:52.163 [2024-07-23 15:22:47.583415] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:28:52.163 [2024-07-23 15:22:47.583540] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:52.163 BaseBdev4 00:28:52.437 15:22:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:28:52.437 15:22:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:28:52.437 15:22:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:52.437 15:22:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:52.437 15:22:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:52.437 15:22:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:52.437 15:22:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:52.437 15:22:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:52.695 [ 00:28:52.695 { 00:28:52.695 "name": "BaseBdev4", 00:28:52.695 "aliases": [ 00:28:52.695 "96f364a4-c1ae-433f-9071-7a8b97862452" 00:28:52.695 ], 00:28:52.695 "product_name": "Malloc disk", 00:28:52.695 "block_size": 512, 00:28:52.695 "num_blocks": 65536, 00:28:52.695 "uuid": "96f364a4-c1ae-433f-9071-7a8b97862452", 00:28:52.695 "assigned_rate_limits": { 00:28:52.695 "rw_ios_per_sec": 0, 00:28:52.695 "rw_mbytes_per_sec": 0, 00:28:52.695 "r_mbytes_per_sec": 0, 00:28:52.695 "w_mbytes_per_sec": 0 00:28:52.695 }, 00:28:52.695 "claimed": true, 00:28:52.695 "claim_type": "exclusive_write", 00:28:52.695 "zoned": false, 00:28:52.695 "supported_io_types": { 00:28:52.695 "read": true, 00:28:52.695 "write": true, 00:28:52.695 "unmap": true, 00:28:52.695 "flush": true, 00:28:52.695 "reset": true, 00:28:52.695 "nvme_admin": false, 00:28:52.695 "nvme_io": false, 00:28:52.695 "nvme_io_md": false, 00:28:52.695 
"write_zeroes": true, 00:28:52.695 "zcopy": true, 00:28:52.695 "get_zone_info": false, 00:28:52.695 "zone_management": false, 00:28:52.695 "zone_append": false, 00:28:52.695 "compare": false, 00:28:52.695 "compare_and_write": false, 00:28:52.695 "abort": true, 00:28:52.695 "seek_hole": false, 00:28:52.695 "seek_data": false, 00:28:52.695 "copy": true, 00:28:52.695 "nvme_iov_md": false 00:28:52.695 }, 00:28:52.695 "memory_domains": [ 00:28:52.695 { 00:28:52.695 "dma_device_id": "system", 00:28:52.695 "dma_device_type": 1 00:28:52.695 }, 00:28:52.695 { 00:28:52.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:52.695 "dma_device_type": 2 00:28:52.695 } 00:28:52.695 ], 00:28:52.695 "driver_specific": {} 00:28:52.695 } 00:28:52.695 ] 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.695 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:52.953 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:52.953 "name": "Existed_Raid", 00:28:52.953 "uuid": "96f3aa7d-1a4d-4a13-8c8b-5e6ca6ef24f2", 00:28:52.953 "strip_size_kb": 64, 00:28:52.953 "state": "online", 00:28:52.953 "raid_level": "raid5f", 00:28:52.953 "superblock": true, 00:28:52.953 "num_base_bdevs": 4, 00:28:52.953 "num_base_bdevs_discovered": 4, 00:28:52.953 "num_base_bdevs_operational": 4, 00:28:52.953 "base_bdevs_list": [ 00:28:52.953 { 00:28:52.953 "name": "BaseBdev1", 00:28:52.953 "uuid": "cf726f9e-dd7f-4732-81fc-fc7ad1d32b67", 00:28:52.953 "is_configured": true, 00:28:52.953 "data_offset": 2048, 00:28:52.953 "data_size": 63488 00:28:52.953 }, 00:28:52.953 { 00:28:52.953 "name": "BaseBdev2", 00:28:52.953 "uuid": "6ae14915-83ac-4b95-9b8c-58b03c9c7bae", 00:28:52.953 "is_configured": true, 00:28:52.953 "data_offset": 2048, 00:28:52.953 "data_size": 63488 00:28:52.953 }, 
00:28:52.953 { 00:28:52.953 "name": "BaseBdev3", 00:28:52.953 "uuid": "ecf39537-3d8c-405f-a4a3-2569e27578eb", 00:28:52.953 "is_configured": true, 00:28:52.953 "data_offset": 2048, 00:28:52.953 "data_size": 63488 00:28:52.953 }, 00:28:52.953 { 00:28:52.953 "name": "BaseBdev4", 00:28:52.953 "uuid": "96f364a4-c1ae-433f-9071-7a8b97862452", 00:28:52.953 "is_configured": true, 00:28:52.953 "data_offset": 2048, 00:28:52.954 "data_size": 63488 00:28:52.954 } 00:28:52.954 ] 00:28:52.954 }' 00:28:52.954 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:52.954 15:22:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:53.211 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:53.211 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:53.211 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:53.211 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:53.211 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:53.211 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:28:53.211 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:53.211 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:53.470 [2024-07-23 15:22:48.730717] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:53.470 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:53.470 "name": "Existed_Raid", 00:28:53.470 "aliases": [ 00:28:53.470 "96f3aa7d-1a4d-4a13-8c8b-5e6ca6ef24f2" 00:28:53.470 ], 00:28:53.470 "product_name": "Raid Volume", 00:28:53.470 "block_size": 512, 00:28:53.470 "num_blocks": 190464, 00:28:53.470 "uuid": "96f3aa7d-1a4d-4a13-8c8b-5e6ca6ef24f2", 00:28:53.470 "assigned_rate_limits": { 00:28:53.470 "rw_ios_per_sec": 0, 00:28:53.470 "rw_mbytes_per_sec": 0, 00:28:53.470 "r_mbytes_per_sec": 0, 00:28:53.470 "w_mbytes_per_sec": 0 00:28:53.470 }, 00:28:53.470 "claimed": false, 00:28:53.470 "zoned": false, 00:28:53.470 "supported_io_types": { 00:28:53.470 "read": true, 00:28:53.470 "write": true, 00:28:53.470 "unmap": false, 00:28:53.470 "flush": false, 00:28:53.470 "reset": true, 00:28:53.470 "nvme_admin": false, 00:28:53.470 "nvme_io": false, 00:28:53.470 "nvme_io_md": false, 00:28:53.470 "write_zeroes": true, 00:28:53.470 "zcopy": false, 00:28:53.470 "get_zone_info": false, 00:28:53.470 "zone_management": false, 00:28:53.470 "zone_append": false, 00:28:53.470 "compare": false, 00:28:53.470 "compare_and_write": false, 00:28:53.470 "abort": false, 00:28:53.470 "seek_hole": false, 00:28:53.470 "seek_data": false, 00:28:53.470 "copy": false, 00:28:53.470 "nvme_iov_md": false 00:28:53.470 }, 00:28:53.470 "driver_specific": { 00:28:53.470 "raid": { 00:28:53.470 "uuid": "96f3aa7d-1a4d-4a13-8c8b-5e6ca6ef24f2", 00:28:53.470 "strip_size_kb": 64, 00:28:53.470 "state": "online", 00:28:53.470 "raid_level": "raid5f", 00:28:53.470 "superblock": true, 00:28:53.470 "num_base_bdevs": 4, 00:28:53.470 "num_base_bdevs_discovered": 4, 00:28:53.470 
"num_base_bdevs_operational": 4, 00:28:53.470 "base_bdevs_list": [ 00:28:53.470 { 00:28:53.470 "name": "BaseBdev1", 00:28:53.470 "uuid": "cf726f9e-dd7f-4732-81fc-fc7ad1d32b67", 00:28:53.470 "is_configured": true, 00:28:53.470 "data_offset": 2048, 00:28:53.470 "data_size": 63488 00:28:53.470 }, 00:28:53.470 { 00:28:53.470 "name": "BaseBdev2", 00:28:53.470 "uuid": "6ae14915-83ac-4b95-9b8c-58b03c9c7bae", 00:28:53.470 "is_configured": true, 00:28:53.470 "data_offset": 2048, 00:28:53.470 "data_size": 63488 00:28:53.470 }, 00:28:53.470 { 00:28:53.470 "name": "BaseBdev3", 00:28:53.470 "uuid": "ecf39537-3d8c-405f-a4a3-2569e27578eb", 00:28:53.470 "is_configured": true, 00:28:53.470 "data_offset": 2048, 00:28:53.470 "data_size": 63488 00:28:53.470 }, 00:28:53.470 { 00:28:53.470 "name": "BaseBdev4", 00:28:53.470 "uuid": "96f364a4-c1ae-433f-9071-7a8b97862452", 00:28:53.470 "is_configured": true, 00:28:53.470 "data_offset": 2048, 00:28:53.470 "data_size": 63488 00:28:53.470 } 00:28:53.470 ] 00:28:53.470 } 00:28:53.470 } 00:28:53.470 }' 00:28:53.470 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:53.470 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:53.470 BaseBdev2 00:28:53.470 BaseBdev3 00:28:53.470 BaseBdev4' 00:28:53.470 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:53.470 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:53.470 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:53.728 15:22:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:53.728 "name": "BaseBdev1", 00:28:53.728 "aliases": [ 00:28:53.728 "cf726f9e-dd7f-4732-81fc-fc7ad1d32b67" 00:28:53.728 ], 00:28:53.728 "product_name": "Malloc disk", 00:28:53.728 "block_size": 512, 00:28:53.728 "num_blocks": 65536, 00:28:53.728 "uuid": "cf726f9e-dd7f-4732-81fc-fc7ad1d32b67", 00:28:53.728 "assigned_rate_limits": { 00:28:53.728 "rw_ios_per_sec": 0, 00:28:53.728 "rw_mbytes_per_sec": 0, 00:28:53.728 "r_mbytes_per_sec": 0, 00:28:53.728 "w_mbytes_per_sec": 0 00:28:53.728 }, 00:28:53.728 "claimed": true, 00:28:53.728 "claim_type": "exclusive_write", 00:28:53.728 "zoned": false, 00:28:53.728 "supported_io_types": { 00:28:53.728 "read": true, 00:28:53.728 "write": true, 00:28:53.728 "unmap": true, 00:28:53.728 "flush": true, 00:28:53.728 "reset": true, 00:28:53.728 "nvme_admin": false, 00:28:53.728 "nvme_io": false, 00:28:53.728 "nvme_io_md": false, 00:28:53.728 "write_zeroes": true, 00:28:53.728 "zcopy": true, 00:28:53.728 "get_zone_info": false, 00:28:53.729 "zone_management": false, 00:28:53.729 "zone_append": false, 00:28:53.729 "compare": false, 00:28:53.729 "compare_and_write": false, 00:28:53.729 "abort": true, 00:28:53.729 "seek_hole": false, 00:28:53.729 "seek_data": false, 00:28:53.729 "copy": true, 00:28:53.729 "nvme_iov_md": false 00:28:53.729 }, 00:28:53.729 "memory_domains": [ 00:28:53.729 { 00:28:53.729 "dma_device_id": "system", 00:28:53.729 "dma_device_type": 1 00:28:53.729 }, 00:28:53.729 { 00:28:53.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:53.729 "dma_device_type": 2 00:28:53.729 } 00:28:53.729 ], 00:28:53.729 "driver_specific": {} 00:28:53.729 }' 
00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:53.729 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:53.988 "name": "BaseBdev2", 00:28:53.988 "aliases": [ 00:28:53.988 "6ae14915-83ac-4b95-9b8c-58b03c9c7bae" 00:28:53.988 ], 00:28:53.988 "product_name": "Malloc disk", 00:28:53.988 "block_size": 512, 00:28:53.988 "num_blocks": 65536, 00:28:53.988 "uuid": "6ae14915-83ac-4b95-9b8c-58b03c9c7bae", 00:28:53.988 "assigned_rate_limits": { 00:28:53.988 "rw_ios_per_sec": 0, 00:28:53.988 "rw_mbytes_per_sec": 0, 00:28:53.988 "r_mbytes_per_sec": 0, 00:28:53.988 "w_mbytes_per_sec": 0 00:28:53.988 }, 00:28:53.988 "claimed": true, 00:28:53.988 "claim_type": "exclusive_write", 00:28:53.988 "zoned": false, 00:28:53.988 "supported_io_types": { 00:28:53.988 "read": true, 00:28:53.988 "write": true, 00:28:53.988 "unmap": true, 00:28:53.988 "flush": true, 00:28:53.988 "reset": true, 00:28:53.988 "nvme_admin": false, 00:28:53.988 "nvme_io": false, 00:28:53.988 "nvme_io_md": false, 00:28:53.988 "write_zeroes": true, 00:28:53.988 "zcopy": true, 00:28:53.988 "get_zone_info": false, 00:28:53.988 "zone_management": false, 00:28:53.988 "zone_append": false, 00:28:53.988 "compare": false, 00:28:53.988 "compare_and_write": false, 00:28:53.988 "abort": true, 00:28:53.988 "seek_hole": false, 00:28:53.988 "seek_data": false, 00:28:53.988 "copy": true, 00:28:53.988 "nvme_iov_md": false 00:28:53.988 }, 00:28:53.988 "memory_domains": [ 00:28:53.988 { 00:28:53.988 "dma_device_id": "system", 00:28:53.988 "dma_device_type": 1 00:28:53.988 }, 00:28:53.988 { 00:28:53.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:53.988 "dma_device_type": 2 00:28:53.988 } 00:28:53.988 ], 00:28:53.988 "driver_specific": {} 00:28:53.988 }' 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:53.988 
15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:53.988 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:54.247 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:54.247 "name": "BaseBdev3", 00:28:54.247 "aliases": [ 00:28:54.247 "ecf39537-3d8c-405f-a4a3-2569e27578eb" 00:28:54.247 ], 00:28:54.247 "product_name": "Malloc disk", 00:28:54.247 "block_size": 512, 00:28:54.247 "num_blocks": 65536, 00:28:54.247 "uuid": "ecf39537-3d8c-405f-a4a3-2569e27578eb", 00:28:54.247 "assigned_rate_limits": { 00:28:54.247 "rw_ios_per_sec": 0, 00:28:54.247 "rw_mbytes_per_sec": 0, 00:28:54.247 "r_mbytes_per_sec": 0, 00:28:54.247 "w_mbytes_per_sec": 0 00:28:54.247 }, 00:28:54.247 "claimed": true, 00:28:54.247 "claim_type": "exclusive_write", 00:28:54.247 "zoned": false, 00:28:54.247 "supported_io_types": { 00:28:54.247 "read": true, 00:28:54.247 "write": true, 00:28:54.247 "unmap": true, 00:28:54.247 "flush": true, 00:28:54.247 "reset": true, 00:28:54.247 "nvme_admin": false, 00:28:54.247 "nvme_io": false, 00:28:54.247 "nvme_io_md": false, 00:28:54.247 "write_zeroes": true, 00:28:54.247 "zcopy": true, 00:28:54.247 "get_zone_info": false, 00:28:54.247 "zone_management": false, 00:28:54.247 "zone_append": false, 00:28:54.247 "compare": false, 00:28:54.247 "compare_and_write": false, 00:28:54.247 "abort": true, 00:28:54.247 "seek_hole": false, 00:28:54.247 "seek_data": false, 00:28:54.247 "copy": true, 00:28:54.247 "nvme_iov_md": false 00:28:54.247 }, 00:28:54.247 "memory_domains": [ 00:28:54.247 { 00:28:54.247 "dma_device_id": "system", 00:28:54.247 "dma_device_type": 1 00:28:54.247 }, 00:28:54.247 { 00:28:54.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:54.247 "dma_device_type": 2 00:28:54.247 } 00:28:54.247 ], 00:28:54.247 "driver_specific": {} 00:28:54.247 }' 00:28:54.247 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:54.247 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:54.247 15:22:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:54.247 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:54.247 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:54.248 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:54.248 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:54.248 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:54.248 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:54.248 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:54.248 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:54.248 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:54.248 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:54.248 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:54.248 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:54.507 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:54.507 "name": "BaseBdev4", 00:28:54.507 "aliases": [ 00:28:54.507 "96f364a4-c1ae-433f-9071-7a8b97862452" 00:28:54.507 ], 00:28:54.507 "product_name": "Malloc disk", 00:28:54.507 "block_size": 512, 00:28:54.507 "num_blocks": 65536, 00:28:54.507 "uuid": "96f364a4-c1ae-433f-9071-7a8b97862452", 00:28:54.507 "assigned_rate_limits": { 00:28:54.507 "rw_ios_per_sec": 0, 00:28:54.507 "rw_mbytes_per_sec": 0, 00:28:54.507 "r_mbytes_per_sec": 0, 00:28:54.507 "w_mbytes_per_sec": 0 00:28:54.507 }, 00:28:54.507 "claimed": true, 00:28:54.507 "claim_type": "exclusive_write", 00:28:54.507 "zoned": false, 00:28:54.507 "supported_io_types": { 00:28:54.507 "read": true, 00:28:54.507 "write": true, 00:28:54.507 "unmap": true, 00:28:54.507 "flush": true, 00:28:54.507 "reset": true, 00:28:54.507 "nvme_admin": false, 00:28:54.507 "nvme_io": false, 00:28:54.507 "nvme_io_md": false, 00:28:54.507 "write_zeroes": true, 00:28:54.507 "zcopy": true, 00:28:54.507 "get_zone_info": false, 00:28:54.507 "zone_management": false, 00:28:54.507 "zone_append": false, 00:28:54.507 "compare": false, 00:28:54.507 "compare_and_write": false, 00:28:54.507 "abort": true, 00:28:54.507 "seek_hole": false, 00:28:54.507 "seek_data": false, 00:28:54.507 "copy": true, 00:28:54.507 "nvme_iov_md": false 00:28:54.507 }, 00:28:54.507 "memory_domains": [ 00:28:54.507 { 00:28:54.507 "dma_device_id": "system", 00:28:54.507 "dma_device_type": 1 00:28:54.507 }, 00:28:54.507 { 00:28:54.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:54.507 "dma_device_type": 2 00:28:54.507 } 00:28:54.507 ], 00:28:54.507 "driver_specific": {} 00:28:54.507 }' 00:28:54.507 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:54.507 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:54.507 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:54.507 15:22:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:54.766 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:54.766 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:54.766 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:54.766 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:54.766 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:54.766 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:54.766 15:22:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:54.766 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:54.766 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:55.025 [2024-07-23 15:22:50.246972] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.025 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:55.284 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:55.284 "name": "Existed_Raid", 00:28:55.284 "uuid": "96f3aa7d-1a4d-4a13-8c8b-5e6ca6ef24f2", 00:28:55.284 
"strip_size_kb": 64, 00:28:55.284 "state": "online", 00:28:55.284 "raid_level": "raid5f", 00:28:55.284 "superblock": true, 00:28:55.284 "num_base_bdevs": 4, 00:28:55.284 "num_base_bdevs_discovered": 3, 00:28:55.284 "num_base_bdevs_operational": 3, 00:28:55.284 "base_bdevs_list": [ 00:28:55.284 { 00:28:55.284 "name": null, 00:28:55.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:55.284 "is_configured": false, 00:28:55.284 "data_offset": 2048, 00:28:55.284 "data_size": 63488 00:28:55.284 }, 00:28:55.284 { 00:28:55.284 "name": "BaseBdev2", 00:28:55.284 "uuid": "6ae14915-83ac-4b95-9b8c-58b03c9c7bae", 00:28:55.284 "is_configured": true, 00:28:55.284 "data_offset": 2048, 00:28:55.284 "data_size": 63488 00:28:55.284 }, 00:28:55.284 { 00:28:55.284 "name": "BaseBdev3", 00:28:55.284 "uuid": "ecf39537-3d8c-405f-a4a3-2569e27578eb", 00:28:55.284 "is_configured": true, 00:28:55.284 "data_offset": 2048, 00:28:55.284 "data_size": 63488 00:28:55.284 }, 00:28:55.284 { 00:28:55.284 "name": "BaseBdev4", 00:28:55.284 "uuid": "96f364a4-c1ae-433f-9071-7a8b97862452", 00:28:55.284 "is_configured": true, 00:28:55.284 "data_offset": 2048, 00:28:55.284 "data_size": 63488 00:28:55.284 } 00:28:55.284 ] 00:28:55.284 }' 00:28:55.284 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:55.284 15:22:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.543 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:55.543 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:55.543 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.543 15:22:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:55.802 15:22:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:55.802 15:22:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:55.802 15:22:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:56.060 [2024-07-23 15:22:51.407947] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:56.060 [2024-07-23 15:22:51.408124] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:56.060 [2024-07-23 15:22:51.420430] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:56.060 15:22:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:56.060 15:22:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:56.060 15:22:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:56.060 15:22:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.319 15:22:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:56.319 15:22:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:56.319 15:22:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:28:56.578 [2024-07-23 15:22:51.840652] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:56.578 15:22:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:56.578 15:22:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:56.578 15:22:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:56.578 15:22:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.837 15:22:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:56.837 15:22:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:56.837 15:22:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:28:56.837 [2024-07-23 15:22:52.205203] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:56.837 [2024-07-23 15:22:52.205300] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:28:56.837 15:22:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:56.837 15:22:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:56.837 15:22:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.837 15:22:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:57.096 15:22:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:57.096 15:22:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:57.096 15:22:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:28:57.096 15:22:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:28:57.096 15:22:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:57.096 15:22:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:57.354 BaseBdev2 00:28:57.354 15:22:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:28:57.354 15:22:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:28:57.354 15:22:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:57.354 15:22:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:57.354 15:22:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:57.354 15:22:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:57.354 15:22:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:57.612 15:22:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:57.612 [ 00:28:57.612 { 00:28:57.612 "name": "BaseBdev2", 00:28:57.612 "aliases": [ 00:28:57.612 "843e0067-16c9-48f2-b1f8-7eb5ebed113f" 00:28:57.612 ], 00:28:57.612 "product_name": "Malloc disk", 00:28:57.612 "block_size": 512, 00:28:57.612 "num_blocks": 65536, 00:28:57.612 "uuid": "843e0067-16c9-48f2-b1f8-7eb5ebed113f", 00:28:57.612 "assigned_rate_limits": { 00:28:57.612 "rw_ios_per_sec": 0, 00:28:57.612 "rw_mbytes_per_sec": 0, 00:28:57.612 "r_mbytes_per_sec": 0, 00:28:57.612 "w_mbytes_per_sec": 0 00:28:57.612 }, 00:28:57.612 "claimed": false, 00:28:57.612 "zoned": false, 00:28:57.612 "supported_io_types": { 00:28:57.612 "read": true, 00:28:57.612 "write": true, 00:28:57.612 "unmap": true, 00:28:57.612 "flush": true, 00:28:57.612 "reset": true, 00:28:57.612 "nvme_admin": false, 00:28:57.612 "nvme_io": false, 00:28:57.612 "nvme_io_md": false, 00:28:57.612 "write_zeroes": true, 00:28:57.612 "zcopy": true, 00:28:57.612 "get_zone_info": false, 00:28:57.613 "zone_management": false, 00:28:57.613 "zone_append": false, 00:28:57.613 "compare": false, 00:28:57.613 "compare_and_write": false, 00:28:57.613 "abort": true, 00:28:57.613 "seek_hole": false, 00:28:57.613 "seek_data": false, 00:28:57.613 "copy": true, 00:28:57.613 "nvme_iov_md": false 00:28:57.613 }, 00:28:57.613 "memory_domains": [ 00:28:57.613 { 00:28:57.613 "dma_device_id": "system", 00:28:57.613 "dma_device_type": 1 00:28:57.613 }, 00:28:57.613 { 00:28:57.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:57.613 "dma_device_type": 2 00:28:57.613 } 00:28:57.613 ], 00:28:57.613 "driver_specific": {} 00:28:57.613 } 00:28:57.613 ] 00:28:57.871 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:57.871 15:22:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:57.871 15:22:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:57.871 15:22:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:57.871 BaseBdev3 00:28:57.871 15:22:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:28:57.871 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:28:57.871 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:57.871 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:57.871 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:57.871 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:57.871 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:58.137 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:58.403 [ 00:28:58.403 { 00:28:58.403 "name": "BaseBdev3", 00:28:58.403 "aliases": [ 00:28:58.403 "164295b7-92aa-4742-8070-0f53f3d39ff5" 00:28:58.403 ], 00:28:58.403 "product_name": "Malloc disk", 00:28:58.403 "block_size": 512, 00:28:58.403 "num_blocks": 65536, 00:28:58.403 "uuid": "164295b7-92aa-4742-8070-0f53f3d39ff5", 00:28:58.403 "assigned_rate_limits": { 00:28:58.403 "rw_ios_per_sec": 0, 00:28:58.403 "rw_mbytes_per_sec": 0, 00:28:58.403 "r_mbytes_per_sec": 0, 00:28:58.403 "w_mbytes_per_sec": 0 00:28:58.403 }, 00:28:58.403 "claimed": false, 00:28:58.403 "zoned": false, 00:28:58.403 "supported_io_types": { 00:28:58.403 "read": true, 00:28:58.403 "write": true, 00:28:58.403 "unmap": true, 00:28:58.403 "flush": true, 00:28:58.403 "reset": true, 00:28:58.403 "nvme_admin": false, 00:28:58.403 "nvme_io": false, 00:28:58.403 "nvme_io_md": false, 00:28:58.403 "write_zeroes": true, 00:28:58.403 "zcopy": true, 00:28:58.403 "get_zone_info": false, 00:28:58.403 "zone_management": false, 00:28:58.403 "zone_append": false, 00:28:58.403 "compare": false, 00:28:58.403 "compare_and_write": false, 00:28:58.403 "abort": true, 00:28:58.403 "seek_hole": false, 00:28:58.403 "seek_data": false, 00:28:58.403 "copy": true, 00:28:58.403 "nvme_iov_md": false 00:28:58.403 }, 00:28:58.403 "memory_domains": [ 00:28:58.403 { 00:28:58.403 "dma_device_id": "system", 00:28:58.403 "dma_device_type": 1 00:28:58.403 }, 00:28:58.403 { 00:28:58.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.403 "dma_device_type": 2 00:28:58.403 } 00:28:58.403 ], 00:28:58.403 "driver_specific": {} 00:28:58.403 } 00:28:58.403 ] 00:28:58.403 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:58.403 15:22:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:58.403 15:22:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:58.403 15:22:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:58.662 BaseBdev4 00:28:58.662 15:22:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:28:58.662 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:28:58.662 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:58.662 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:58.662 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:58.662 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:58.662 15:22:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:58.662 15:22:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:58.921 [ 00:28:58.921 { 00:28:58.921 "name": "BaseBdev4", 00:28:58.921 "aliases": [ 00:28:58.921 "05efe9b4-2761-4dc4-b0a5-3b87c7c24a67" 00:28:58.921 ], 00:28:58.921 "product_name": "Malloc disk", 00:28:58.921 
"block_size": 512, 00:28:58.921 "num_blocks": 65536, 00:28:58.921 "uuid": "05efe9b4-2761-4dc4-b0a5-3b87c7c24a67", 00:28:58.921 "assigned_rate_limits": { 00:28:58.921 "rw_ios_per_sec": 0, 00:28:58.921 "rw_mbytes_per_sec": 0, 00:28:58.921 "r_mbytes_per_sec": 0, 00:28:58.921 "w_mbytes_per_sec": 0 00:28:58.921 }, 00:28:58.921 "claimed": false, 00:28:58.921 "zoned": false, 00:28:58.921 "supported_io_types": { 00:28:58.921 "read": true, 00:28:58.921 "write": true, 00:28:58.921 "unmap": true, 00:28:58.921 "flush": true, 00:28:58.921 "reset": true, 00:28:58.921 "nvme_admin": false, 00:28:58.921 "nvme_io": false, 00:28:58.921 "nvme_io_md": false, 00:28:58.921 "write_zeroes": true, 00:28:58.921 "zcopy": true, 00:28:58.921 "get_zone_info": false, 00:28:58.921 "zone_management": false, 00:28:58.921 "zone_append": false, 00:28:58.921 "compare": false, 00:28:58.921 "compare_and_write": false, 00:28:58.921 "abort": true, 00:28:58.921 "seek_hole": false, 00:28:58.921 "seek_data": false, 00:28:58.921 "copy": true, 00:28:58.921 "nvme_iov_md": false 00:28:58.921 }, 00:28:58.921 "memory_domains": [ 00:28:58.921 { 00:28:58.921 "dma_device_id": "system", 00:28:58.921 "dma_device_type": 1 00:28:58.921 }, 00:28:58.921 { 00:28:58.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.921 "dma_device_type": 2 00:28:58.921 } 00:28:58.921 ], 00:28:58.921 "driver_specific": {} 00:28:58.921 } 00:28:58.921 ] 00:28:58.921 15:22:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:58.921 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:58.921 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:58.921 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:59.179 [2024-07-23 15:22:54.391003] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:59.179 [2024-07-23 15:22:54.391075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:59.179 [2024-07-23 15:22:54.391113] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:59.179 [2024-07-23 15:22:54.393284] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:59.179 [2024-07-23 15:22:54.393340] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:59.179 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:59.179 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:59.179 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:59.179 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:59.179 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:59.179 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:59.179 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:59.179 15:22:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:59.179 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:59.179 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:59.179 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.179 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:59.438 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:59.438 "name": "Existed_Raid", 00:28:59.438 "uuid": "2afc5c3e-cd84-4f4a-aa0c-1d7710395fb1", 00:28:59.438 "strip_size_kb": 64, 00:28:59.438 "state": "configuring", 00:28:59.438 "raid_level": "raid5f", 00:28:59.438 "superblock": true, 00:28:59.438 "num_base_bdevs": 4, 00:28:59.438 "num_base_bdevs_discovered": 3, 00:28:59.438 "num_base_bdevs_operational": 4, 00:28:59.438 "base_bdevs_list": [ 00:28:59.438 { 00:28:59.438 "name": "BaseBdev1", 00:28:59.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.438 "is_configured": false, 00:28:59.438 "data_offset": 0, 00:28:59.438 "data_size": 0 00:28:59.438 }, 00:28:59.438 { 00:28:59.438 "name": "BaseBdev2", 00:28:59.438 "uuid": "843e0067-16c9-48f2-b1f8-7eb5ebed113f", 00:28:59.438 "is_configured": true, 00:28:59.438 "data_offset": 2048, 00:28:59.438 "data_size": 63488 00:28:59.438 }, 00:28:59.438 { 00:28:59.438 "name": "BaseBdev3", 00:28:59.438 "uuid": "164295b7-92aa-4742-8070-0f53f3d39ff5", 00:28:59.438 "is_configured": true, 00:28:59.438 "data_offset": 2048, 00:28:59.438 "data_size": 63488 00:28:59.438 }, 00:28:59.438 { 00:28:59.438 "name": "BaseBdev4", 00:28:59.438 "uuid": "05efe9b4-2761-4dc4-b0a5-3b87c7c24a67", 00:28:59.438 "is_configured": true, 00:28:59.438 "data_offset": 2048, 00:28:59.438 "data_size": 63488 00:28:59.438 } 00:28:59.438 ] 00:28:59.438 }' 00:28:59.438 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:59.438 15:22:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:59.697 15:22:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:59.697 [2024-07-23 15:22:55.087081] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:59.697 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:59.697 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:59.697 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:59.697 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:59.697 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:59.697 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:59.697 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:59.697 15:22:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:59.697 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:59.697 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:59.697 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.697 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:59.957 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:59.957 "name": "Existed_Raid", 00:28:59.957 "uuid": "2afc5c3e-cd84-4f4a-aa0c-1d7710395fb1", 00:28:59.957 "strip_size_kb": 64, 00:28:59.957 "state": "configuring", 00:28:59.957 "raid_level": "raid5f", 00:28:59.957 "superblock": true, 00:28:59.957 "num_base_bdevs": 4, 00:28:59.957 "num_base_bdevs_discovered": 2, 00:28:59.957 "num_base_bdevs_operational": 4, 00:28:59.957 "base_bdevs_list": [ 00:28:59.957 { 00:28:59.957 "name": "BaseBdev1", 00:28:59.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.957 "is_configured": false, 00:28:59.957 "data_offset": 0, 00:28:59.957 "data_size": 0 00:28:59.957 }, 00:28:59.957 { 00:28:59.957 "name": null, 00:28:59.957 "uuid": "843e0067-16c9-48f2-b1f8-7eb5ebed113f", 00:28:59.957 "is_configured": false, 00:28:59.957 "data_offset": 2048, 00:28:59.957 "data_size": 63488 00:28:59.957 }, 00:28:59.957 { 00:28:59.957 "name": "BaseBdev3", 00:28:59.957 "uuid": "164295b7-92aa-4742-8070-0f53f3d39ff5", 00:28:59.957 "is_configured": true, 00:28:59.957 "data_offset": 2048, 00:28:59.957 "data_size": 63488 00:28:59.957 }, 00:28:59.957 { 00:28:59.957 "name": "BaseBdev4", 00:28:59.957 "uuid": "05efe9b4-2761-4dc4-b0a5-3b87c7c24a67", 00:28:59.957 "is_configured": true, 00:28:59.957 "data_offset": 2048, 00:28:59.957 "data_size": 63488 00:28:59.957 } 00:28:59.957 ] 00:28:59.957 }' 00:28:59.957 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:59.957 15:22:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:00.215 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:00.215 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.474 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:29:00.474 15:22:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:29:00.733 [2024-07-23 15:22:55.990679] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:00.733 BaseBdev1 00:29:00.733 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:29:00.733 15:22:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:29:00.733 15:22:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:29:00.733 15:22:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:29:00.733 15:22:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:29:00.733 15:22:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:29:00.733 15:22:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:00.992 15:22:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:00.992 [ 00:29:00.992 { 00:29:00.992 "name": "BaseBdev1", 00:29:00.992 "aliases": [ 00:29:00.993 "8449f235-1d85-447e-9220-50ae709f12e5" 00:29:00.993 ], 00:29:00.993 "product_name": "Malloc disk", 00:29:00.993 "block_size": 512, 00:29:00.993 "num_blocks": 65536, 00:29:00.993 "uuid": "8449f235-1d85-447e-9220-50ae709f12e5", 00:29:00.993 "assigned_rate_limits": { 00:29:00.993 "rw_ios_per_sec": 0, 00:29:00.993 "rw_mbytes_per_sec": 0, 00:29:00.993 "r_mbytes_per_sec": 0, 00:29:00.993 "w_mbytes_per_sec": 0 00:29:00.993 }, 00:29:00.993 "claimed": true, 00:29:00.993 "claim_type": "exclusive_write", 00:29:00.993 "zoned": false, 00:29:00.993 "supported_io_types": { 00:29:00.993 "read": true, 00:29:00.993 "write": true, 00:29:00.993 "unmap": true, 00:29:00.993 "flush": true, 00:29:00.993 "reset": true, 00:29:00.993 "nvme_admin": false, 00:29:00.993 "nvme_io": false, 00:29:00.993 "nvme_io_md": false, 00:29:00.993 "write_zeroes": true, 00:29:00.993 "zcopy": true, 00:29:00.993 "get_zone_info": false, 00:29:00.993 "zone_management": false, 00:29:00.993 "zone_append": false, 00:29:00.993 "compare": false, 00:29:00.993 "compare_and_write": false, 00:29:00.993 "abort": true, 00:29:00.993 "seek_hole": false, 00:29:00.993 "seek_data": false, 00:29:00.993 "copy": true, 00:29:00.993 "nvme_iov_md": false 00:29:00.993 }, 00:29:00.993 "memory_domains": [ 00:29:00.993 { 00:29:00.993 "dma_device_id": "system", 00:29:00.993 "dma_device_type": 1 00:29:00.993 }, 00:29:00.993 { 00:29:00.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:00.993 "dma_device_type": 2 00:29:00.993 } 00:29:00.993 ], 00:29:00.993 "driver_specific": {} 00:29:00.993 } 00:29:00.993 ] 00:29:01.252 15:22:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:29:01.252 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:29:01.252 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:01.252 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:01.252 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:01.252 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:01.252 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:01.252 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:01.252 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:01.253 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:01.253 15:22:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:29:01.253 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.253 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:01.253 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:01.253 "name": "Existed_Raid", 00:29:01.253 "uuid": "2afc5c3e-cd84-4f4a-aa0c-1d7710395fb1", 00:29:01.253 "strip_size_kb": 64, 00:29:01.253 "state": "configuring", 00:29:01.253 "raid_level": "raid5f", 00:29:01.253 "superblock": true, 00:29:01.253 "num_base_bdevs": 4, 00:29:01.253 "num_base_bdevs_discovered": 3, 00:29:01.253 "num_base_bdevs_operational": 4, 00:29:01.253 "base_bdevs_list": [ 00:29:01.253 { 00:29:01.253 "name": "BaseBdev1", 00:29:01.253 "uuid": "8449f235-1d85-447e-9220-50ae709f12e5", 00:29:01.253 "is_configured": true, 00:29:01.253 "data_offset": 2048, 00:29:01.253 "data_size": 63488 00:29:01.253 }, 00:29:01.253 { 00:29:01.253 "name": null, 00:29:01.253 "uuid": "843e0067-16c9-48f2-b1f8-7eb5ebed113f", 00:29:01.253 "is_configured": false, 00:29:01.253 "data_offset": 2048, 00:29:01.253 "data_size": 63488 00:29:01.253 }, 00:29:01.253 { 00:29:01.253 "name": "BaseBdev3", 00:29:01.253 "uuid": "164295b7-92aa-4742-8070-0f53f3d39ff5", 00:29:01.253 "is_configured": true, 00:29:01.253 "data_offset": 2048, 00:29:01.253 "data_size": 63488 00:29:01.253 }, 00:29:01.253 { 00:29:01.253 "name": "BaseBdev4", 00:29:01.253 "uuid": "05efe9b4-2761-4dc4-b0a5-3b87c7c24a67", 00:29:01.253 "is_configured": true, 00:29:01.253 "data_offset": 2048, 00:29:01.253 "data_size": 63488 00:29:01.253 } 00:29:01.253 ] 00:29:01.253 }' 00:29:01.253 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:01.253 15:22:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.523 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.523 15:22:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:01.800 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:29:01.800 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:29:02.059 [2024-07-23 15:22:57.415138] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:02.059 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:29:02.059 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:02.059 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:02.059 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:02.059 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:02.059 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:02.059 15:22:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:02.059 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:02.059 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:02.059 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:02.060 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.060 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:02.319 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:02.320 "name": "Existed_Raid", 00:29:02.320 "uuid": "2afc5c3e-cd84-4f4a-aa0c-1d7710395fb1", 00:29:02.320 "strip_size_kb": 64, 00:29:02.320 "state": "configuring", 00:29:02.320 "raid_level": "raid5f", 00:29:02.320 "superblock": true, 00:29:02.320 "num_base_bdevs": 4, 00:29:02.320 "num_base_bdevs_discovered": 2, 00:29:02.320 "num_base_bdevs_operational": 4, 00:29:02.320 "base_bdevs_list": [ 00:29:02.320 { 00:29:02.320 "name": "BaseBdev1", 00:29:02.320 "uuid": "8449f235-1d85-447e-9220-50ae709f12e5", 00:29:02.320 "is_configured": true, 00:29:02.320 "data_offset": 2048, 00:29:02.320 "data_size": 63488 00:29:02.320 }, 00:29:02.320 { 00:29:02.320 "name": null, 00:29:02.320 "uuid": "843e0067-16c9-48f2-b1f8-7eb5ebed113f", 00:29:02.320 "is_configured": false, 00:29:02.320 "data_offset": 2048, 00:29:02.320 "data_size": 63488 00:29:02.320 }, 00:29:02.320 { 00:29:02.320 "name": null, 00:29:02.320 "uuid": "164295b7-92aa-4742-8070-0f53f3d39ff5", 00:29:02.320 "is_configured": false, 00:29:02.320 "data_offset": 2048, 00:29:02.320 "data_size": 63488 00:29:02.320 }, 00:29:02.320 { 00:29:02.320 "name": "BaseBdev4", 00:29:02.320 "uuid": "05efe9b4-2761-4dc4-b0a5-3b87c7c24a67", 00:29:02.320 "is_configured": true, 00:29:02.320 "data_offset": 2048, 00:29:02.320 "data_size": 63488 00:29:02.320 } 00:29:02.320 ] 00:29:02.320 }' 00:29:02.320 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:02.320 15:22:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.580 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.580 15:22:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:02.839 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:29:02.839 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:29:03.099 [2024-07-23 15:22:58.427361] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:03.099 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:29:03.099 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:03.099 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:29:03.099 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:03.099 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:03.099 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:03.099 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:03.099 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:03.099 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:03.099 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:03.099 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.099 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:03.359 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:03.359 "name": "Existed_Raid", 00:29:03.359 "uuid": "2afc5c3e-cd84-4f4a-aa0c-1d7710395fb1", 00:29:03.359 "strip_size_kb": 64, 00:29:03.359 "state": "configuring", 00:29:03.359 "raid_level": "raid5f", 00:29:03.359 "superblock": true, 00:29:03.359 "num_base_bdevs": 4, 00:29:03.359 "num_base_bdevs_discovered": 3, 00:29:03.359 "num_base_bdevs_operational": 4, 00:29:03.359 "base_bdevs_list": [ 00:29:03.359 { 00:29:03.359 "name": "BaseBdev1", 00:29:03.359 "uuid": "8449f235-1d85-447e-9220-50ae709f12e5", 00:29:03.359 "is_configured": true, 00:29:03.359 "data_offset": 2048, 00:29:03.359 "data_size": 63488 00:29:03.359 }, 00:29:03.359 { 00:29:03.359 "name": null, 00:29:03.359 "uuid": "843e0067-16c9-48f2-b1f8-7eb5ebed113f", 00:29:03.359 "is_configured": false, 00:29:03.359 "data_offset": 2048, 00:29:03.359 "data_size": 63488 00:29:03.359 }, 00:29:03.359 { 00:29:03.359 "name": "BaseBdev3", 00:29:03.359 "uuid": "164295b7-92aa-4742-8070-0f53f3d39ff5", 00:29:03.359 "is_configured": true, 00:29:03.359 "data_offset": 2048, 00:29:03.359 "data_size": 63488 00:29:03.359 }, 00:29:03.359 { 00:29:03.359 "name": "BaseBdev4", 00:29:03.359 "uuid": "05efe9b4-2761-4dc4-b0a5-3b87c7c24a67", 00:29:03.359 "is_configured": true, 00:29:03.359 "data_offset": 2048, 00:29:03.359 "data_size": 63488 00:29:03.359 } 00:29:03.359 ] 00:29:03.359 }' 00:29:03.359 15:22:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:03.359 15:22:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:03.618 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.618 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:03.878 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:29:03.878 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:29:04.137 [2024-07-23 15:22:59.499681] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:29:04.137 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:29:04.137 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:04.137 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:04.137 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:04.137 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:04.137 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:04.137 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:04.137 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:04.137 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:04.137 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:04.137 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.137 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:04.396 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:04.396 "name": "Existed_Raid", 00:29:04.396 "uuid": "2afc5c3e-cd84-4f4a-aa0c-1d7710395fb1", 00:29:04.396 "strip_size_kb": 64, 00:29:04.396 "state": "configuring", 00:29:04.396 "raid_level": "raid5f", 00:29:04.396 "superblock": true, 00:29:04.396 "num_base_bdevs": 4, 00:29:04.396 "num_base_bdevs_discovered": 2, 00:29:04.396 "num_base_bdevs_operational": 4, 00:29:04.396 "base_bdevs_list": [ 00:29:04.396 { 00:29:04.396 "name": null, 00:29:04.397 "uuid": "8449f235-1d85-447e-9220-50ae709f12e5", 00:29:04.397 "is_configured": false, 00:29:04.397 "data_offset": 2048, 00:29:04.397 "data_size": 63488 00:29:04.397 }, 00:29:04.397 { 00:29:04.397 "name": null, 00:29:04.397 "uuid": "843e0067-16c9-48f2-b1f8-7eb5ebed113f", 00:29:04.397 "is_configured": false, 00:29:04.397 "data_offset": 2048, 00:29:04.397 "data_size": 63488 00:29:04.397 }, 00:29:04.397 { 00:29:04.397 "name": "BaseBdev3", 00:29:04.397 "uuid": "164295b7-92aa-4742-8070-0f53f3d39ff5", 00:29:04.397 "is_configured": true, 00:29:04.397 "data_offset": 2048, 00:29:04.397 "data_size": 63488 00:29:04.397 }, 00:29:04.397 { 00:29:04.397 "name": "BaseBdev4", 00:29:04.397 "uuid": "05efe9b4-2761-4dc4-b0a5-3b87c7c24a67", 00:29:04.397 "is_configured": true, 00:29:04.397 "data_offset": 2048, 00:29:04.397 "data_size": 63488 00:29:04.397 } 00:29:04.397 ] 00:29:04.397 }' 00:29:04.397 15:22:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:04.397 15:22:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.656 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:04.656 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.916 15:23:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:29:04.916 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:29:05.175 [2024-07-23 15:23:00.452565] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:05.175 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:29:05.175 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:05.175 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:05.175 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:05.175 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:05.175 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:05.175 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:05.175 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:05.175 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:05.175 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:05.175 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.175 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:05.435 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:05.435 "name": "Existed_Raid", 00:29:05.435 "uuid": "2afc5c3e-cd84-4f4a-aa0c-1d7710395fb1", 00:29:05.435 "strip_size_kb": 64, 00:29:05.435 "state": "configuring", 00:29:05.435 "raid_level": "raid5f", 00:29:05.435 "superblock": true, 00:29:05.435 "num_base_bdevs": 4, 00:29:05.435 "num_base_bdevs_discovered": 3, 00:29:05.435 "num_base_bdevs_operational": 4, 00:29:05.435 "base_bdevs_list": [ 00:29:05.435 { 00:29:05.435 "name": null, 00:29:05.435 "uuid": "8449f235-1d85-447e-9220-50ae709f12e5", 00:29:05.435 "is_configured": false, 00:29:05.435 "data_offset": 2048, 00:29:05.435 "data_size": 63488 00:29:05.435 }, 00:29:05.435 { 00:29:05.435 "name": "BaseBdev2", 00:29:05.435 "uuid": "843e0067-16c9-48f2-b1f8-7eb5ebed113f", 00:29:05.435 "is_configured": true, 00:29:05.435 "data_offset": 2048, 00:29:05.435 "data_size": 63488 00:29:05.435 }, 00:29:05.435 { 00:29:05.435 "name": "BaseBdev3", 00:29:05.435 "uuid": "164295b7-92aa-4742-8070-0f53f3d39ff5", 00:29:05.435 "is_configured": true, 00:29:05.435 "data_offset": 2048, 00:29:05.435 "data_size": 63488 00:29:05.435 }, 00:29:05.435 { 00:29:05.435 "name": "BaseBdev4", 00:29:05.435 "uuid": "05efe9b4-2761-4dc4-b0a5-3b87c7c24a67", 00:29:05.435 "is_configured": true, 00:29:05.435 "data_offset": 2048, 00:29:05.435 "data_size": 63488 00:29:05.435 } 00:29:05.435 ] 00:29:05.435 }' 00:29:05.435 15:23:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:05.435 15:23:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.695 15:23:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.695 15:23:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:05.954 15:23:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:29:05.954 15:23:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.954 15:23:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:29:06.214 15:23:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8449f235-1d85-447e-9220-50ae709f12e5 00:29:06.214 [2024-07-23 15:23:01.587975] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:29:06.214 [2024-07-23 15:23:01.588174] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:29:06.214 [2024-07-23 15:23:01.588189] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:06.214 [2024-07-23 15:23:01.588263] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002600 00:29:06.214 [2024-07-23 15:23:01.588917] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:29:06.214 [2024-07-23 15:23:01.588945] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000008180 00:29:06.214 NewBaseBdev 00:29:06.214 [2024-07-23 15:23:01.589043] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:06.214 15:23:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:29:06.214 15:23:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:29:06.214 15:23:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:29:06.214 15:23:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:29:06.214 15:23:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:29:06.214 15:23:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:29:06.214 15:23:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:06.473 15:23:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:29:06.733 [ 00:29:06.733 { 00:29:06.733 "name": "NewBaseBdev", 00:29:06.733 "aliases": [ 00:29:06.733 "8449f235-1d85-447e-9220-50ae709f12e5" 00:29:06.733 ], 00:29:06.733 "product_name": "Malloc disk", 00:29:06.733 "block_size": 512, 00:29:06.733 "num_blocks": 65536, 00:29:06.733 "uuid": "8449f235-1d85-447e-9220-50ae709f12e5", 00:29:06.733 "assigned_rate_limits": { 00:29:06.733 "rw_ios_per_sec": 0, 00:29:06.733 "rw_mbytes_per_sec": 
0, 00:29:06.733 "r_mbytes_per_sec": 0, 00:29:06.733 "w_mbytes_per_sec": 0 00:29:06.733 }, 00:29:06.733 "claimed": true, 00:29:06.733 "claim_type": "exclusive_write", 00:29:06.733 "zoned": false, 00:29:06.733 "supported_io_types": { 00:29:06.733 "read": true, 00:29:06.733 "write": true, 00:29:06.733 "unmap": true, 00:29:06.733 "flush": true, 00:29:06.733 "reset": true, 00:29:06.733 "nvme_admin": false, 00:29:06.733 "nvme_io": false, 00:29:06.733 "nvme_io_md": false, 00:29:06.733 "write_zeroes": true, 00:29:06.733 "zcopy": true, 00:29:06.733 "get_zone_info": false, 00:29:06.733 "zone_management": false, 00:29:06.733 "zone_append": false, 00:29:06.733 "compare": false, 00:29:06.733 "compare_and_write": false, 00:29:06.733 "abort": true, 00:29:06.733 "seek_hole": false, 00:29:06.733 "seek_data": false, 00:29:06.733 "copy": true, 00:29:06.733 "nvme_iov_md": false 00:29:06.733 }, 00:29:06.733 "memory_domains": [ 00:29:06.733 { 00:29:06.733 "dma_device_id": "system", 00:29:06.733 "dma_device_type": 1 00:29:06.733 }, 00:29:06.733 { 00:29:06.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:06.733 "dma_device_type": 2 00:29:06.733 } 00:29:06.733 ], 00:29:06.733 "driver_specific": {} 00:29:06.733 } 00:29:06.733 ] 00:29:06.733 15:23:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:29:06.733 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:29:06.733 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:06.733 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:06.733 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:06.733 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:06.733 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:06.733 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:06.733 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:06.733 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:06.733 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:06.733 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.733 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:06.993 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:06.993 "name": "Existed_Raid", 00:29:06.993 "uuid": "2afc5c3e-cd84-4f4a-aa0c-1d7710395fb1", 00:29:06.993 "strip_size_kb": 64, 00:29:06.993 "state": "online", 00:29:06.993 "raid_level": "raid5f", 00:29:06.993 "superblock": true, 00:29:06.993 "num_base_bdevs": 4, 00:29:06.993 "num_base_bdevs_discovered": 4, 00:29:06.993 "num_base_bdevs_operational": 4, 00:29:06.993 "base_bdevs_list": [ 00:29:06.993 { 00:29:06.993 "name": "NewBaseBdev", 00:29:06.993 "uuid": "8449f235-1d85-447e-9220-50ae709f12e5", 00:29:06.993 "is_configured": true, 00:29:06.993 "data_offset": 2048, 
00:29:06.993 "data_size": 63488 00:29:06.993 }, 00:29:06.993 { 00:29:06.993 "name": "BaseBdev2", 00:29:06.993 "uuid": "843e0067-16c9-48f2-b1f8-7eb5ebed113f", 00:29:06.993 "is_configured": true, 00:29:06.993 "data_offset": 2048, 00:29:06.993 "data_size": 63488 00:29:06.993 }, 00:29:06.993 { 00:29:06.993 "name": "BaseBdev3", 00:29:06.993 "uuid": "164295b7-92aa-4742-8070-0f53f3d39ff5", 00:29:06.993 "is_configured": true, 00:29:06.993 "data_offset": 2048, 00:29:06.993 "data_size": 63488 00:29:06.993 }, 00:29:06.993 { 00:29:06.993 "name": "BaseBdev4", 00:29:06.993 "uuid": "05efe9b4-2761-4dc4-b0a5-3b87c7c24a67", 00:29:06.993 "is_configured": true, 00:29:06.993 "data_offset": 2048, 00:29:06.993 "data_size": 63488 00:29:06.993 } 00:29:06.993 ] 00:29:06.993 }' 00:29:06.993 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:06.993 15:23:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.252 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:29:07.252 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:29:07.252 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:07.253 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:07.253 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:07.253 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:29:07.253 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:07.253 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:29:07.253 [2024-07-23 15:23:02.684566] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:07.512 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:07.512 "name": "Existed_Raid", 00:29:07.512 "aliases": [ 00:29:07.512 "2afc5c3e-cd84-4f4a-aa0c-1d7710395fb1" 00:29:07.512 ], 00:29:07.512 "product_name": "Raid Volume", 00:29:07.512 "block_size": 512, 00:29:07.512 "num_blocks": 190464, 00:29:07.512 "uuid": "2afc5c3e-cd84-4f4a-aa0c-1d7710395fb1", 00:29:07.512 "assigned_rate_limits": { 00:29:07.512 "rw_ios_per_sec": 0, 00:29:07.512 "rw_mbytes_per_sec": 0, 00:29:07.512 "r_mbytes_per_sec": 0, 00:29:07.512 "w_mbytes_per_sec": 0 00:29:07.512 }, 00:29:07.512 "claimed": false, 00:29:07.512 "zoned": false, 00:29:07.512 "supported_io_types": { 00:29:07.512 "read": true, 00:29:07.512 "write": true, 00:29:07.512 "unmap": false, 00:29:07.512 "flush": false, 00:29:07.512 "reset": true, 00:29:07.512 "nvme_admin": false, 00:29:07.512 "nvme_io": false, 00:29:07.512 "nvme_io_md": false, 00:29:07.512 "write_zeroes": true, 00:29:07.512 "zcopy": false, 00:29:07.512 "get_zone_info": false, 00:29:07.512 "zone_management": false, 00:29:07.512 "zone_append": false, 00:29:07.512 "compare": false, 00:29:07.512 "compare_and_write": false, 00:29:07.512 "abort": false, 00:29:07.512 "seek_hole": false, 00:29:07.512 "seek_data": false, 00:29:07.512 "copy": false, 00:29:07.512 "nvme_iov_md": false 00:29:07.512 }, 00:29:07.512 "driver_specific": { 00:29:07.512 "raid": { 00:29:07.512 "uuid": 
"2afc5c3e-cd84-4f4a-aa0c-1d7710395fb1", 00:29:07.512 "strip_size_kb": 64, 00:29:07.512 "state": "online", 00:29:07.512 "raid_level": "raid5f", 00:29:07.512 "superblock": true, 00:29:07.512 "num_base_bdevs": 4, 00:29:07.512 "num_base_bdevs_discovered": 4, 00:29:07.512 "num_base_bdevs_operational": 4, 00:29:07.512 "base_bdevs_list": [ 00:29:07.512 { 00:29:07.512 "name": "NewBaseBdev", 00:29:07.512 "uuid": "8449f235-1d85-447e-9220-50ae709f12e5", 00:29:07.512 "is_configured": true, 00:29:07.512 "data_offset": 2048, 00:29:07.512 "data_size": 63488 00:29:07.512 }, 00:29:07.512 { 00:29:07.512 "name": "BaseBdev2", 00:29:07.512 "uuid": "843e0067-16c9-48f2-b1f8-7eb5ebed113f", 00:29:07.512 "is_configured": true, 00:29:07.512 "data_offset": 2048, 00:29:07.512 "data_size": 63488 00:29:07.512 }, 00:29:07.512 { 00:29:07.512 "name": "BaseBdev3", 00:29:07.512 "uuid": "164295b7-92aa-4742-8070-0f53f3d39ff5", 00:29:07.512 "is_configured": true, 00:29:07.512 "data_offset": 2048, 00:29:07.512 "data_size": 63488 00:29:07.512 }, 00:29:07.512 { 00:29:07.512 "name": "BaseBdev4", 00:29:07.512 "uuid": "05efe9b4-2761-4dc4-b0a5-3b87c7c24a67", 00:29:07.512 "is_configured": true, 00:29:07.512 "data_offset": 2048, 00:29:07.512 "data_size": 63488 00:29:07.512 } 00:29:07.512 ] 00:29:07.512 } 00:29:07.512 } 00:29:07.512 }' 00:29:07.512 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:07.512 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:29:07.512 BaseBdev2 00:29:07.512 BaseBdev3 00:29:07.512 BaseBdev4' 00:29:07.512 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:07.512 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:29:07.512 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:07.512 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:07.512 "name": "NewBaseBdev", 00:29:07.512 "aliases": [ 00:29:07.512 "8449f235-1d85-447e-9220-50ae709f12e5" 00:29:07.512 ], 00:29:07.512 "product_name": "Malloc disk", 00:29:07.512 "block_size": 512, 00:29:07.513 "num_blocks": 65536, 00:29:07.513 "uuid": "8449f235-1d85-447e-9220-50ae709f12e5", 00:29:07.513 "assigned_rate_limits": { 00:29:07.513 "rw_ios_per_sec": 0, 00:29:07.513 "rw_mbytes_per_sec": 0, 00:29:07.513 "r_mbytes_per_sec": 0, 00:29:07.513 "w_mbytes_per_sec": 0 00:29:07.513 }, 00:29:07.513 "claimed": true, 00:29:07.513 "claim_type": "exclusive_write", 00:29:07.513 "zoned": false, 00:29:07.513 "supported_io_types": { 00:29:07.513 "read": true, 00:29:07.513 "write": true, 00:29:07.513 "unmap": true, 00:29:07.513 "flush": true, 00:29:07.513 "reset": true, 00:29:07.513 "nvme_admin": false, 00:29:07.513 "nvme_io": false, 00:29:07.513 "nvme_io_md": false, 00:29:07.513 "write_zeroes": true, 00:29:07.513 "zcopy": true, 00:29:07.513 "get_zone_info": false, 00:29:07.513 "zone_management": false, 00:29:07.513 "zone_append": false, 00:29:07.513 "compare": false, 00:29:07.513 "compare_and_write": false, 00:29:07.513 "abort": true, 00:29:07.513 "seek_hole": false, 00:29:07.513 "seek_data": false, 00:29:07.513 "copy": true, 00:29:07.513 "nvme_iov_md": false 00:29:07.513 }, 00:29:07.513 "memory_domains": [ 00:29:07.513 { 
00:29:07.513 "dma_device_id": "system", 00:29:07.513 "dma_device_type": 1 00:29:07.513 }, 00:29:07.513 { 00:29:07.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:07.513 "dma_device_type": 2 00:29:07.513 } 00:29:07.513 ], 00:29:07.513 "driver_specific": {} 00:29:07.513 }' 00:29:07.513 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:07.513 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:07.513 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:07.513 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:07.513 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:07.513 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:07.513 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:07.772 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:07.772 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:07.772 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:07.772 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:07.772 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:07.772 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:07.772 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:29:07.772 15:23:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:07.772 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:07.772 "name": "BaseBdev2", 00:29:07.772 "aliases": [ 00:29:07.772 "843e0067-16c9-48f2-b1f8-7eb5ebed113f" 00:29:07.772 ], 00:29:07.772 "product_name": "Malloc disk", 00:29:07.772 "block_size": 512, 00:29:07.772 "num_blocks": 65536, 00:29:07.772 "uuid": "843e0067-16c9-48f2-b1f8-7eb5ebed113f", 00:29:07.772 "assigned_rate_limits": { 00:29:07.772 "rw_ios_per_sec": 0, 00:29:07.772 "rw_mbytes_per_sec": 0, 00:29:07.772 "r_mbytes_per_sec": 0, 00:29:07.772 "w_mbytes_per_sec": 0 00:29:07.772 }, 00:29:07.772 "claimed": true, 00:29:07.772 "claim_type": "exclusive_write", 00:29:07.772 "zoned": false, 00:29:07.772 "supported_io_types": { 00:29:07.772 "read": true, 00:29:07.772 "write": true, 00:29:07.772 "unmap": true, 00:29:07.772 "flush": true, 00:29:07.772 "reset": true, 00:29:07.772 "nvme_admin": false, 00:29:07.772 "nvme_io": false, 00:29:07.772 "nvme_io_md": false, 00:29:07.772 "write_zeroes": true, 00:29:07.772 "zcopy": true, 00:29:07.772 "get_zone_info": false, 00:29:07.772 "zone_management": false, 00:29:07.772 "zone_append": false, 00:29:07.772 "compare": false, 00:29:07.772 "compare_and_write": false, 00:29:07.772 "abort": true, 00:29:07.772 "seek_hole": false, 00:29:07.772 "seek_data": false, 00:29:07.772 "copy": true, 00:29:07.772 "nvme_iov_md": false 00:29:07.772 }, 00:29:07.772 "memory_domains": [ 00:29:07.772 { 00:29:07.772 "dma_device_id": "system", 00:29:07.772 "dma_device_type": 1 00:29:07.772 }, 00:29:07.772 { 00:29:07.772 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:07.772 "dma_device_type": 2 00:29:07.772 } 00:29:07.772 ], 00:29:07.772 "driver_specific": {} 00:29:07.772 }' 00:29:07.772 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:07.772 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:07.772 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:07.772 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:07.772 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:08.031 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:08.031 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:08.031 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:08.031 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:08.031 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:08.031 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:08.031 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:08.031 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:08.031 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:29:08.031 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:08.290 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:08.290 "name": "BaseBdev3", 00:29:08.290 "aliases": [ 00:29:08.290 "164295b7-92aa-4742-8070-0f53f3d39ff5" 00:29:08.290 ], 00:29:08.290 "product_name": "Malloc disk", 00:29:08.290 "block_size": 512, 00:29:08.290 "num_blocks": 65536, 00:29:08.290 "uuid": "164295b7-92aa-4742-8070-0f53f3d39ff5", 00:29:08.290 "assigned_rate_limits": { 00:29:08.290 "rw_ios_per_sec": 0, 00:29:08.290 "rw_mbytes_per_sec": 0, 00:29:08.290 "r_mbytes_per_sec": 0, 00:29:08.290 "w_mbytes_per_sec": 0 00:29:08.290 }, 00:29:08.290 "claimed": true, 00:29:08.290 "claim_type": "exclusive_write", 00:29:08.290 "zoned": false, 00:29:08.290 "supported_io_types": { 00:29:08.290 "read": true, 00:29:08.290 "write": true, 00:29:08.290 "unmap": true, 00:29:08.290 "flush": true, 00:29:08.290 "reset": true, 00:29:08.290 "nvme_admin": false, 00:29:08.290 "nvme_io": false, 00:29:08.290 "nvme_io_md": false, 00:29:08.290 "write_zeroes": true, 00:29:08.290 "zcopy": true, 00:29:08.290 "get_zone_info": false, 00:29:08.290 "zone_management": false, 00:29:08.290 "zone_append": false, 00:29:08.290 "compare": false, 00:29:08.290 "compare_and_write": false, 00:29:08.290 "abort": true, 00:29:08.290 "seek_hole": false, 00:29:08.290 "seek_data": false, 00:29:08.290 "copy": true, 00:29:08.290 "nvme_iov_md": false 00:29:08.290 }, 00:29:08.290 "memory_domains": [ 00:29:08.290 { 00:29:08.290 "dma_device_id": "system", 00:29:08.290 "dma_device_type": 1 00:29:08.290 }, 00:29:08.290 { 00:29:08.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:08.290 "dma_device_type": 2 00:29:08.290 } 00:29:08.290 ], 00:29:08.290 
"driver_specific": {} 00:29:08.290 }' 00:29:08.290 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:08.290 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:08.290 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:08.290 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:08.290 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:08.290 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:08.290 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:08.290 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:08.290 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:08.290 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:08.290 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:08.290 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:08.291 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:08.291 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:29:08.291 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:08.548 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:08.548 "name": "BaseBdev4", 00:29:08.548 "aliases": [ 00:29:08.548 "05efe9b4-2761-4dc4-b0a5-3b87c7c24a67" 00:29:08.548 ], 00:29:08.548 "product_name": "Malloc disk", 00:29:08.548 "block_size": 512, 00:29:08.548 "num_blocks": 65536, 00:29:08.548 "uuid": "05efe9b4-2761-4dc4-b0a5-3b87c7c24a67", 00:29:08.548 "assigned_rate_limits": { 00:29:08.548 "rw_ios_per_sec": 0, 00:29:08.548 "rw_mbytes_per_sec": 0, 00:29:08.548 "r_mbytes_per_sec": 0, 00:29:08.548 "w_mbytes_per_sec": 0 00:29:08.548 }, 00:29:08.548 "claimed": true, 00:29:08.548 "claim_type": "exclusive_write", 00:29:08.548 "zoned": false, 00:29:08.548 "supported_io_types": { 00:29:08.548 "read": true, 00:29:08.548 "write": true, 00:29:08.548 "unmap": true, 00:29:08.548 "flush": true, 00:29:08.548 "reset": true, 00:29:08.548 "nvme_admin": false, 00:29:08.548 "nvme_io": false, 00:29:08.548 "nvme_io_md": false, 00:29:08.548 "write_zeroes": true, 00:29:08.548 "zcopy": true, 00:29:08.548 "get_zone_info": false, 00:29:08.548 "zone_management": false, 00:29:08.548 "zone_append": false, 00:29:08.548 "compare": false, 00:29:08.548 "compare_and_write": false, 00:29:08.548 "abort": true, 00:29:08.548 "seek_hole": false, 00:29:08.548 "seek_data": false, 00:29:08.548 "copy": true, 00:29:08.548 "nvme_iov_md": false 00:29:08.548 }, 00:29:08.548 "memory_domains": [ 00:29:08.548 { 00:29:08.548 "dma_device_id": "system", 00:29:08.548 "dma_device_type": 1 00:29:08.548 }, 00:29:08.548 { 00:29:08.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:08.548 "dma_device_type": 2 00:29:08.548 } 00:29:08.548 ], 00:29:08.548 "driver_specific": {} 00:29:08.548 }' 00:29:08.548 15:23:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:08.548 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:08.548 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:08.548 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:08.548 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:08.548 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:08.548 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:08.548 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:08.548 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:08.548 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:08.549 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:08.805 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:08.805 15:23:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:08.805 [2024-07-23 15:23:04.224710] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:08.805 [2024-07-23 15:23:04.224758] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:08.805 [2024-07-23 15:23:04.224865] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:08.805 [2024-07-23 15:23:04.225137] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:08.805 [2024-07-23 15:23:04.225156] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name Existed_Raid, state offline 00:29:09.062 15:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 117106 00:29:09.062 15:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 117106 ']' 00:29:09.062 15:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 117106 00:29:09.062 15:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:29:09.062 15:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:09.062 15:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117106 00:29:09.062 15:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:09.062 15:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:09.062 killing process with pid 117106 00:29:09.062 15:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117106' 00:29:09.062 15:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 117106 00:29:09.062 [2024-07-23 15:23:04.288077] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:09.062 15:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 
-- # wait 117106 00:29:09.062 [2024-07-23 15:23:04.334326] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:09.320 15:23:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:29:09.320 00:29:09.320 real 0m23.523s 00:29:09.320 user 0m41.052s 00:29:09.320 sys 0m5.160s 00:29:09.320 15:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:09.320 ************************************ 00:29:09.320 END TEST raid5f_state_function_test_sb 00:29:09.320 15:23:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:09.320 ************************************ 00:29:09.320 15:23:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:09.320 15:23:04 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:29:09.320 15:23:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:29:09.320 15:23:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:09.320 15:23:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:09.320 ************************************ 00:29:09.320 START TEST raid5f_superblock_test 00:29:09.320 ************************************ 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid5f 4 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=118054 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 118054 /var/tmp/spdk-raid.sock 00:29:09.320 
15:23:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 118054 ']' 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:09.320 15:23:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:09.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:09.321 15:23:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:09.321 15:23:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:09.321 15:23:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.321 [2024-07-23 15:23:04.715953] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:29:09.321 [2024-07-23 15:23:04.716156] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118054 ] 00:29:09.578 [2024-07-23 15:23:04.874245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.578 [2024-07-23 15:23:04.929054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.578 [2024-07-23 15:23:04.982305] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:10.514 15:23:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:10.514 15:23:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:29:10.514 15:23:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:29:10.514 15:23:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:29:10.514 15:23:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:29:10.514 15:23:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:29:10.514 15:23:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:10.514 15:23:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:10.514 15:23:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:29:10.514 15:23:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:10.514 15:23:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:29:10.514 malloc1 00:29:10.514 15:23:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:10.773 [2024-07-23 15:23:06.012818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:10.773 [2024-07-23 15:23:06.012918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:10.773 [2024-07-23 15:23:06.012945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 
00:29:10.773 [2024-07-23 15:23:06.012961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:10.773 [2024-07-23 15:23:06.015520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:10.773 [2024-07-23 15:23:06.015573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:10.773 pt1 00:29:10.773 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:29:10.773 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:29:10.773 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:29:10.773 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:29:10.773 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:10.773 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:10.773 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:29:10.773 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:10.773 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:29:10.774 malloc2 00:29:11.038 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:11.038 [2024-07-23 15:23:06.378517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:11.038 [2024-07-23 15:23:06.378605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:11.038 [2024-07-23 15:23:06.378630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:29:11.038 [2024-07-23 15:23:06.378647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:11.038 [2024-07-23 15:23:06.381234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:11.038 [2024-07-23 15:23:06.381280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:11.038 pt2 00:29:11.038 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:29:11.038 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:29:11.038 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:29:11.038 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:29:11.038 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:29:11.038 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:11.038 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:29:11.038 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:11.038 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b malloc3 00:29:11.296 malloc3 00:29:11.296 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:11.555 [2024-07-23 15:23:06.748200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:11.555 [2024-07-23 15:23:06.748291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:11.555 [2024-07-23 15:23:06.748317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:29:11.555 [2024-07-23 15:23:06.748333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:11.555 [2024-07-23 15:23:06.750855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:11.555 [2024-07-23 15:23:06.750900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:11.555 pt3 00:29:11.555 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:29:11.555 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:29:11.555 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:29:11.555 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:29:11.555 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:29:11.555 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:11.555 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:29:11.555 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:11.555 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:29:11.555 malloc4 00:29:11.555 15:23:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:11.814 [2024-07-23 15:23:07.093841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:11.814 [2024-07-23 15:23:07.093930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:11.814 [2024-07-23 15:23:07.093958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:29:11.814 [2024-07-23 15:23:07.093973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:11.814 [2024-07-23 15:23:07.096628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:11.814 [2024-07-23 15:23:07.096678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:11.814 pt4 00:29:11.814 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:29:11.814 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:29:11.814 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n 
raid_bdev1 -s 00:29:12.073 [2024-07-23 15:23:07.265947] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:12.073 [2024-07-23 15:23:07.268353] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:12.073 [2024-07-23 15:23:07.268420] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:12.073 [2024-07-23 15:23:07.268476] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:12.073 [2024-07-23 15:23:07.268679] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008480 00:29:12.073 [2024-07-23 15:23:07.268703] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:12.073 [2024-07-23 15:23:07.268830] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:29:12.073 [2024-07-23 15:23:07.269549] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008480 00:29:12.073 [2024-07-23 15:23:07.269571] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008480 00:29:12.073 [2024-07-23 15:23:07.269704] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:12.073 "name": "raid_bdev1", 00:29:12.073 "uuid": "cde3bca2-5e18-47be-a50a-4d627b518789", 00:29:12.073 "strip_size_kb": 64, 00:29:12.073 "state": "online", 00:29:12.073 "raid_level": "raid5f", 00:29:12.073 "superblock": true, 00:29:12.073 "num_base_bdevs": 4, 00:29:12.073 "num_base_bdevs_discovered": 4, 00:29:12.073 "num_base_bdevs_operational": 4, 00:29:12.073 "base_bdevs_list": [ 00:29:12.073 { 00:29:12.073 "name": "pt1", 00:29:12.073 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:12.073 "is_configured": true, 00:29:12.073 "data_offset": 2048, 00:29:12.073 "data_size": 63488 00:29:12.073 }, 00:29:12.073 { 00:29:12.073 "name": "pt2", 00:29:12.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:12.073 "is_configured": 
true, 00:29:12.073 "data_offset": 2048, 00:29:12.073 "data_size": 63488 00:29:12.073 }, 00:29:12.073 { 00:29:12.073 "name": "pt3", 00:29:12.073 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:12.073 "is_configured": true, 00:29:12.073 "data_offset": 2048, 00:29:12.073 "data_size": 63488 00:29:12.073 }, 00:29:12.073 { 00:29:12.073 "name": "pt4", 00:29:12.073 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:12.073 "is_configured": true, 00:29:12.073 "data_offset": 2048, 00:29:12.073 "data_size": 63488 00:29:12.073 } 00:29:12.073 ] 00:29:12.073 }' 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:12.073 15:23:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.332 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:29:12.332 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:29:12.332 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:12.332 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:12.332 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:12.332 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:29:12.332 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:12.332 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:12.591 [2024-07-23 15:23:07.978193] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:12.591 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:12.591 "name": "raid_bdev1", 00:29:12.591 "aliases": [ 00:29:12.591 "cde3bca2-5e18-47be-a50a-4d627b518789" 00:29:12.591 ], 00:29:12.591 "product_name": "Raid Volume", 00:29:12.591 "block_size": 512, 00:29:12.591 "num_blocks": 190464, 00:29:12.591 "uuid": "cde3bca2-5e18-47be-a50a-4d627b518789", 00:29:12.591 "assigned_rate_limits": { 00:29:12.591 "rw_ios_per_sec": 0, 00:29:12.591 "rw_mbytes_per_sec": 0, 00:29:12.591 "r_mbytes_per_sec": 0, 00:29:12.591 "w_mbytes_per_sec": 0 00:29:12.591 }, 00:29:12.591 "claimed": false, 00:29:12.591 "zoned": false, 00:29:12.591 "supported_io_types": { 00:29:12.591 "read": true, 00:29:12.591 "write": true, 00:29:12.591 "unmap": false, 00:29:12.591 "flush": false, 00:29:12.591 "reset": true, 00:29:12.591 "nvme_admin": false, 00:29:12.591 "nvme_io": false, 00:29:12.591 "nvme_io_md": false, 00:29:12.591 "write_zeroes": true, 00:29:12.591 "zcopy": false, 00:29:12.591 "get_zone_info": false, 00:29:12.591 "zone_management": false, 00:29:12.591 "zone_append": false, 00:29:12.591 "compare": false, 00:29:12.591 "compare_and_write": false, 00:29:12.591 "abort": false, 00:29:12.591 "seek_hole": false, 00:29:12.591 "seek_data": false, 00:29:12.591 "copy": false, 00:29:12.591 "nvme_iov_md": false 00:29:12.591 }, 00:29:12.591 "driver_specific": { 00:29:12.591 "raid": { 00:29:12.591 "uuid": "cde3bca2-5e18-47be-a50a-4d627b518789", 00:29:12.591 "strip_size_kb": 64, 00:29:12.591 "state": "online", 00:29:12.591 "raid_level": "raid5f", 00:29:12.591 "superblock": true, 00:29:12.591 "num_base_bdevs": 4, 00:29:12.591 "num_base_bdevs_discovered": 4, 00:29:12.591 "num_base_bdevs_operational": 4, 
00:29:12.591 "base_bdevs_list": [ 00:29:12.591 { 00:29:12.591 "name": "pt1", 00:29:12.591 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:12.591 "is_configured": true, 00:29:12.591 "data_offset": 2048, 00:29:12.591 "data_size": 63488 00:29:12.591 }, 00:29:12.591 { 00:29:12.591 "name": "pt2", 00:29:12.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:12.591 "is_configured": true, 00:29:12.591 "data_offset": 2048, 00:29:12.591 "data_size": 63488 00:29:12.591 }, 00:29:12.591 { 00:29:12.591 "name": "pt3", 00:29:12.591 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:12.591 "is_configured": true, 00:29:12.591 "data_offset": 2048, 00:29:12.591 "data_size": 63488 00:29:12.591 }, 00:29:12.591 { 00:29:12.591 "name": "pt4", 00:29:12.591 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:12.591 "is_configured": true, 00:29:12.591 "data_offset": 2048, 00:29:12.591 "data_size": 63488 00:29:12.591 } 00:29:12.591 ] 00:29:12.591 } 00:29:12.591 } 00:29:12.591 }' 00:29:12.591 15:23:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:12.591 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:29:12.591 pt2 00:29:12.591 pt3 00:29:12.591 pt4' 00:29:12.591 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:12.591 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:29:12.591 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:12.851 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:12.851 "name": "pt1", 00:29:12.851 "aliases": [ 00:29:12.851 "00000000-0000-0000-0000-000000000001" 00:29:12.851 ], 00:29:12.851 "product_name": "passthru", 00:29:12.851 "block_size": 512, 00:29:12.851 "num_blocks": 65536, 00:29:12.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:12.851 "assigned_rate_limits": { 00:29:12.851 "rw_ios_per_sec": 0, 00:29:12.851 "rw_mbytes_per_sec": 0, 00:29:12.851 "r_mbytes_per_sec": 0, 00:29:12.851 "w_mbytes_per_sec": 0 00:29:12.851 }, 00:29:12.851 "claimed": true, 00:29:12.851 "claim_type": "exclusive_write", 00:29:12.851 "zoned": false, 00:29:12.851 "supported_io_types": { 00:29:12.851 "read": true, 00:29:12.851 "write": true, 00:29:12.851 "unmap": true, 00:29:12.851 "flush": true, 00:29:12.851 "reset": true, 00:29:12.851 "nvme_admin": false, 00:29:12.851 "nvme_io": false, 00:29:12.851 "nvme_io_md": false, 00:29:12.851 "write_zeroes": true, 00:29:12.851 "zcopy": true, 00:29:12.851 "get_zone_info": false, 00:29:12.851 "zone_management": false, 00:29:12.851 "zone_append": false, 00:29:12.851 "compare": false, 00:29:12.851 "compare_and_write": false, 00:29:12.851 "abort": true, 00:29:12.851 "seek_hole": false, 00:29:12.851 "seek_data": false, 00:29:12.851 "copy": true, 00:29:12.851 "nvme_iov_md": false 00:29:12.851 }, 00:29:12.851 "memory_domains": [ 00:29:12.851 { 00:29:12.851 "dma_device_id": "system", 00:29:12.851 "dma_device_type": 1 00:29:12.851 }, 00:29:12.851 { 00:29:12.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:12.851 "dma_device_type": 2 00:29:12.851 } 00:29:12.851 ], 00:29:12.851 "driver_specific": { 00:29:12.851 "passthru": { 00:29:12.851 "name": "pt1", 00:29:12.851 "base_bdev_name": "malloc1" 00:29:12.851 } 00:29:12.851 } 00:29:12.851 }' 00:29:12.851 15:23:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:12.851 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:12.851 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:12.851 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:12.851 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:12.851 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:12.851 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:12.851 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:12.851 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:12.851 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:12.851 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:13.110 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:13.110 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:13.110 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:13.110 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:13.369 "name": "pt2", 00:29:13.369 "aliases": [ 00:29:13.369 "00000000-0000-0000-0000-000000000002" 00:29:13.369 ], 00:29:13.369 "product_name": "passthru", 00:29:13.369 "block_size": 512, 00:29:13.369 "num_blocks": 65536, 00:29:13.369 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:13.369 "assigned_rate_limits": { 00:29:13.369 "rw_ios_per_sec": 0, 00:29:13.369 "rw_mbytes_per_sec": 0, 00:29:13.369 "r_mbytes_per_sec": 0, 00:29:13.369 "w_mbytes_per_sec": 0 00:29:13.369 }, 00:29:13.369 "claimed": true, 00:29:13.369 "claim_type": "exclusive_write", 00:29:13.369 "zoned": false, 00:29:13.369 "supported_io_types": { 00:29:13.369 "read": true, 00:29:13.369 "write": true, 00:29:13.369 "unmap": true, 00:29:13.369 "flush": true, 00:29:13.369 "reset": true, 00:29:13.369 "nvme_admin": false, 00:29:13.369 "nvme_io": false, 00:29:13.369 "nvme_io_md": false, 00:29:13.369 "write_zeroes": true, 00:29:13.369 "zcopy": true, 00:29:13.369 "get_zone_info": false, 00:29:13.369 "zone_management": false, 00:29:13.369 "zone_append": false, 00:29:13.369 "compare": false, 00:29:13.369 "compare_and_write": false, 00:29:13.369 "abort": true, 00:29:13.369 "seek_hole": false, 00:29:13.369 "seek_data": false, 00:29:13.369 "copy": true, 00:29:13.369 "nvme_iov_md": false 00:29:13.369 }, 00:29:13.369 "memory_domains": [ 00:29:13.369 { 00:29:13.369 "dma_device_id": "system", 00:29:13.369 "dma_device_type": 1 00:29:13.369 }, 00:29:13.369 { 00:29:13.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:13.369 "dma_device_type": 2 00:29:13.369 } 00:29:13.369 ], 00:29:13.369 "driver_specific": { 00:29:13.369 "passthru": { 00:29:13.369 "name": "pt2", 00:29:13.369 "base_bdev_name": "malloc2" 00:29:13.369 } 00:29:13.369 } 00:29:13.369 }' 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:13.369 15:23:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:13.369 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:29:13.628 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:13.628 "name": "pt3", 00:29:13.628 "aliases": [ 00:29:13.628 "00000000-0000-0000-0000-000000000003" 00:29:13.628 ], 00:29:13.628 "product_name": "passthru", 00:29:13.628 "block_size": 512, 00:29:13.628 "num_blocks": 65536, 00:29:13.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:13.628 "assigned_rate_limits": { 00:29:13.628 "rw_ios_per_sec": 0, 00:29:13.628 "rw_mbytes_per_sec": 0, 00:29:13.628 "r_mbytes_per_sec": 0, 00:29:13.628 "w_mbytes_per_sec": 0 00:29:13.628 }, 00:29:13.628 "claimed": true, 00:29:13.628 "claim_type": "exclusive_write", 00:29:13.628 "zoned": false, 00:29:13.628 "supported_io_types": { 00:29:13.628 "read": true, 00:29:13.628 "write": true, 00:29:13.628 "unmap": true, 00:29:13.628 "flush": true, 00:29:13.628 "reset": true, 00:29:13.628 "nvme_admin": false, 00:29:13.628 "nvme_io": false, 00:29:13.628 "nvme_io_md": false, 00:29:13.628 "write_zeroes": true, 00:29:13.628 "zcopy": true, 00:29:13.628 "get_zone_info": false, 00:29:13.628 "zone_management": false, 00:29:13.628 "zone_append": false, 00:29:13.628 "compare": false, 00:29:13.628 "compare_and_write": false, 00:29:13.628 "abort": true, 00:29:13.628 "seek_hole": false, 00:29:13.628 "seek_data": false, 00:29:13.628 "copy": true, 00:29:13.628 "nvme_iov_md": false 00:29:13.628 }, 00:29:13.628 "memory_domains": [ 00:29:13.628 { 00:29:13.628 "dma_device_id": "system", 00:29:13.628 "dma_device_type": 1 00:29:13.628 }, 00:29:13.628 { 00:29:13.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:13.628 "dma_device_type": 2 00:29:13.628 } 00:29:13.628 ], 00:29:13.628 "driver_specific": { 00:29:13.628 "passthru": { 00:29:13.628 "name": "pt3", 00:29:13.628 "base_bdev_name": "malloc3" 00:29:13.628 } 00:29:13.628 } 00:29:13.628 }' 00:29:13.628 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:13.628 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:13.628 15:23:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:13.629 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:13.629 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:13.629 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:13.629 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:13.629 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:13.629 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:13.629 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:13.629 15:23:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:13.629 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:13.629 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:13.629 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:29:13.629 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:13.888 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:13.888 "name": "pt4", 00:29:13.888 "aliases": [ 00:29:13.888 "00000000-0000-0000-0000-000000000004" 00:29:13.888 ], 00:29:13.888 "product_name": "passthru", 00:29:13.888 "block_size": 512, 00:29:13.888 "num_blocks": 65536, 00:29:13.888 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:13.888 "assigned_rate_limits": { 00:29:13.888 "rw_ios_per_sec": 0, 00:29:13.888 "rw_mbytes_per_sec": 0, 00:29:13.888 "r_mbytes_per_sec": 0, 00:29:13.888 "w_mbytes_per_sec": 0 00:29:13.888 }, 00:29:13.888 "claimed": true, 00:29:13.888 "claim_type": "exclusive_write", 00:29:13.888 "zoned": false, 00:29:13.888 "supported_io_types": { 00:29:13.888 "read": true, 00:29:13.888 "write": true, 00:29:13.888 "unmap": true, 00:29:13.888 "flush": true, 00:29:13.888 "reset": true, 00:29:13.888 "nvme_admin": false, 00:29:13.888 "nvme_io": false, 00:29:13.888 "nvme_io_md": false, 00:29:13.888 "write_zeroes": true, 00:29:13.888 "zcopy": true, 00:29:13.888 "get_zone_info": false, 00:29:13.888 "zone_management": false, 00:29:13.888 "zone_append": false, 00:29:13.888 "compare": false, 00:29:13.888 "compare_and_write": false, 00:29:13.888 "abort": true, 00:29:13.888 "seek_hole": false, 00:29:13.888 "seek_data": false, 00:29:13.888 "copy": true, 00:29:13.888 "nvme_iov_md": false 00:29:13.888 }, 00:29:13.888 "memory_domains": [ 00:29:13.888 { 00:29:13.888 "dma_device_id": "system", 00:29:13.888 "dma_device_type": 1 00:29:13.888 }, 00:29:13.888 { 00:29:13.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:13.888 "dma_device_type": 2 00:29:13.888 } 00:29:13.888 ], 00:29:13.888 "driver_specific": { 00:29:13.888 "passthru": { 00:29:13.888 "name": "pt4", 00:29:13.888 "base_bdev_name": "malloc4" 00:29:13.888 } 00:29:13.888 } 00:29:13.888 }' 00:29:13.888 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:13.888 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:13.888 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:13.888 15:23:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:13.888 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:13.888 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:13.888 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:13.888 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:13.888 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:13.888 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:13.888 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:13.888 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:13.888 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:13.888 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:29:14.147 [2024-07-23 15:23:09.514566] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:14.147 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=cde3bca2-5e18-47be-a50a-4d627b518789 00:29:14.147 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z cde3bca2-5e18-47be-a50a-4d627b518789 ']' 00:29:14.147 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:14.405 [2024-07-23 15:23:09.690385] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:14.405 [2024-07-23 15:23:09.690430] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:14.405 [2024-07-23 15:23:09.690542] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:14.405 [2024-07-23 15:23:09.690647] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:14.405 [2024-07-23 15:23:09.690676] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008480 name raid_bdev1, state offline 00:29:14.405 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.405 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:29:14.664 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:29:14.664 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:29:14.664 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:14.664 15:23:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:14.923 15:23:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:14.923 15:23:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:14.923 15:23:10 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:14.923 15:23:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:15.181 15:23:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:15.181 15:23:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:29:15.439 15:23:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:29:15.439 15:23:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:15.698 15:23:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:29:15.698 15:23:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:15.698 15:23:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:29:15.698 15:23:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:15.698 15:23:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:15.698 15:23:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:15.698 15:23:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:15.698 15:23:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:15.698 15:23:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:15.698 15:23:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:15.698 15:23:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:15.698 15:23:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:15.698 15:23:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:15.957 [2024-07-23 15:23:11.154699] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:15.957 [2024-07-23 15:23:11.157099] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:15.957 [2024-07-23 15:23:11.157158] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:29:15.957 [2024-07-23 15:23:11.157190] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:29:15.957 [2024-07-23 15:23:11.157241] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on 
bdev malloc1 00:29:15.957 [2024-07-23 15:23:11.157300] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:15.957 [2024-07-23 15:23:11.157327] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:29:15.957 [2024-07-23 15:23:11.157346] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:29:15.957 [2024-07-23 15:23:11.157365] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:15.957 [2024-07-23 15:23:11.157385] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state configuring 00:29:15.957 request: 00:29:15.957 { 00:29:15.957 "name": "raid_bdev1", 00:29:15.957 "raid_level": "raid5f", 00:29:15.957 "base_bdevs": [ 00:29:15.957 "malloc1", 00:29:15.957 "malloc2", 00:29:15.957 "malloc3", 00:29:15.957 "malloc4" 00:29:15.957 ], 00:29:15.957 "strip_size_kb": 64, 00:29:15.957 "superblock": false, 00:29:15.957 "method": "bdev_raid_create", 00:29:15.957 "req_id": 1 00:29:15.957 } 00:29:15.957 Got JSON-RPC error response 00:29:15.957 response: 00:29:15.957 { 00:29:15.957 "code": -17, 00:29:15.957 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:15.957 } 00:29:15.957 15:23:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:29:15.957 15:23:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:15.957 15:23:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:15.957 15:23:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:15.957 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:15.957 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:29:15.957 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:29:15.957 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:29:15.957 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:16.217 [2024-07-23 15:23:11.510709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:16.217 [2024-07-23 15:23:11.510806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:16.217 [2024-07-23 15:23:11.510834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:29:16.217 [2024-07-23 15:23:11.510846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.217 [2024-07-23 15:23:11.513291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:16.217 [2024-07-23 15:23:11.513333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:16.217 [2024-07-23 15:23:11.513414] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:16.217 [2024-07-23 15:23:11.513472] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:16.217 pt1 00:29:16.217 15:23:11 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:29:16.217 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:16.217 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:16.217 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:16.217 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:16.217 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:16.217 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:16.217 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:16.217 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:16.217 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:16.217 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.217 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:16.476 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:16.476 "name": "raid_bdev1", 00:29:16.476 "uuid": "cde3bca2-5e18-47be-a50a-4d627b518789", 00:29:16.476 "strip_size_kb": 64, 00:29:16.476 "state": "configuring", 00:29:16.476 "raid_level": "raid5f", 00:29:16.476 "superblock": true, 00:29:16.476 "num_base_bdevs": 4, 00:29:16.476 "num_base_bdevs_discovered": 1, 00:29:16.476 "num_base_bdevs_operational": 4, 00:29:16.476 "base_bdevs_list": [ 00:29:16.476 { 00:29:16.476 "name": "pt1", 00:29:16.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:16.476 "is_configured": true, 00:29:16.476 "data_offset": 2048, 00:29:16.476 "data_size": 63488 00:29:16.476 }, 00:29:16.476 { 00:29:16.476 "name": null, 00:29:16.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:16.476 "is_configured": false, 00:29:16.476 "data_offset": 2048, 00:29:16.476 "data_size": 63488 00:29:16.476 }, 00:29:16.476 { 00:29:16.476 "name": null, 00:29:16.476 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:16.476 "is_configured": false, 00:29:16.476 "data_offset": 2048, 00:29:16.476 "data_size": 63488 00:29:16.476 }, 00:29:16.476 { 00:29:16.476 "name": null, 00:29:16.476 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:16.476 "is_configured": false, 00:29:16.476 "data_offset": 2048, 00:29:16.476 "data_size": 63488 00:29:16.476 } 00:29:16.476 ] 00:29:16.476 }' 00:29:16.476 15:23:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:16.476 15:23:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.734 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:29:16.734 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:16.993 [2024-07-23 15:23:12.182862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:16.993 [2024-07-23 15:23:12.182936] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:29:16.993 [2024-07-23 15:23:12.182963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:29:16.993 [2024-07-23 15:23:12.182976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.993 [2024-07-23 15:23:12.183392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:16.993 [2024-07-23 15:23:12.183411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:16.993 [2024-07-23 15:23:12.183487] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:16.993 [2024-07-23 15:23:12.183510] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:16.993 pt2 00:29:16.993 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:17.251 [2024-07-23 15:23:12.442939] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:29:17.251 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:29:17.251 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:17.251 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:17.251 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:17.251 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:17.251 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:17.251 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:17.251 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:17.251 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:17.251 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:17.251 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.251 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.510 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:17.510 "name": "raid_bdev1", 00:29:17.510 "uuid": "cde3bca2-5e18-47be-a50a-4d627b518789", 00:29:17.510 "strip_size_kb": 64, 00:29:17.510 "state": "configuring", 00:29:17.510 "raid_level": "raid5f", 00:29:17.510 "superblock": true, 00:29:17.510 "num_base_bdevs": 4, 00:29:17.510 "num_base_bdevs_discovered": 1, 00:29:17.510 "num_base_bdevs_operational": 4, 00:29:17.510 "base_bdevs_list": [ 00:29:17.510 { 00:29:17.510 "name": "pt1", 00:29:17.510 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:17.510 "is_configured": true, 00:29:17.510 "data_offset": 2048, 00:29:17.510 "data_size": 63488 00:29:17.510 }, 00:29:17.510 { 00:29:17.510 "name": null, 00:29:17.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:17.510 "is_configured": false, 00:29:17.510 "data_offset": 2048, 00:29:17.510 "data_size": 63488 00:29:17.510 }, 00:29:17.510 { 00:29:17.510 "name": null, 00:29:17.510 "uuid": "00000000-0000-0000-0000-000000000003", 
00:29:17.510 "is_configured": false, 00:29:17.510 "data_offset": 2048, 00:29:17.510 "data_size": 63488 00:29:17.510 }, 00:29:17.510 { 00:29:17.510 "name": null, 00:29:17.510 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:17.510 "is_configured": false, 00:29:17.510 "data_offset": 2048, 00:29:17.510 "data_size": 63488 00:29:17.510 } 00:29:17.510 ] 00:29:17.510 }' 00:29:17.510 15:23:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:17.510 15:23:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.769 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:29:17.769 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:17.769 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:17.769 [2024-07-23 15:23:13.187077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:17.769 [2024-07-23 15:23:13.187159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:17.769 [2024-07-23 15:23:13.187188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:29:17.769 [2024-07-23 15:23:13.187208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:17.769 [2024-07-23 15:23:13.187632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:17.769 [2024-07-23 15:23:13.187657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:17.769 [2024-07-23 15:23:13.187727] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:17.769 [2024-07-23 15:23:13.187752] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:17.769 pt2 00:29:18.029 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:29:18.029 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:18.029 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:18.029 [2024-07-23 15:23:13.451134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:18.029 [2024-07-23 15:23:13.451378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:18.029 [2024-07-23 15:23:13.451412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:29:18.029 [2024-07-23 15:23:13.451427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:18.029 [2024-07-23 15:23:13.451886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:18.029 [2024-07-23 15:23:13.451911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:18.029 [2024-07-23 15:23:13.451985] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:18.029 [2024-07-23 15:23:13.452013] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:18.029 pt3 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:29:18.288 15:23:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:18.288 [2024-07-23 15:23:13.631158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:18.288 [2024-07-23 15:23:13.631247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:18.288 [2024-07-23 15:23:13.631273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:29:18.288 [2024-07-23 15:23:13.631292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:18.288 [2024-07-23 15:23:13.631733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:18.288 [2024-07-23 15:23:13.631757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:18.288 [2024-07-23 15:23:13.631852] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:18.288 [2024-07-23 15:23:13.631881] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:18.288 [2024-07-23 15:23:13.632012] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:29:18.288 [2024-07-23 15:23:13.632025] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:18.288 [2024-07-23 15:23:13.632091] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:29:18.288 [2024-07-23 15:23:13.632726] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:29:18.288 [2024-07-23 15:23:13.632740] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:29:18.288 [2024-07-23 15:23:13.632856] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:18.288 pt4 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:18.288 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:18.546 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:18.546 "name": "raid_bdev1", 00:29:18.546 "uuid": "cde3bca2-5e18-47be-a50a-4d627b518789", 00:29:18.546 "strip_size_kb": 64, 00:29:18.546 "state": "online", 00:29:18.546 "raid_level": "raid5f", 00:29:18.546 "superblock": true, 00:29:18.546 "num_base_bdevs": 4, 00:29:18.546 "num_base_bdevs_discovered": 4, 00:29:18.546 "num_base_bdevs_operational": 4, 00:29:18.546 "base_bdevs_list": [ 00:29:18.546 { 00:29:18.546 "name": "pt1", 00:29:18.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:18.546 "is_configured": true, 00:29:18.546 "data_offset": 2048, 00:29:18.546 "data_size": 63488 00:29:18.546 }, 00:29:18.546 { 00:29:18.546 "name": "pt2", 00:29:18.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:18.547 "is_configured": true, 00:29:18.547 "data_offset": 2048, 00:29:18.547 "data_size": 63488 00:29:18.547 }, 00:29:18.547 { 00:29:18.547 "name": "pt3", 00:29:18.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:18.547 "is_configured": true, 00:29:18.547 "data_offset": 2048, 00:29:18.547 "data_size": 63488 00:29:18.547 }, 00:29:18.547 { 00:29:18.547 "name": "pt4", 00:29:18.547 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:18.547 "is_configured": true, 00:29:18.547 "data_offset": 2048, 00:29:18.547 "data_size": 63488 00:29:18.547 } 00:29:18.547 ] 00:29:18.547 }' 00:29:18.547 15:23:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:18.547 15:23:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.114 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:29:19.114 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:29:19.114 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:19.114 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:19.114 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:19.114 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:29:19.114 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:19.114 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:19.114 [2024-07-23 15:23:14.415571] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:19.114 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:19.114 "name": "raid_bdev1", 00:29:19.114 "aliases": [ 00:29:19.114 "cde3bca2-5e18-47be-a50a-4d627b518789" 00:29:19.114 ], 00:29:19.114 "product_name": "Raid Volume", 00:29:19.114 "block_size": 512, 00:29:19.114 "num_blocks": 190464, 00:29:19.114 "uuid": "cde3bca2-5e18-47be-a50a-4d627b518789", 00:29:19.114 "assigned_rate_limits": { 00:29:19.114 "rw_ios_per_sec": 0, 00:29:19.114 "rw_mbytes_per_sec": 0, 00:29:19.114 "r_mbytes_per_sec": 0, 00:29:19.114 "w_mbytes_per_sec": 0 00:29:19.114 }, 00:29:19.114 "claimed": false, 00:29:19.114 "zoned": false, 00:29:19.114 "supported_io_types": { 00:29:19.114 
"read": true, 00:29:19.114 "write": true, 00:29:19.114 "unmap": false, 00:29:19.114 "flush": false, 00:29:19.114 "reset": true, 00:29:19.114 "nvme_admin": false, 00:29:19.114 "nvme_io": false, 00:29:19.114 "nvme_io_md": false, 00:29:19.114 "write_zeroes": true, 00:29:19.114 "zcopy": false, 00:29:19.114 "get_zone_info": false, 00:29:19.114 "zone_management": false, 00:29:19.114 "zone_append": false, 00:29:19.114 "compare": false, 00:29:19.114 "compare_and_write": false, 00:29:19.114 "abort": false, 00:29:19.114 "seek_hole": false, 00:29:19.114 "seek_data": false, 00:29:19.114 "copy": false, 00:29:19.114 "nvme_iov_md": false 00:29:19.114 }, 00:29:19.114 "driver_specific": { 00:29:19.114 "raid": { 00:29:19.114 "uuid": "cde3bca2-5e18-47be-a50a-4d627b518789", 00:29:19.114 "strip_size_kb": 64, 00:29:19.114 "state": "online", 00:29:19.114 "raid_level": "raid5f", 00:29:19.114 "superblock": true, 00:29:19.114 "num_base_bdevs": 4, 00:29:19.114 "num_base_bdevs_discovered": 4, 00:29:19.114 "num_base_bdevs_operational": 4, 00:29:19.114 "base_bdevs_list": [ 00:29:19.114 { 00:29:19.114 "name": "pt1", 00:29:19.114 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:19.114 "is_configured": true, 00:29:19.114 "data_offset": 2048, 00:29:19.114 "data_size": 63488 00:29:19.114 }, 00:29:19.114 { 00:29:19.114 "name": "pt2", 00:29:19.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:19.114 "is_configured": true, 00:29:19.114 "data_offset": 2048, 00:29:19.114 "data_size": 63488 00:29:19.114 }, 00:29:19.114 { 00:29:19.114 "name": "pt3", 00:29:19.114 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:19.114 "is_configured": true, 00:29:19.114 "data_offset": 2048, 00:29:19.114 "data_size": 63488 00:29:19.114 }, 00:29:19.114 { 00:29:19.114 "name": "pt4", 00:29:19.114 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:19.114 "is_configured": true, 00:29:19.114 "data_offset": 2048, 00:29:19.114 "data_size": 63488 00:29:19.114 } 00:29:19.114 ] 00:29:19.114 } 00:29:19.114 } 00:29:19.114 }' 00:29:19.114 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:19.114 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:29:19.114 pt2 00:29:19.114 pt3 00:29:19.114 pt4' 00:29:19.115 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:19.115 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:19.115 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:29:19.373 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:19.373 "name": "pt1", 00:29:19.373 "aliases": [ 00:29:19.373 "00000000-0000-0000-0000-000000000001" 00:29:19.373 ], 00:29:19.373 "product_name": "passthru", 00:29:19.373 "block_size": 512, 00:29:19.373 "num_blocks": 65536, 00:29:19.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:19.373 "assigned_rate_limits": { 00:29:19.373 "rw_ios_per_sec": 0, 00:29:19.373 "rw_mbytes_per_sec": 0, 00:29:19.373 "r_mbytes_per_sec": 0, 00:29:19.373 "w_mbytes_per_sec": 0 00:29:19.373 }, 00:29:19.373 "claimed": true, 00:29:19.373 "claim_type": "exclusive_write", 00:29:19.374 "zoned": false, 00:29:19.374 "supported_io_types": { 00:29:19.374 "read": true, 00:29:19.374 "write": true, 00:29:19.374 "unmap": true, 00:29:19.374 
"flush": true, 00:29:19.374 "reset": true, 00:29:19.374 "nvme_admin": false, 00:29:19.374 "nvme_io": false, 00:29:19.374 "nvme_io_md": false, 00:29:19.374 "write_zeroes": true, 00:29:19.374 "zcopy": true, 00:29:19.374 "get_zone_info": false, 00:29:19.374 "zone_management": false, 00:29:19.374 "zone_append": false, 00:29:19.374 "compare": false, 00:29:19.374 "compare_and_write": false, 00:29:19.374 "abort": true, 00:29:19.374 "seek_hole": false, 00:29:19.374 "seek_data": false, 00:29:19.374 "copy": true, 00:29:19.374 "nvme_iov_md": false 00:29:19.374 }, 00:29:19.374 "memory_domains": [ 00:29:19.374 { 00:29:19.374 "dma_device_id": "system", 00:29:19.374 "dma_device_type": 1 00:29:19.374 }, 00:29:19.374 { 00:29:19.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.374 "dma_device_type": 2 00:29:19.374 } 00:29:19.374 ], 00:29:19.374 "driver_specific": { 00:29:19.374 "passthru": { 00:29:19.374 "name": "pt1", 00:29:19.374 "base_bdev_name": "malloc1" 00:29:19.374 } 00:29:19.374 } 00:29:19.374 }' 00:29:19.374 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:19.374 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:19.374 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:19.374 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:19.374 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:19.374 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:19.374 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:19.374 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:19.374 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:19.374 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:19.374 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:19.374 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:19.374 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:19.374 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:29:19.636 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:19.636 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:19.636 "name": "pt2", 00:29:19.636 "aliases": [ 00:29:19.636 "00000000-0000-0000-0000-000000000002" 00:29:19.636 ], 00:29:19.636 "product_name": "passthru", 00:29:19.636 "block_size": 512, 00:29:19.636 "num_blocks": 65536, 00:29:19.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:19.636 "assigned_rate_limits": { 00:29:19.636 "rw_ios_per_sec": 0, 00:29:19.636 "rw_mbytes_per_sec": 0, 00:29:19.636 "r_mbytes_per_sec": 0, 00:29:19.636 "w_mbytes_per_sec": 0 00:29:19.636 }, 00:29:19.636 "claimed": true, 00:29:19.636 "claim_type": "exclusive_write", 00:29:19.636 "zoned": false, 00:29:19.636 "supported_io_types": { 00:29:19.636 "read": true, 00:29:19.636 "write": true, 00:29:19.636 "unmap": true, 00:29:19.636 "flush": true, 00:29:19.636 "reset": true, 00:29:19.636 "nvme_admin": false, 00:29:19.636 "nvme_io": false, 00:29:19.636 
"nvme_io_md": false, 00:29:19.636 "write_zeroes": true, 00:29:19.636 "zcopy": true, 00:29:19.636 "get_zone_info": false, 00:29:19.636 "zone_management": false, 00:29:19.636 "zone_append": false, 00:29:19.636 "compare": false, 00:29:19.636 "compare_and_write": false, 00:29:19.636 "abort": true, 00:29:19.636 "seek_hole": false, 00:29:19.636 "seek_data": false, 00:29:19.636 "copy": true, 00:29:19.636 "nvme_iov_md": false 00:29:19.636 }, 00:29:19.636 "memory_domains": [ 00:29:19.636 { 00:29:19.636 "dma_device_id": "system", 00:29:19.636 "dma_device_type": 1 00:29:19.636 }, 00:29:19.636 { 00:29:19.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.636 "dma_device_type": 2 00:29:19.636 } 00:29:19.636 ], 00:29:19.636 "driver_specific": { 00:29:19.636 "passthru": { 00:29:19.636 "name": "pt2", 00:29:19.636 "base_bdev_name": "malloc2" 00:29:19.636 } 00:29:19.636 } 00:29:19.636 }' 00:29:19.636 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:19.636 15:23:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:19.636 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:19.636 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:19.636 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:19.636 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:19.636 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:19.636 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:19.636 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:19.636 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:19.895 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:19.895 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:19.895 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:19.895 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:29:19.895 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:19.895 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:19.895 "name": "pt3", 00:29:19.895 "aliases": [ 00:29:19.895 "00000000-0000-0000-0000-000000000003" 00:29:19.895 ], 00:29:19.895 "product_name": "passthru", 00:29:19.895 "block_size": 512, 00:29:19.895 "num_blocks": 65536, 00:29:19.895 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:19.895 "assigned_rate_limits": { 00:29:19.895 "rw_ios_per_sec": 0, 00:29:19.895 "rw_mbytes_per_sec": 0, 00:29:19.895 "r_mbytes_per_sec": 0, 00:29:19.895 "w_mbytes_per_sec": 0 00:29:19.895 }, 00:29:19.895 "claimed": true, 00:29:19.895 "claim_type": "exclusive_write", 00:29:19.895 "zoned": false, 00:29:19.895 "supported_io_types": { 00:29:19.895 "read": true, 00:29:19.895 "write": true, 00:29:19.895 "unmap": true, 00:29:19.895 "flush": true, 00:29:19.895 "reset": true, 00:29:19.895 "nvme_admin": false, 00:29:19.895 "nvme_io": false, 00:29:19.895 "nvme_io_md": false, 00:29:19.895 "write_zeroes": true, 00:29:19.895 "zcopy": true, 00:29:19.895 "get_zone_info": false, 
00:29:19.895 "zone_management": false, 00:29:19.895 "zone_append": false, 00:29:19.895 "compare": false, 00:29:19.895 "compare_and_write": false, 00:29:19.895 "abort": true, 00:29:19.895 "seek_hole": false, 00:29:19.895 "seek_data": false, 00:29:19.895 "copy": true, 00:29:19.895 "nvme_iov_md": false 00:29:19.895 }, 00:29:19.895 "memory_domains": [ 00:29:19.895 { 00:29:19.895 "dma_device_id": "system", 00:29:19.895 "dma_device_type": 1 00:29:19.895 }, 00:29:19.895 { 00:29:19.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.895 "dma_device_type": 2 00:29:19.895 } 00:29:19.895 ], 00:29:19.895 "driver_specific": { 00:29:19.895 "passthru": { 00:29:19.895 "name": "pt3", 00:29:19.895 "base_bdev_name": "malloc3" 00:29:19.895 } 00:29:19.895 } 00:29:19.895 }' 00:29:19.895 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:19.895 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:19.895 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:19.895 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:19.895 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:19.895 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:19.895 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:20.207 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:20.207 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:20.207 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:20.207 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:20.207 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:20.207 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:20.207 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:29:20.207 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:20.466 "name": "pt4", 00:29:20.466 "aliases": [ 00:29:20.466 "00000000-0000-0000-0000-000000000004" 00:29:20.466 ], 00:29:20.466 "product_name": "passthru", 00:29:20.466 "block_size": 512, 00:29:20.466 "num_blocks": 65536, 00:29:20.466 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:20.466 "assigned_rate_limits": { 00:29:20.466 "rw_ios_per_sec": 0, 00:29:20.466 "rw_mbytes_per_sec": 0, 00:29:20.466 "r_mbytes_per_sec": 0, 00:29:20.466 "w_mbytes_per_sec": 0 00:29:20.466 }, 00:29:20.466 "claimed": true, 00:29:20.466 "claim_type": "exclusive_write", 00:29:20.466 "zoned": false, 00:29:20.466 "supported_io_types": { 00:29:20.466 "read": true, 00:29:20.466 "write": true, 00:29:20.466 "unmap": true, 00:29:20.466 "flush": true, 00:29:20.466 "reset": true, 00:29:20.466 "nvme_admin": false, 00:29:20.466 "nvme_io": false, 00:29:20.466 "nvme_io_md": false, 00:29:20.466 "write_zeroes": true, 00:29:20.466 "zcopy": true, 00:29:20.466 "get_zone_info": false, 00:29:20.466 "zone_management": false, 00:29:20.466 "zone_append": false, 00:29:20.466 "compare": false, 00:29:20.466 
"compare_and_write": false, 00:29:20.466 "abort": true, 00:29:20.466 "seek_hole": false, 00:29:20.466 "seek_data": false, 00:29:20.466 "copy": true, 00:29:20.466 "nvme_iov_md": false 00:29:20.466 }, 00:29:20.466 "memory_domains": [ 00:29:20.466 { 00:29:20.466 "dma_device_id": "system", 00:29:20.466 "dma_device_type": 1 00:29:20.466 }, 00:29:20.466 { 00:29:20.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:20.466 "dma_device_type": 2 00:29:20.466 } 00:29:20.466 ], 00:29:20.466 "driver_specific": { 00:29:20.466 "passthru": { 00:29:20.466 "name": "pt4", 00:29:20.466 "base_bdev_name": "malloc4" 00:29:20.466 } 00:29:20.466 } 00:29:20.466 }' 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:20.466 15:23:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:29:20.725 [2024-07-23 15:23:16.004009] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:20.725 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' cde3bca2-5e18-47be-a50a-4d627b518789 '!=' cde3bca2-5e18-47be-a50a-4d627b518789 ']' 00:29:20.725 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:29:20.725 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:29:20.725 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:29:20.725 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:20.984 [2024-07-23 15:23:16.275955] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:29:20.984 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:20.984 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:20.984 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:20.984 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:20.984 15:23:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:20.984 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:20.984 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:20.984 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:20.984 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:20.984 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:20.984 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:20.984 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:21.243 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:21.243 "name": "raid_bdev1", 00:29:21.243 "uuid": "cde3bca2-5e18-47be-a50a-4d627b518789", 00:29:21.243 "strip_size_kb": 64, 00:29:21.243 "state": "online", 00:29:21.243 "raid_level": "raid5f", 00:29:21.243 "superblock": true, 00:29:21.243 "num_base_bdevs": 4, 00:29:21.243 "num_base_bdevs_discovered": 3, 00:29:21.243 "num_base_bdevs_operational": 3, 00:29:21.243 "base_bdevs_list": [ 00:29:21.243 { 00:29:21.243 "name": null, 00:29:21.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.243 "is_configured": false, 00:29:21.243 "data_offset": 2048, 00:29:21.243 "data_size": 63488 00:29:21.243 }, 00:29:21.243 { 00:29:21.243 "name": "pt2", 00:29:21.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:21.243 "is_configured": true, 00:29:21.243 "data_offset": 2048, 00:29:21.243 "data_size": 63488 00:29:21.243 }, 00:29:21.243 { 00:29:21.243 "name": "pt3", 00:29:21.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:21.243 "is_configured": true, 00:29:21.243 "data_offset": 2048, 00:29:21.243 "data_size": 63488 00:29:21.243 }, 00:29:21.243 { 00:29:21.243 "name": "pt4", 00:29:21.243 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:21.243 "is_configured": true, 00:29:21.243 "data_offset": 2048, 00:29:21.243 "data_size": 63488 00:29:21.243 } 00:29:21.243 ] 00:29:21.243 }' 00:29:21.243 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:21.243 15:23:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.501 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:21.760 [2024-07-23 15:23:16.964021] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:21.760 [2024-07-23 15:23:16.964255] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:21.760 [2024-07-23 15:23:16.964417] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:21.760 [2024-07-23 15:23:16.964524] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:21.760 [2024-07-23 15:23:16.964749] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:29:21.760 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:29:21.760 15:23:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.019 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:29:22.019 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:29:22.019 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:29:22.019 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:22.019 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:22.277 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:29:22.277 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:22.277 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:22.277 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:29:22.277 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:22.277 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:29:22.535 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:29:22.535 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:22.535 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:29:22.535 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:29:22.535 15:23:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:22.793 [2024-07-23 15:23:18.028205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:22.793 [2024-07-23 15:23:18.028290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:22.793 [2024-07-23 15:23:18.028312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:29:22.793 [2024-07-23 15:23:18.028328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:22.793 [2024-07-23 15:23:18.030732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:22.793 [2024-07-23 15:23:18.030912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:22.793 [2024-07-23 15:23:18.031001] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:22.793 [2024-07-23 15:23:18.031067] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:22.793 pt2 00:29:22.793 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:29:22.793 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:22.793 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:22.793 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid5f 00:29:22.793 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:22.793 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:22.793 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:22.793 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:22.793 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:22.793 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:22.793 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.793 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:23.052 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:23.052 "name": "raid_bdev1", 00:29:23.052 "uuid": "cde3bca2-5e18-47be-a50a-4d627b518789", 00:29:23.052 "strip_size_kb": 64, 00:29:23.052 "state": "configuring", 00:29:23.052 "raid_level": "raid5f", 00:29:23.052 "superblock": true, 00:29:23.052 "num_base_bdevs": 4, 00:29:23.052 "num_base_bdevs_discovered": 1, 00:29:23.052 "num_base_bdevs_operational": 3, 00:29:23.052 "base_bdevs_list": [ 00:29:23.052 { 00:29:23.052 "name": null, 00:29:23.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.052 "is_configured": false, 00:29:23.052 "data_offset": 2048, 00:29:23.052 "data_size": 63488 00:29:23.052 }, 00:29:23.052 { 00:29:23.052 "name": "pt2", 00:29:23.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:23.052 "is_configured": true, 00:29:23.052 "data_offset": 2048, 00:29:23.052 "data_size": 63488 00:29:23.052 }, 00:29:23.052 { 00:29:23.052 "name": null, 00:29:23.052 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:23.052 "is_configured": false, 00:29:23.052 "data_offset": 2048, 00:29:23.052 "data_size": 63488 00:29:23.052 }, 00:29:23.052 { 00:29:23.052 "name": null, 00:29:23.052 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:23.052 "is_configured": false, 00:29:23.052 "data_offset": 2048, 00:29:23.052 "data_size": 63488 00:29:23.052 } 00:29:23.052 ] 00:29:23.052 }' 00:29:23.052 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:23.052 15:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:23.310 [2024-07-23 15:23:18.700379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:23.310 [2024-07-23 15:23:18.700470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:23.310 [2024-07-23 15:23:18.700494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:29:23.310 [2024-07-23 15:23:18.700509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:23.310 [2024-07-23 
15:23:18.700944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:23.310 [2024-07-23 15:23:18.700971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:23.310 [2024-07-23 15:23:18.701042] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:23.310 [2024-07-23 15:23:18.701068] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:23.310 pt3 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.310 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:23.568 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:23.568 "name": "raid_bdev1", 00:29:23.568 "uuid": "cde3bca2-5e18-47be-a50a-4d627b518789", 00:29:23.568 "strip_size_kb": 64, 00:29:23.568 "state": "configuring", 00:29:23.568 "raid_level": "raid5f", 00:29:23.568 "superblock": true, 00:29:23.568 "num_base_bdevs": 4, 00:29:23.568 "num_base_bdevs_discovered": 2, 00:29:23.568 "num_base_bdevs_operational": 3, 00:29:23.568 "base_bdevs_list": [ 00:29:23.568 { 00:29:23.568 "name": null, 00:29:23.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.568 "is_configured": false, 00:29:23.568 "data_offset": 2048, 00:29:23.568 "data_size": 63488 00:29:23.568 }, 00:29:23.568 { 00:29:23.568 "name": "pt2", 00:29:23.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:23.568 "is_configured": true, 00:29:23.568 "data_offset": 2048, 00:29:23.568 "data_size": 63488 00:29:23.568 }, 00:29:23.568 { 00:29:23.568 "name": "pt3", 00:29:23.568 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:23.568 "is_configured": true, 00:29:23.568 "data_offset": 2048, 00:29:23.568 "data_size": 63488 00:29:23.568 }, 00:29:23.568 { 00:29:23.568 "name": null, 00:29:23.568 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:23.568 "is_configured": false, 00:29:23.568 "data_offset": 2048, 00:29:23.568 "data_size": 63488 00:29:23.568 } 00:29:23.568 ] 00:29:23.568 }' 00:29:23.568 15:23:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:23.568 15:23:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.826 
15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:29:23.826 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:29:23.826 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:29:24.085 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:24.085 [2024-07-23 15:23:19.416511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:24.085 [2024-07-23 15:23:19.416593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:24.085 [2024-07-23 15:23:19.416622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:29:24.085 [2024-07-23 15:23:19.416637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:24.085 [2024-07-23 15:23:19.417074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:24.085 [2024-07-23 15:23:19.417100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:24.085 [2024-07-23 15:23:19.417174] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:24.085 [2024-07-23 15:23:19.417201] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:24.085 [2024-07-23 15:23:19.417313] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:29:24.085 [2024-07-23 15:23:19.417325] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:24.085 [2024-07-23 15:23:19.417392] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:29:24.085 [2024-07-23 15:23:19.418143] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:29:24.085 [2024-07-23 15:23:19.418165] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:29:24.085 [2024-07-23 15:23:19.418381] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:24.085 pt4 00:29:24.085 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:24.085 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:24.085 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:24.085 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:24.085 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:24.085 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:24.085 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:24.085 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:24.085 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:24.085 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:24.085 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.085 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:24.344 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:24.344 "name": "raid_bdev1", 00:29:24.344 "uuid": "cde3bca2-5e18-47be-a50a-4d627b518789", 00:29:24.344 "strip_size_kb": 64, 00:29:24.344 "state": "online", 00:29:24.344 "raid_level": "raid5f", 00:29:24.344 "superblock": true, 00:29:24.344 "num_base_bdevs": 4, 00:29:24.344 "num_base_bdevs_discovered": 3, 00:29:24.344 "num_base_bdevs_operational": 3, 00:29:24.345 "base_bdevs_list": [ 00:29:24.345 { 00:29:24.345 "name": null, 00:29:24.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:24.345 "is_configured": false, 00:29:24.345 "data_offset": 2048, 00:29:24.345 "data_size": 63488 00:29:24.345 }, 00:29:24.345 { 00:29:24.345 "name": "pt2", 00:29:24.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:24.345 "is_configured": true, 00:29:24.345 "data_offset": 2048, 00:29:24.345 "data_size": 63488 00:29:24.345 }, 00:29:24.345 { 00:29:24.345 "name": "pt3", 00:29:24.345 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:24.345 "is_configured": true, 00:29:24.345 "data_offset": 2048, 00:29:24.345 "data_size": 63488 00:29:24.345 }, 00:29:24.345 { 00:29:24.345 "name": "pt4", 00:29:24.345 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:24.345 "is_configured": true, 00:29:24.345 "data_offset": 2048, 00:29:24.345 "data_size": 63488 00:29:24.345 } 00:29:24.345 ] 00:29:24.345 }' 00:29:24.345 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:24.345 15:23:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.604 15:23:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:24.862 [2024-07-23 15:23:20.036684] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:24.862 [2024-07-23 15:23:20.036739] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:24.862 [2024-07-23 15:23:20.036852] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:24.862 [2024-07-23 15:23:20.036947] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:24.862 [2024-07-23 15:23:20.036964] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:29:24.862 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:29:24.862 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.120 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:29:25.120 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:29:25.120 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:29:25.120 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:29:25.120 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:29:25.120 15:23:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:25.378 [2024-07-23 15:23:20.672780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:25.378 [2024-07-23 15:23:20.672875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:25.378 [2024-07-23 15:23:20.672902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:29:25.378 [2024-07-23 15:23:20.672914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:25.378 [2024-07-23 15:23:20.675746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:25.378 [2024-07-23 15:23:20.675924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:25.378 [2024-07-23 15:23:20.676100] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:25.378 [2024-07-23 15:23:20.676221] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:25.378 [2024-07-23 15:23:20.676406] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:25.378 [2024-07-23 15:23:20.676530] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:25.379 [2024-07-23 15:23:20.676632] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state configuring 00:29:25.379 [2024-07-23 15:23:20.676776] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:25.379 [2024-07-23 15:23:20.677013] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:25.379 pt1 00:29:25.379 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:29:25.379 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:29:25.379 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:25.379 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:25.379 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:25.379 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:25.379 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:25.379 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:25.379 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:25.379 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:25.379 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:25.379 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.379 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.637 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:25.637 "name": "raid_bdev1", 00:29:25.637 
"uuid": "cde3bca2-5e18-47be-a50a-4d627b518789", 00:29:25.637 "strip_size_kb": 64, 00:29:25.637 "state": "configuring", 00:29:25.637 "raid_level": "raid5f", 00:29:25.637 "superblock": true, 00:29:25.637 "num_base_bdevs": 4, 00:29:25.637 "num_base_bdevs_discovered": 2, 00:29:25.637 "num_base_bdevs_operational": 3, 00:29:25.637 "base_bdevs_list": [ 00:29:25.637 { 00:29:25.637 "name": null, 00:29:25.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:25.637 "is_configured": false, 00:29:25.637 "data_offset": 2048, 00:29:25.637 "data_size": 63488 00:29:25.637 }, 00:29:25.637 { 00:29:25.637 "name": "pt2", 00:29:25.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:25.637 "is_configured": true, 00:29:25.637 "data_offset": 2048, 00:29:25.637 "data_size": 63488 00:29:25.637 }, 00:29:25.637 { 00:29:25.637 "name": "pt3", 00:29:25.637 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:25.637 "is_configured": true, 00:29:25.637 "data_offset": 2048, 00:29:25.637 "data_size": 63488 00:29:25.637 }, 00:29:25.637 { 00:29:25.637 "name": null, 00:29:25.637 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:25.637 "is_configured": false, 00:29:25.637 "data_offset": 2048, 00:29:25.637 "data_size": 63488 00:29:25.637 } 00:29:25.637 ] 00:29:25.637 }' 00:29:25.637 15:23:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:25.637 15:23:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:25.895 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:29:25.895 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:25.895 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:29:25.896 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:26.154 [2024-07-23 15:23:21.561350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:26.154 [2024-07-23 15:23:21.561454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:26.154 [2024-07-23 15:23:21.561479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:29:26.154 [2024-07-23 15:23:21.561495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:26.154 [2024-07-23 15:23:21.561930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:26.154 [2024-07-23 15:23:21.561961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:26.154 [2024-07-23 15:23:21.562041] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:26.154 [2024-07-23 15:23:21.562075] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:26.154 [2024-07-23 15:23:21.562189] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:29:26.154 [2024-07-23 15:23:21.562205] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:26.154 [2024-07-23 15:23:21.562271] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002390 00:29:26.154 pt4 00:29:26.154 [2024-07-23 15:23:21.563086] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:29:26.154 [2024-07-23 15:23:21.563110] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:29:26.154 [2024-07-23 15:23:21.563298] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:26.154 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:26.154 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:26.154 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:26.154 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:26.154 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:26.154 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:26.154 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:26.154 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:26.154 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:26.154 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:26.413 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.413 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.413 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:26.413 "name": "raid_bdev1", 00:29:26.413 "uuid": "cde3bca2-5e18-47be-a50a-4d627b518789", 00:29:26.413 "strip_size_kb": 64, 00:29:26.413 "state": "online", 00:29:26.413 "raid_level": "raid5f", 00:29:26.413 "superblock": true, 00:29:26.413 "num_base_bdevs": 4, 00:29:26.413 "num_base_bdevs_discovered": 3, 00:29:26.413 "num_base_bdevs_operational": 3, 00:29:26.413 "base_bdevs_list": [ 00:29:26.413 { 00:29:26.413 "name": null, 00:29:26.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.413 "is_configured": false, 00:29:26.413 "data_offset": 2048, 00:29:26.413 "data_size": 63488 00:29:26.413 }, 00:29:26.413 { 00:29:26.413 "name": "pt2", 00:29:26.413 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:26.413 "is_configured": true, 00:29:26.413 "data_offset": 2048, 00:29:26.413 "data_size": 63488 00:29:26.413 }, 00:29:26.413 { 00:29:26.413 "name": "pt3", 00:29:26.413 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:26.413 "is_configured": true, 00:29:26.413 "data_offset": 2048, 00:29:26.413 "data_size": 63488 00:29:26.413 }, 00:29:26.413 { 00:29:26.413 "name": "pt4", 00:29:26.413 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:26.413 "is_configured": true, 00:29:26.413 "data_offset": 2048, 00:29:26.413 "data_size": 63488 00:29:26.413 } 00:29:26.413 ] 00:29:26.413 }' 00:29:26.413 15:23:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:26.413 15:23:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:26.670 15:23:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:29:26.670 15:23:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:26.928 15:23:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:29:26.928 15:23:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:26.928 15:23:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:29:27.187 [2024-07-23 15:23:22.453737] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:27.187 15:23:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' cde3bca2-5e18-47be-a50a-4d627b518789 '!=' cde3bca2-5e18-47be-a50a-4d627b518789 ']' 00:29:27.187 15:23:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 118054 00:29:27.187 15:23:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 118054 ']' 00:29:27.187 15:23:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # kill -0 118054 00:29:27.187 15:23:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # uname 00:29:27.187 15:23:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:27.187 15:23:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118054 00:29:27.187 15:23:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:27.187 killing process with pid 118054 00:29:27.187 15:23:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:27.187 15:23:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118054' 00:29:27.187 15:23:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@967 -- # kill 118054 00:29:27.187 [2024-07-23 15:23:22.516145] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:27.187 15:23:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # wait 118054 00:29:27.187 [2024-07-23 15:23:22.516254] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:27.187 [2024-07-23 15:23:22.516334] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:27.187 [2024-07-23 15:23:22.516347] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:29:27.187 [2024-07-23 15:23:22.563720] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:27.446 ************************************ 00:29:27.446 END TEST raid5f_superblock_test 00:29:27.446 ************************************ 00:29:27.446 15:23:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:29:27.446 00:29:27.446 real 0m18.161s 00:29:27.446 user 0m31.404s 00:29:27.446 sys 0m4.106s 00:29:27.446 15:23:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:27.446 15:23:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.446 15:23:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:27.446 15:23:22 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:29:27.446 15:23:22 bdev_raid -- bdev/bdev_raid.sh@890 -- # 
run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:29:27.446 15:23:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:27.446 15:23:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.446 15:23:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:27.446 ************************************ 00:29:27.446 START TEST raid5f_rebuild_test 00:29:27.446 ************************************ 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 4 false false true 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:27.446 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@585 -- # strip_size=64 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=118782 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 118782 /var/tmp/spdk-raid.sock 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 118782 ']' 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:27.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:27.705 15:23:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.705 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:27.705 Zero copy mechanism will not be used. 00:29:27.705 [2024-07-23 15:23:22.943760] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:29:27.705 [2024-07-23 15:23:22.944006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118782 ] 00:29:27.705 [2024-07-23 15:23:23.097267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.964 [2024-07-23 15:23:23.151852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.964 [2024-07-23 15:23:23.204953] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:28.530 15:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:28.530 15:23:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:29:28.530 15:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:28.530 15:23:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:28.788 BaseBdev1_malloc 00:29:28.788 15:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:28.788 [2024-07-23 15:23:24.211855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:28.788 [2024-07-23 15:23:24.211953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:28.788 [2024-07-23 15:23:24.211988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:29:28.788 [2024-07-23 15:23:24.212007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:28.788 [2024-07-23 15:23:24.214549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:28.788 [2024-07-23 15:23:24.214598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:28.788 BaseBdev1 00:29:29.047 15:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:29.047 15:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:29.047 BaseBdev2_malloc 00:29:29.047 15:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:29.343 [2024-07-23 15:23:24.577516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:29.343 [2024-07-23 15:23:24.577594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:29.343 [2024-07-23 15:23:24.577625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:29:29.343 [2024-07-23 15:23:24.577638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:29.343 [2024-07-23 15:23:24.580166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:29.343 [2024-07-23 15:23:24.580209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:29.343 BaseBdev2 00:29:29.343 15:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev 
in "${base_bdevs[@]}" 00:29:29.343 15:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:29.630 BaseBdev3_malloc 00:29:29.630 15:23:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:29.630 [2024-07-23 15:23:25.018062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:29.630 [2024-07-23 15:23:25.018139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:29.630 [2024-07-23 15:23:25.018171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:29:29.630 [2024-07-23 15:23:25.018183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:29.630 [2024-07-23 15:23:25.020643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:29.630 BaseBdev3 00:29:29.630 [2024-07-23 15:23:25.020856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:29.630 15:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:29.630 15:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:29.889 BaseBdev4_malloc 00:29:29.889 15:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:30.148 [2024-07-23 15:23:25.431749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:30.148 [2024-07-23 15:23:25.431845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:30.148 [2024-07-23 15:23:25.431880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:29:30.148 [2024-07-23 15:23:25.431893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:30.148 [2024-07-23 15:23:25.434435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:30.148 [2024-07-23 15:23:25.434481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:30.148 BaseBdev4 00:29:30.148 15:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:30.405 spare_malloc 00:29:30.405 15:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:30.405 spare_delay 00:29:30.405 15:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:30.663 [2024-07-23 15:23:25.957448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:30.663 [2024-07-23 15:23:25.957544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:30.663 [2024-07-23 15:23:25.957579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x516000009080 00:29:30.663 [2024-07-23 15:23:25.957591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:30.663 [2024-07-23 15:23:25.960086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:30.663 [2024-07-23 15:23:25.960125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:30.663 spare 00:29:30.663 15:23:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:30.921 [2024-07-23 15:23:26.201577] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:30.921 [2024-07-23 15:23:26.204127] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:30.921 [2024-07-23 15:23:26.204328] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:30.921 [2024-07-23 15:23:26.204409] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:30.921 [2024-07-23 15:23:26.204602] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:29:30.921 [2024-07-23 15:23:26.204647] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:29:30.921 [2024-07-23 15:23:26.204935] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:29:30.921 [2024-07-23 15:23:26.205700] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:29:30.921 [2024-07-23 15:23:26.205856] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:29:30.921 [2024-07-23 15:23:26.206191] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:30.921 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:30.921 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:30.921 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:30.921 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:30.921 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:30.921 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:30.921 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:30.921 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:30.921 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:30.921 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:30.921 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:30.921 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:31.179 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:31.179 "name": "raid_bdev1", 00:29:31.179 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:31.179 "strip_size_kb": 64, 
00:29:31.179 "state": "online", 00:29:31.179 "raid_level": "raid5f", 00:29:31.179 "superblock": false, 00:29:31.179 "num_base_bdevs": 4, 00:29:31.179 "num_base_bdevs_discovered": 4, 00:29:31.179 "num_base_bdevs_operational": 4, 00:29:31.179 "base_bdevs_list": [ 00:29:31.179 { 00:29:31.179 "name": "BaseBdev1", 00:29:31.179 "uuid": "e931624f-4666-5267-9de6-65620ce3483e", 00:29:31.179 "is_configured": true, 00:29:31.179 "data_offset": 0, 00:29:31.179 "data_size": 65536 00:29:31.179 }, 00:29:31.179 { 00:29:31.179 "name": "BaseBdev2", 00:29:31.179 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:31.179 "is_configured": true, 00:29:31.179 "data_offset": 0, 00:29:31.179 "data_size": 65536 00:29:31.179 }, 00:29:31.179 { 00:29:31.179 "name": "BaseBdev3", 00:29:31.179 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:31.179 "is_configured": true, 00:29:31.179 "data_offset": 0, 00:29:31.179 "data_size": 65536 00:29:31.179 }, 00:29:31.179 { 00:29:31.179 "name": "BaseBdev4", 00:29:31.179 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:31.179 "is_configured": true, 00:29:31.179 "data_offset": 0, 00:29:31.179 "data_size": 65536 00:29:31.179 } 00:29:31.179 ] 00:29:31.179 }' 00:29:31.179 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:31.179 15:23:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:31.437 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:31.437 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:31.695 [2024-07-23 15:23:26.978520] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:31.695 15:23:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=196608 00:29:31.695 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:31.695 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:32.002 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:29:32.002 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:29:32.002 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:29:32.002 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:29:32.002 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:32.002 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:32.002 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:32.002 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:32.002 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:32.002 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:32.002 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:32.002 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:32.002 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:29:32.002 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:32.260 [2024-07-23 15:23:27.502552] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002390 00:29:32.260 /dev/nbd0 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:32.260 1+0 records in 00:29:32.260 1+0 records out 00:29:32.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183752 s, 22.3 MB/s 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 192 00:29:32.260 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:29:32.826 512+0 records in 00:29:32.826 512+0 records out 00:29:32.826 100663296 bytes (101 MB, 96 MiB) copied, 0.442105 s, 228 MB/s 00:29:32.826 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:32.826 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:32.826 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:32.826 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:32.826 15:23:27 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:32.826 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:32.826 15:23:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:32.826 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:32.826 [2024-07-23 15:23:28.247609] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:32.826 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:32.826 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:32.826 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:32.826 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:32.826 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:33.084 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:33.084 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:33.084 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:33.084 [2024-07-23 15:23:28.499838] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:33.343 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:33.343 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:33.343 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:33.343 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:33.343 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:33.343 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:33.343 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:33.343 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:33.343 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:33.343 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:33.343 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:33.343 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.602 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:33.602 "name": "raid_bdev1", 00:29:33.602 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:33.602 "strip_size_kb": 64, 00:29:33.602 "state": "online", 00:29:33.602 "raid_level": "raid5f", 00:29:33.602 "superblock": false, 00:29:33.602 "num_base_bdevs": 4, 00:29:33.602 "num_base_bdevs_discovered": 3, 00:29:33.602 "num_base_bdevs_operational": 3, 00:29:33.602 "base_bdevs_list": [ 00:29:33.602 { 00:29:33.602 "name": null, 00:29:33.602 "uuid": "00000000-0000-0000-0000-000000000000", 
00:29:33.602 "is_configured": false, 00:29:33.602 "data_offset": 0, 00:29:33.602 "data_size": 65536 00:29:33.602 }, 00:29:33.602 { 00:29:33.602 "name": "BaseBdev2", 00:29:33.602 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:33.602 "is_configured": true, 00:29:33.602 "data_offset": 0, 00:29:33.602 "data_size": 65536 00:29:33.602 }, 00:29:33.602 { 00:29:33.602 "name": "BaseBdev3", 00:29:33.602 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:33.602 "is_configured": true, 00:29:33.602 "data_offset": 0, 00:29:33.602 "data_size": 65536 00:29:33.602 }, 00:29:33.602 { 00:29:33.602 "name": "BaseBdev4", 00:29:33.602 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:33.602 "is_configured": true, 00:29:33.602 "data_offset": 0, 00:29:33.602 "data_size": 65536 00:29:33.602 } 00:29:33.602 ] 00:29:33.602 }' 00:29:33.602 15:23:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:33.602 15:23:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.860 15:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:34.118 [2024-07-23 15:23:29.300051] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:34.118 [2024-07-23 15:23:29.303711] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000027990 00:29:34.118 [2024-07-23 15:23:29.306387] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:34.118 15:23:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:35.053 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:35.053 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:35.053 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:35.053 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:35.053 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:35.053 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.053 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.311 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:35.311 "name": "raid_bdev1", 00:29:35.311 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:35.311 "strip_size_kb": 64, 00:29:35.311 "state": "online", 00:29:35.311 "raid_level": "raid5f", 00:29:35.311 "superblock": false, 00:29:35.311 "num_base_bdevs": 4, 00:29:35.311 "num_base_bdevs_discovered": 4, 00:29:35.311 "num_base_bdevs_operational": 4, 00:29:35.311 "process": { 00:29:35.311 "type": "rebuild", 00:29:35.311 "target": "spare", 00:29:35.311 "progress": { 00:29:35.311 "blocks": 23040, 00:29:35.311 "percent": 11 00:29:35.311 } 00:29:35.311 }, 00:29:35.311 "base_bdevs_list": [ 00:29:35.311 { 00:29:35.311 "name": "spare", 00:29:35.311 "uuid": "b8626691-d40d-50dc-b899-0d55a48bad5c", 00:29:35.311 "is_configured": true, 00:29:35.311 "data_offset": 0, 00:29:35.311 "data_size": 65536 00:29:35.311 }, 00:29:35.311 { 00:29:35.311 "name": "BaseBdev2", 00:29:35.311 "uuid": 
"a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:35.311 "is_configured": true, 00:29:35.311 "data_offset": 0, 00:29:35.311 "data_size": 65536 00:29:35.311 }, 00:29:35.311 { 00:29:35.311 "name": "BaseBdev3", 00:29:35.311 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:35.311 "is_configured": true, 00:29:35.311 "data_offset": 0, 00:29:35.311 "data_size": 65536 00:29:35.311 }, 00:29:35.311 { 00:29:35.311 "name": "BaseBdev4", 00:29:35.311 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:35.311 "is_configured": true, 00:29:35.311 "data_offset": 0, 00:29:35.311 "data_size": 65536 00:29:35.311 } 00:29:35.311 ] 00:29:35.311 }' 00:29:35.311 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:35.311 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:35.311 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:35.311 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:35.311 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:35.570 [2024-07-23 15:23:30.791996] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:35.570 [2024-07-23 15:23:30.818500] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:35.570 [2024-07-23 15:23:30.818581] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:35.570 [2024-07-23 15:23:30.818604] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:35.570 [2024-07-23 15:23:30.818621] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:35.570 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:35.570 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:35.570 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:35.570 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:35.570 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:35.570 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:35.570 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:35.570 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:35.570 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:35.570 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:35.570 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.570 15:23:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.828 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:35.828 "name": "raid_bdev1", 00:29:35.828 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:35.828 "strip_size_kb": 64, 00:29:35.828 "state": 
"online", 00:29:35.828 "raid_level": "raid5f", 00:29:35.828 "superblock": false, 00:29:35.828 "num_base_bdevs": 4, 00:29:35.828 "num_base_bdevs_discovered": 3, 00:29:35.828 "num_base_bdevs_operational": 3, 00:29:35.828 "base_bdevs_list": [ 00:29:35.828 { 00:29:35.828 "name": null, 00:29:35.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:35.828 "is_configured": false, 00:29:35.828 "data_offset": 0, 00:29:35.828 "data_size": 65536 00:29:35.828 }, 00:29:35.828 { 00:29:35.828 "name": "BaseBdev2", 00:29:35.828 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:35.828 "is_configured": true, 00:29:35.828 "data_offset": 0, 00:29:35.828 "data_size": 65536 00:29:35.828 }, 00:29:35.828 { 00:29:35.828 "name": "BaseBdev3", 00:29:35.828 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:35.828 "is_configured": true, 00:29:35.828 "data_offset": 0, 00:29:35.828 "data_size": 65536 00:29:35.828 }, 00:29:35.828 { 00:29:35.828 "name": "BaseBdev4", 00:29:35.828 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:35.828 "is_configured": true, 00:29:35.828 "data_offset": 0, 00:29:35.828 "data_size": 65536 00:29:35.828 } 00:29:35.828 ] 00:29:35.828 }' 00:29:35.828 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:35.828 15:23:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.086 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:36.086 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:36.086 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:36.086 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:36.086 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:36.086 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.086 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.345 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:36.345 "name": "raid_bdev1", 00:29:36.345 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:36.345 "strip_size_kb": 64, 00:29:36.345 "state": "online", 00:29:36.345 "raid_level": "raid5f", 00:29:36.345 "superblock": false, 00:29:36.345 "num_base_bdevs": 4, 00:29:36.345 "num_base_bdevs_discovered": 3, 00:29:36.345 "num_base_bdevs_operational": 3, 00:29:36.345 "base_bdevs_list": [ 00:29:36.345 { 00:29:36.345 "name": null, 00:29:36.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:36.345 "is_configured": false, 00:29:36.345 "data_offset": 0, 00:29:36.345 "data_size": 65536 00:29:36.345 }, 00:29:36.345 { 00:29:36.345 "name": "BaseBdev2", 00:29:36.345 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:36.345 "is_configured": true, 00:29:36.345 "data_offset": 0, 00:29:36.345 "data_size": 65536 00:29:36.345 }, 00:29:36.345 { 00:29:36.345 "name": "BaseBdev3", 00:29:36.345 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:36.345 "is_configured": true, 00:29:36.345 "data_offset": 0, 00:29:36.345 "data_size": 65536 00:29:36.345 }, 00:29:36.345 { 00:29:36.345 "name": "BaseBdev4", 00:29:36.345 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:36.345 "is_configured": true, 00:29:36.345 "data_offset": 0, 
00:29:36.345 "data_size": 65536 00:29:36.345 } 00:29:36.345 ] 00:29:36.345 }' 00:29:36.345 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:36.345 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:36.345 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:36.345 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:36.345 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:36.345 [2024-07-23 15:23:31.729151] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:36.345 [2024-07-23 15:23:31.732941] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000027a60 00:29:36.345 [2024-07-23 15:23:31.735414] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:36.345 15:23:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:37.721 15:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:37.721 15:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:37.721 15:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:37.721 15:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:37.721 15:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:37.721 15:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.721 15:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.721 15:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:37.721 "name": "raid_bdev1", 00:29:37.721 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:37.721 "strip_size_kb": 64, 00:29:37.721 "state": "online", 00:29:37.721 "raid_level": "raid5f", 00:29:37.721 "superblock": false, 00:29:37.721 "num_base_bdevs": 4, 00:29:37.721 "num_base_bdevs_discovered": 4, 00:29:37.721 "num_base_bdevs_operational": 4, 00:29:37.721 "process": { 00:29:37.721 "type": "rebuild", 00:29:37.721 "target": "spare", 00:29:37.721 "progress": { 00:29:37.721 "blocks": 23040, 00:29:37.721 "percent": 11 00:29:37.721 } 00:29:37.721 }, 00:29:37.721 "base_bdevs_list": [ 00:29:37.721 { 00:29:37.721 "name": "spare", 00:29:37.721 "uuid": "b8626691-d40d-50dc-b899-0d55a48bad5c", 00:29:37.721 "is_configured": true, 00:29:37.721 "data_offset": 0, 00:29:37.721 "data_size": 65536 00:29:37.721 }, 00:29:37.721 { 00:29:37.721 "name": "BaseBdev2", 00:29:37.721 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:37.721 "is_configured": true, 00:29:37.721 "data_offset": 0, 00:29:37.721 "data_size": 65536 00:29:37.721 }, 00:29:37.721 { 00:29:37.721 "name": "BaseBdev3", 00:29:37.721 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:37.721 "is_configured": true, 00:29:37.721 "data_offset": 0, 00:29:37.721 "data_size": 65536 00:29:37.721 }, 00:29:37.721 { 00:29:37.721 "name": "BaseBdev4", 00:29:37.721 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:37.721 
"is_configured": true, 00:29:37.721 "data_offset": 0, 00:29:37.721 "data_size": 65536 00:29:37.721 } 00:29:37.721 ] 00:29:37.721 }' 00:29:37.721 15:23:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=933 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.721 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.980 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:37.980 "name": "raid_bdev1", 00:29:37.980 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:37.980 "strip_size_kb": 64, 00:29:37.980 "state": "online", 00:29:37.980 "raid_level": "raid5f", 00:29:37.980 "superblock": false, 00:29:37.980 "num_base_bdevs": 4, 00:29:37.980 "num_base_bdevs_discovered": 4, 00:29:37.980 "num_base_bdevs_operational": 4, 00:29:37.980 "process": { 00:29:37.980 "type": "rebuild", 00:29:37.980 "target": "spare", 00:29:37.980 "progress": { 00:29:37.980 "blocks": 26880, 00:29:37.980 "percent": 13 00:29:37.980 } 00:29:37.980 }, 00:29:37.980 "base_bdevs_list": [ 00:29:37.980 { 00:29:37.980 "name": "spare", 00:29:37.980 "uuid": "b8626691-d40d-50dc-b899-0d55a48bad5c", 00:29:37.980 "is_configured": true, 00:29:37.980 "data_offset": 0, 00:29:37.980 "data_size": 65536 00:29:37.980 }, 00:29:37.980 { 00:29:37.980 "name": "BaseBdev2", 00:29:37.980 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:37.980 "is_configured": true, 00:29:37.980 "data_offset": 0, 00:29:37.980 "data_size": 65536 00:29:37.980 }, 00:29:37.980 { 00:29:37.980 "name": "BaseBdev3", 00:29:37.980 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:37.980 "is_configured": true, 00:29:37.980 "data_offset": 0, 00:29:37.980 "data_size": 65536 00:29:37.980 }, 00:29:37.980 { 00:29:37.980 "name": "BaseBdev4", 00:29:37.980 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:37.980 "is_configured": true, 00:29:37.980 "data_offset": 0, 00:29:37.980 "data_size": 65536 
00:29:37.980 } 00:29:37.980 ] 00:29:37.980 }' 00:29:37.980 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:37.980 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:37.980 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:37.980 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:37.980 15:23:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:38.956 15:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:38.956 15:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:38.956 15:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:38.956 15:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:38.956 15:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:38.956 15:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:38.956 15:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.956 15:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.225 15:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:39.225 "name": "raid_bdev1", 00:29:39.225 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:39.225 "strip_size_kb": 64, 00:29:39.225 "state": "online", 00:29:39.225 "raid_level": "raid5f", 00:29:39.225 "superblock": false, 00:29:39.225 "num_base_bdevs": 4, 00:29:39.225 "num_base_bdevs_discovered": 4, 00:29:39.225 "num_base_bdevs_operational": 4, 00:29:39.225 "process": { 00:29:39.225 "type": "rebuild", 00:29:39.225 "target": "spare", 00:29:39.225 "progress": { 00:29:39.225 "blocks": 51840, 00:29:39.225 "percent": 26 00:29:39.225 } 00:29:39.225 }, 00:29:39.225 "base_bdevs_list": [ 00:29:39.225 { 00:29:39.225 "name": "spare", 00:29:39.225 "uuid": "b8626691-d40d-50dc-b899-0d55a48bad5c", 00:29:39.225 "is_configured": true, 00:29:39.225 "data_offset": 0, 00:29:39.225 "data_size": 65536 00:29:39.225 }, 00:29:39.225 { 00:29:39.225 "name": "BaseBdev2", 00:29:39.225 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:39.225 "is_configured": true, 00:29:39.225 "data_offset": 0, 00:29:39.225 "data_size": 65536 00:29:39.225 }, 00:29:39.225 { 00:29:39.225 "name": "BaseBdev3", 00:29:39.225 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:39.225 "is_configured": true, 00:29:39.225 "data_offset": 0, 00:29:39.225 "data_size": 65536 00:29:39.225 }, 00:29:39.225 { 00:29:39.225 "name": "BaseBdev4", 00:29:39.225 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:39.225 "is_configured": true, 00:29:39.225 "data_offset": 0, 00:29:39.225 "data_size": 65536 00:29:39.225 } 00:29:39.225 ] 00:29:39.225 }' 00:29:39.225 15:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:39.225 15:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:39.225 15:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:39.225 15:23:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:39.225 15:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:40.160 15:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:40.160 15:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:40.160 15:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:40.160 15:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:40.160 15:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:40.160 15:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:40.160 15:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.160 15:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.418 15:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:40.418 "name": "raid_bdev1", 00:29:40.418 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:40.418 "strip_size_kb": 64, 00:29:40.418 "state": "online", 00:29:40.418 "raid_level": "raid5f", 00:29:40.418 "superblock": false, 00:29:40.418 "num_base_bdevs": 4, 00:29:40.418 "num_base_bdevs_discovered": 4, 00:29:40.418 "num_base_bdevs_operational": 4, 00:29:40.418 "process": { 00:29:40.418 "type": "rebuild", 00:29:40.418 "target": "spare", 00:29:40.418 "progress": { 00:29:40.418 "blocks": 74880, 00:29:40.418 "percent": 38 00:29:40.418 } 00:29:40.418 }, 00:29:40.418 "base_bdevs_list": [ 00:29:40.418 { 00:29:40.418 "name": "spare", 00:29:40.418 "uuid": "b8626691-d40d-50dc-b899-0d55a48bad5c", 00:29:40.418 "is_configured": true, 00:29:40.418 "data_offset": 0, 00:29:40.418 "data_size": 65536 00:29:40.418 }, 00:29:40.418 { 00:29:40.418 "name": "BaseBdev2", 00:29:40.418 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:40.418 "is_configured": true, 00:29:40.418 "data_offset": 0, 00:29:40.418 "data_size": 65536 00:29:40.418 }, 00:29:40.418 { 00:29:40.418 "name": "BaseBdev3", 00:29:40.418 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:40.418 "is_configured": true, 00:29:40.418 "data_offset": 0, 00:29:40.418 "data_size": 65536 00:29:40.418 }, 00:29:40.418 { 00:29:40.418 "name": "BaseBdev4", 00:29:40.418 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:40.418 "is_configured": true, 00:29:40.418 "data_offset": 0, 00:29:40.418 "data_size": 65536 00:29:40.418 } 00:29:40.418 ] 00:29:40.418 }' 00:29:40.418 15:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:40.418 15:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:40.418 15:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:40.419 15:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:40.419 15:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:41.794 15:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:41.794 15:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:29:41.794 15:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:41.794 15:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:41.794 15:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:41.794 15:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:41.794 15:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.794 15:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.794 15:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:41.794 "name": "raid_bdev1", 00:29:41.794 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:41.794 "strip_size_kb": 64, 00:29:41.794 "state": "online", 00:29:41.794 "raid_level": "raid5f", 00:29:41.794 "superblock": false, 00:29:41.794 "num_base_bdevs": 4, 00:29:41.794 "num_base_bdevs_discovered": 4, 00:29:41.794 "num_base_bdevs_operational": 4, 00:29:41.794 "process": { 00:29:41.794 "type": "rebuild", 00:29:41.794 "target": "spare", 00:29:41.794 "progress": { 00:29:41.794 "blocks": 97920, 00:29:41.794 "percent": 49 00:29:41.794 } 00:29:41.794 }, 00:29:41.794 "base_bdevs_list": [ 00:29:41.794 { 00:29:41.794 "name": "spare", 00:29:41.794 "uuid": "b8626691-d40d-50dc-b899-0d55a48bad5c", 00:29:41.794 "is_configured": true, 00:29:41.794 "data_offset": 0, 00:29:41.794 "data_size": 65536 00:29:41.794 }, 00:29:41.794 { 00:29:41.794 "name": "BaseBdev2", 00:29:41.794 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:41.794 "is_configured": true, 00:29:41.794 "data_offset": 0, 00:29:41.794 "data_size": 65536 00:29:41.794 }, 00:29:41.794 { 00:29:41.794 "name": "BaseBdev3", 00:29:41.794 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:41.794 "is_configured": true, 00:29:41.794 "data_offset": 0, 00:29:41.794 "data_size": 65536 00:29:41.794 }, 00:29:41.794 { 00:29:41.794 "name": "BaseBdev4", 00:29:41.794 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:41.794 "is_configured": true, 00:29:41.794 "data_offset": 0, 00:29:41.794 "data_size": 65536 00:29:41.794 } 00:29:41.794 ] 00:29:41.794 }' 00:29:41.794 15:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:41.794 15:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:41.794 15:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:41.794 15:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:41.794 15:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:42.729 15:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:42.729 15:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:42.729 15:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:42.729 15:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:42.729 15:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:42.729 15:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local 
raid_bdev_info 00:29:42.729 15:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.729 15:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.988 15:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:42.988 "name": "raid_bdev1", 00:29:42.988 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:42.988 "strip_size_kb": 64, 00:29:42.988 "state": "online", 00:29:42.988 "raid_level": "raid5f", 00:29:42.988 "superblock": false, 00:29:42.988 "num_base_bdevs": 4, 00:29:42.988 "num_base_bdevs_discovered": 4, 00:29:42.988 "num_base_bdevs_operational": 4, 00:29:42.988 "process": { 00:29:42.988 "type": "rebuild", 00:29:42.988 "target": "spare", 00:29:42.988 "progress": { 00:29:42.988 "blocks": 122880, 00:29:42.988 "percent": 62 00:29:42.988 } 00:29:42.988 }, 00:29:42.988 "base_bdevs_list": [ 00:29:42.988 { 00:29:42.988 "name": "spare", 00:29:42.988 "uuid": "b8626691-d40d-50dc-b899-0d55a48bad5c", 00:29:42.988 "is_configured": true, 00:29:42.988 "data_offset": 0, 00:29:42.988 "data_size": 65536 00:29:42.988 }, 00:29:42.988 { 00:29:42.988 "name": "BaseBdev2", 00:29:42.988 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:42.988 "is_configured": true, 00:29:42.988 "data_offset": 0, 00:29:42.988 "data_size": 65536 00:29:42.988 }, 00:29:42.988 { 00:29:42.988 "name": "BaseBdev3", 00:29:42.988 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:42.988 "is_configured": true, 00:29:42.988 "data_offset": 0, 00:29:42.988 "data_size": 65536 00:29:42.988 }, 00:29:42.988 { 00:29:42.988 "name": "BaseBdev4", 00:29:42.988 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:42.988 "is_configured": true, 00:29:42.988 "data_offset": 0, 00:29:42.988 "data_size": 65536 00:29:42.988 } 00:29:42.988 ] 00:29:42.988 }' 00:29:42.988 15:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:42.988 15:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:42.988 15:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:42.988 15:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:42.988 15:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:43.922 15:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:43.922 15:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:43.922 15:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:43.922 15:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:43.922 15:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:43.922 15:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:43.922 15:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.922 15:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.181 15:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:29:44.181 "name": "raid_bdev1", 00:29:44.181 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:44.181 "strip_size_kb": 64, 00:29:44.181 "state": "online", 00:29:44.181 "raid_level": "raid5f", 00:29:44.181 "superblock": false, 00:29:44.181 "num_base_bdevs": 4, 00:29:44.181 "num_base_bdevs_discovered": 4, 00:29:44.181 "num_base_bdevs_operational": 4, 00:29:44.181 "process": { 00:29:44.181 "type": "rebuild", 00:29:44.181 "target": "spare", 00:29:44.181 "progress": { 00:29:44.181 "blocks": 145920, 00:29:44.181 "percent": 74 00:29:44.181 } 00:29:44.181 }, 00:29:44.181 "base_bdevs_list": [ 00:29:44.181 { 00:29:44.181 "name": "spare", 00:29:44.181 "uuid": "b8626691-d40d-50dc-b899-0d55a48bad5c", 00:29:44.181 "is_configured": true, 00:29:44.181 "data_offset": 0, 00:29:44.181 "data_size": 65536 00:29:44.181 }, 00:29:44.181 { 00:29:44.181 "name": "BaseBdev2", 00:29:44.181 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:44.181 "is_configured": true, 00:29:44.181 "data_offset": 0, 00:29:44.181 "data_size": 65536 00:29:44.181 }, 00:29:44.181 { 00:29:44.181 "name": "BaseBdev3", 00:29:44.181 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:44.181 "is_configured": true, 00:29:44.181 "data_offset": 0, 00:29:44.181 "data_size": 65536 00:29:44.181 }, 00:29:44.181 { 00:29:44.181 "name": "BaseBdev4", 00:29:44.181 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:44.181 "is_configured": true, 00:29:44.181 "data_offset": 0, 00:29:44.181 "data_size": 65536 00:29:44.181 } 00:29:44.181 ] 00:29:44.181 }' 00:29:44.181 15:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:44.181 15:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:44.181 15:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:44.181 15:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:44.181 15:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:45.117 15:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:45.117 15:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:45.117 15:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:45.117 15:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:45.117 15:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:45.117 15:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:45.117 15:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.117 15:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.375 15:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:45.375 "name": "raid_bdev1", 00:29:45.375 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:45.375 "strip_size_kb": 64, 00:29:45.375 "state": "online", 00:29:45.375 "raid_level": "raid5f", 00:29:45.375 "superblock": false, 00:29:45.375 "num_base_bdevs": 4, 00:29:45.375 "num_base_bdevs_discovered": 4, 00:29:45.375 "num_base_bdevs_operational": 4, 00:29:45.375 "process": { 00:29:45.375 "type": "rebuild", 
00:29:45.375 "target": "spare", 00:29:45.375 "progress": { 00:29:45.375 "blocks": 168960, 00:29:45.375 "percent": 85 00:29:45.375 } 00:29:45.375 }, 00:29:45.375 "base_bdevs_list": [ 00:29:45.375 { 00:29:45.375 "name": "spare", 00:29:45.375 "uuid": "b8626691-d40d-50dc-b899-0d55a48bad5c", 00:29:45.375 "is_configured": true, 00:29:45.375 "data_offset": 0, 00:29:45.375 "data_size": 65536 00:29:45.375 }, 00:29:45.375 { 00:29:45.375 "name": "BaseBdev2", 00:29:45.375 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:45.375 "is_configured": true, 00:29:45.375 "data_offset": 0, 00:29:45.375 "data_size": 65536 00:29:45.375 }, 00:29:45.375 { 00:29:45.375 "name": "BaseBdev3", 00:29:45.375 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:45.375 "is_configured": true, 00:29:45.375 "data_offset": 0, 00:29:45.375 "data_size": 65536 00:29:45.375 }, 00:29:45.375 { 00:29:45.375 "name": "BaseBdev4", 00:29:45.375 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:45.375 "is_configured": true, 00:29:45.375 "data_offset": 0, 00:29:45.375 "data_size": 65536 00:29:45.375 } 00:29:45.375 ] 00:29:45.375 }' 00:29:45.375 15:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:45.375 15:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:45.375 15:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:45.375 15:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:45.375 15:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:46.751 15:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:46.751 15:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:46.751 15:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:46.751 15:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:46.751 15:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:46.751 15:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:46.751 15:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:46.751 15:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.751 15:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:46.751 "name": "raid_bdev1", 00:29:46.751 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:46.751 "strip_size_kb": 64, 00:29:46.751 "state": "online", 00:29:46.751 "raid_level": "raid5f", 00:29:46.751 "superblock": false, 00:29:46.751 "num_base_bdevs": 4, 00:29:46.751 "num_base_bdevs_discovered": 4, 00:29:46.751 "num_base_bdevs_operational": 4, 00:29:46.751 "process": { 00:29:46.751 "type": "rebuild", 00:29:46.751 "target": "spare", 00:29:46.751 "progress": { 00:29:46.751 "blocks": 193920, 00:29:46.751 "percent": 98 00:29:46.751 } 00:29:46.751 }, 00:29:46.751 "base_bdevs_list": [ 00:29:46.751 { 00:29:46.751 "name": "spare", 00:29:46.751 "uuid": "b8626691-d40d-50dc-b899-0d55a48bad5c", 00:29:46.751 "is_configured": true, 00:29:46.751 "data_offset": 0, 00:29:46.751 "data_size": 65536 00:29:46.751 }, 00:29:46.751 { 
00:29:46.751 "name": "BaseBdev2", 00:29:46.751 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:46.751 "is_configured": true, 00:29:46.751 "data_offset": 0, 00:29:46.751 "data_size": 65536 00:29:46.751 }, 00:29:46.751 { 00:29:46.751 "name": "BaseBdev3", 00:29:46.751 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:46.751 "is_configured": true, 00:29:46.751 "data_offset": 0, 00:29:46.751 "data_size": 65536 00:29:46.751 }, 00:29:46.751 { 00:29:46.751 "name": "BaseBdev4", 00:29:46.751 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:46.751 "is_configured": true, 00:29:46.751 "data_offset": 0, 00:29:46.751 "data_size": 65536 00:29:46.751 } 00:29:46.751 ] 00:29:46.751 }' 00:29:46.751 15:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:46.751 15:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:46.751 15:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:46.751 15:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:46.751 15:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:46.751 [2024-07-23 15:23:42.112209] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:46.751 [2024-07-23 15:23:42.112290] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:46.751 [2024-07-23 15:23:42.112357] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:47.686 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:47.686 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:47.686 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:47.686 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:47.686 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:47.686 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:47.686 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.686 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.945 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:47.945 "name": "raid_bdev1", 00:29:47.945 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:47.945 "strip_size_kb": 64, 00:29:47.945 "state": "online", 00:29:47.945 "raid_level": "raid5f", 00:29:47.945 "superblock": false, 00:29:47.945 "num_base_bdevs": 4, 00:29:47.945 "num_base_bdevs_discovered": 4, 00:29:47.945 "num_base_bdevs_operational": 4, 00:29:47.945 "base_bdevs_list": [ 00:29:47.945 { 00:29:47.945 "name": "spare", 00:29:47.945 "uuid": "b8626691-d40d-50dc-b899-0d55a48bad5c", 00:29:47.945 "is_configured": true, 00:29:47.945 "data_offset": 0, 00:29:47.945 "data_size": 65536 00:29:47.945 }, 00:29:47.945 { 00:29:47.945 "name": "BaseBdev2", 00:29:47.945 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:47.945 "is_configured": true, 00:29:47.945 "data_offset": 0, 00:29:47.945 "data_size": 65536 00:29:47.945 }, 00:29:47.945 { 00:29:47.945 
"name": "BaseBdev3", 00:29:47.945 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:47.945 "is_configured": true, 00:29:47.945 "data_offset": 0, 00:29:47.945 "data_size": 65536 00:29:47.945 }, 00:29:47.945 { 00:29:47.945 "name": "BaseBdev4", 00:29:47.945 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:47.945 "is_configured": true, 00:29:47.945 "data_offset": 0, 00:29:47.945 "data_size": 65536 00:29:47.945 } 00:29:47.945 ] 00:29:47.945 }' 00:29:47.945 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:47.945 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:47.945 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:47.945 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:47.945 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:29:47.945 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:47.945 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:47.945 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:47.945 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:47.945 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:47.945 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.945 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:48.204 "name": "raid_bdev1", 00:29:48.204 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:48.204 "strip_size_kb": 64, 00:29:48.204 "state": "online", 00:29:48.204 "raid_level": "raid5f", 00:29:48.204 "superblock": false, 00:29:48.204 "num_base_bdevs": 4, 00:29:48.204 "num_base_bdevs_discovered": 4, 00:29:48.204 "num_base_bdevs_operational": 4, 00:29:48.204 "base_bdevs_list": [ 00:29:48.204 { 00:29:48.204 "name": "spare", 00:29:48.204 "uuid": "b8626691-d40d-50dc-b899-0d55a48bad5c", 00:29:48.204 "is_configured": true, 00:29:48.204 "data_offset": 0, 00:29:48.204 "data_size": 65536 00:29:48.204 }, 00:29:48.204 { 00:29:48.204 "name": "BaseBdev2", 00:29:48.204 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:48.204 "is_configured": true, 00:29:48.204 "data_offset": 0, 00:29:48.204 "data_size": 65536 00:29:48.204 }, 00:29:48.204 { 00:29:48.204 "name": "BaseBdev3", 00:29:48.204 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:48.204 "is_configured": true, 00:29:48.204 "data_offset": 0, 00:29:48.204 "data_size": 65536 00:29:48.204 }, 00:29:48.204 { 00:29:48.204 "name": "BaseBdev4", 00:29:48.204 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:48.204 "is_configured": true, 00:29:48.204 "data_offset": 0, 00:29:48.204 "data_size": 65536 00:29:48.204 } 00:29:48.204 ] 00:29:48.204 }' 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.204 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.462 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:48.462 "name": "raid_bdev1", 00:29:48.462 "uuid": "0d469284-6754-4404-84cc-255730826925", 00:29:48.462 "strip_size_kb": 64, 00:29:48.462 "state": "online", 00:29:48.462 "raid_level": "raid5f", 00:29:48.462 "superblock": false, 00:29:48.462 "num_base_bdevs": 4, 00:29:48.462 "num_base_bdevs_discovered": 4, 00:29:48.462 "num_base_bdevs_operational": 4, 00:29:48.462 "base_bdevs_list": [ 00:29:48.462 { 00:29:48.462 "name": "spare", 00:29:48.462 "uuid": "b8626691-d40d-50dc-b899-0d55a48bad5c", 00:29:48.462 "is_configured": true, 00:29:48.462 "data_offset": 0, 00:29:48.462 "data_size": 65536 00:29:48.462 }, 00:29:48.462 { 00:29:48.462 "name": "BaseBdev2", 00:29:48.462 "uuid": "a5fe38c8-91de-5083-97f4-7d905408ee3d", 00:29:48.462 "is_configured": true, 00:29:48.462 "data_offset": 0, 00:29:48.462 "data_size": 65536 00:29:48.462 }, 00:29:48.462 { 00:29:48.462 "name": "BaseBdev3", 00:29:48.462 "uuid": "81eb37d2-aae1-52d1-821f-956afd98f3cb", 00:29:48.462 "is_configured": true, 00:29:48.462 "data_offset": 0, 00:29:48.462 "data_size": 65536 00:29:48.462 }, 00:29:48.462 { 00:29:48.462 "name": "BaseBdev4", 00:29:48.462 "uuid": "6d11d3bf-bf39-535c-825f-d77031f502f8", 00:29:48.462 "is_configured": true, 00:29:48.462 "data_offset": 0, 00:29:48.462 "data_size": 65536 00:29:48.462 } 00:29:48.462 ] 00:29:48.462 }' 00:29:48.462 15:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:48.462 15:23:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.042 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:49.042 [2024-07-23 15:23:44.427027] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:49.042 [2024-07-23 15:23:44.427272] bdev_raid.c:1870:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:29:49.042 [2024-07-23 15:23:44.427384] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:49.042 [2024-07-23 15:23:44.427481] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:49.042 [2024-07-23 15:23:44.427505] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:29:49.042 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.042 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:29:49.317 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:49.317 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:49.317 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:29:49.317 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:49.317 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:49.317 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:49.317 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:49.317 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:49.317 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:49.317 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:49.317 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:49.317 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:49.317 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:49.575 /dev/nbd0 00:29:49.575 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:49.575 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:49.575 15:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:49.576 1+0 records in 00:29:49.576 1+0 records out 00:29:49.576 4096 bytes (4.1 
kB, 4.0 KiB) copied, 0.000202061 s, 20.3 MB/s 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:49.576 15:23:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:49.834 /dev/nbd1 00:29:49.834 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:49.834 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:49.835 1+0 records in 00:29:49.835 1+0 records out 00:29:49.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334565 s, 12.2 MB/s 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:49.835 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:50.093 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:50.093 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:29:50.093 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:50.093 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:50.093 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:50.093 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.093 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:50.093 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:50.093 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:50.093 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:50.093 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.093 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.093 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:50.094 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:50.094 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.094 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.094 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 118782 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 118782 ']' 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 118782 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118782 00:29:50.352 killing process with pid 118782 00:29:50.352 Received shutdown signal, test time was about 60.000000 seconds 00:29:50.352 00:29:50.352 Latency(us) 00:29:50.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.352 
=================================================================================================================== 00:29:50.352 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118782' 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@967 -- # kill 118782 00:29:50.352 [2024-07-23 15:23:45.683685] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:50.352 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # wait 118782 00:29:50.352 [2024-07-23 15:23:45.734846] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:50.611 ************************************ 00:29:50.611 END TEST raid5f_rebuild_test 00:29:50.611 ************************************ 00:29:50.611 15:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:29:50.611 00:29:50.611 real 0m23.102s 00:29:50.611 user 0m31.445s 00:29:50.611 sys 0m3.529s 00:29:50.611 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:50.611 15:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.611 15:23:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:50.611 15:23:46 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:29:50.611 15:23:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:50.611 15:23:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.611 15:23:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:50.611 ************************************ 00:29:50.611 START TEST raid5f_rebuild_test_sb 00:29:50.611 ************************************ 00:29:50.611 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 4 true false true 00:29:50.611 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:29:50.611 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:50.611 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:29:50.611 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:29:50.611 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:50.611 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:50.611 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:50.611 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:29:50.611 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:50.611 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=119356 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 119356 /var/tmp/spdk-raid.sock 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 119356 ']' 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:50.869 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:50.870 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:50.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:50.870 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:50.870 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:50.870 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:50.870 [2024-07-23 15:23:46.100250] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:29:50.870 [2024-07-23 15:23:46.101030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119356 ] 00:29:50.870 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:50.870 Zero copy mechanism will not be used. 00:29:50.870 [2024-07-23 15:23:46.233499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.870 [2024-07-23 15:23:46.281451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.128 [2024-07-23 15:23:46.326988] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:51.694 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:51.695 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:29:51.695 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:51.695 15:23:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:51.953 BaseBdev1_malloc 00:29:51.953 15:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:51.953 [2024-07-23 15:23:47.378941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:51.953 [2024-07-23 15:23:47.379035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:51.953 [2024-07-23 15:23:47.379076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:29:51.953 [2024-07-23 15:23:47.379089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:51.953 [2024-07-23 15:23:47.381654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:51.953 [2024-07-23 15:23:47.381703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:52.212 BaseBdev1 00:29:52.212 15:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:52.212 15:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:52.212 BaseBdev2_malloc 00:29:52.470 15:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:52.470 [2024-07-23 15:23:47.800657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:52.470 [2024-07-23 15:23:47.800759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:52.470 [2024-07-23 15:23:47.800793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:29:52.470 [2024-07-23 15:23:47.800821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:52.470 [2024-07-23 15:23:47.803311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:52.470 [2024-07-23 15:23:47.803354] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev2 00:29:52.470 BaseBdev2 00:29:52.470 15:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:52.470 15:23:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:52.728 BaseBdev3_malloc 00:29:52.728 15:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:52.987 [2024-07-23 15:23:48.175959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:52.987 [2024-07-23 15:23:48.176040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:52.987 [2024-07-23 15:23:48.176077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280 00:29:52.987 [2024-07-23 15:23:48.176090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:52.987 [2024-07-23 15:23:48.178554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:52.987 [2024-07-23 15:23:48.178598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:52.987 BaseBdev3 00:29:52.987 15:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:52.987 15:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:52.987 BaseBdev4_malloc 00:29:52.987 15:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:53.245 [2024-07-23 15:23:48.533579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:53.245 [2024-07-23 15:23:48.533832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:53.245 [2024-07-23 15:23:48.533880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:29:53.245 [2024-07-23 15:23:48.533893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:53.245 [2024-07-23 15:23:48.536410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:53.245 [2024-07-23 15:23:48.536453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:53.245 BaseBdev4 00:29:53.245 15:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:53.503 spare_malloc 00:29:53.503 15:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:53.503 spare_delay 00:29:53.503 15:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:53.760 [2024-07-23 15:23:49.055278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:53.760 [2024-07-23 15:23:49.055399] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:53.760 [2024-07-23 15:23:49.055438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:29:53.760 [2024-07-23 15:23:49.055450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:53.760 [2024-07-23 15:23:49.057954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:53.760 [2024-07-23 15:23:49.057997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:53.760 spare 00:29:53.760 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:54.018 [2024-07-23 15:23:49.227414] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:54.018 [2024-07-23 15:23:49.229953] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:54.018 [2024-07-23 15:23:49.230022] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:54.018 [2024-07-23 15:23:49.230065] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:54.018 [2024-07-23 15:23:49.230275] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:29:54.018 [2024-07-23 15:23:49.230295] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:54.018 [2024-07-23 15:23:49.230422] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:29:54.018 [2024-07-23 15:23:49.231247] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:29:54.018 [2024-07-23 15:23:49.231382] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:29:54.018 [2024-07-23 15:23:49.231655] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:54.018 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:54.018 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:54.018 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:54.018 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:54.018 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:54.018 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:54.018 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:54.018 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:54.018 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:54.018 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:54.018 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.018 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:29:54.276 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:54.276 "name": "raid_bdev1", 00:29:54.276 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:29:54.276 "strip_size_kb": 64, 00:29:54.276 "state": "online", 00:29:54.276 "raid_level": "raid5f", 00:29:54.276 "superblock": true, 00:29:54.276 "num_base_bdevs": 4, 00:29:54.276 "num_base_bdevs_discovered": 4, 00:29:54.276 "num_base_bdevs_operational": 4, 00:29:54.276 "base_bdevs_list": [ 00:29:54.276 { 00:29:54.276 "name": "BaseBdev1", 00:29:54.276 "uuid": "7992bf89-2633-59bb-a543-25c5b6788aa0", 00:29:54.276 "is_configured": true, 00:29:54.276 "data_offset": 2048, 00:29:54.276 "data_size": 63488 00:29:54.276 }, 00:29:54.276 { 00:29:54.276 "name": "BaseBdev2", 00:29:54.276 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:29:54.276 "is_configured": true, 00:29:54.276 "data_offset": 2048, 00:29:54.276 "data_size": 63488 00:29:54.276 }, 00:29:54.276 { 00:29:54.276 "name": "BaseBdev3", 00:29:54.276 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:29:54.276 "is_configured": true, 00:29:54.276 "data_offset": 2048, 00:29:54.276 "data_size": 63488 00:29:54.276 }, 00:29:54.276 { 00:29:54.276 "name": "BaseBdev4", 00:29:54.276 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:29:54.276 "is_configured": true, 00:29:54.276 "data_offset": 2048, 00:29:54.276 "data_size": 63488 00:29:54.276 } 00:29:54.276 ] 00:29:54.276 }' 00:29:54.276 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:54.276 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:54.535 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:54.535 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:54.535 [2024-07-23 15:23:49.900021] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:54.535 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=190464 00:29:54.535 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:54.535 15:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.794 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:29:54.794 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:29:54.794 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:29:54.794 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:29:54.794 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:54.794 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:54.794 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:54.794 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:54.794 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:54.794 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:29:54.794 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:54.794 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:54.794 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:54.794 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:55.054 [2024-07-23 15:23:50.331993] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002390 00:29:55.054 /dev/nbd0 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:55.054 1+0 records in 00:29:55.054 1+0 records out 00:29:55.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132928 s, 3.1 MB/s 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 192 00:29:55.054 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:29:55.622 496+0 records in 00:29:55.622 496+0 records out 00:29:55.622 97517568 bytes (98 MB, 93 MiB) copied, 0.484347 s, 201 MB/s 00:29:55.622 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks 
/var/tmp/spdk-raid.sock /dev/nbd0 00:29:55.622 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:55.622 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:55.622 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:55.622 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:55.622 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:55.622 15:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:55.880 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:55.880 [2024-07-23 15:23:51.137101] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:55.880 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:55.880 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:55.880 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:55.880 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:55.880 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:55.880 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:55.880 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:55.880 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:56.138 [2024-07-23 15:23:51.389394] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:56.138 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:56.138 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:56.138 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:56.138 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:56.138 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:56.138 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:56.138 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:56.138 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:56.138 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:56.138 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:56.138 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.138 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.397 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:56.397 "name": "raid_bdev1", 
00:29:56.397 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:29:56.397 "strip_size_kb": 64, 00:29:56.397 "state": "online", 00:29:56.397 "raid_level": "raid5f", 00:29:56.397 "superblock": true, 00:29:56.397 "num_base_bdevs": 4, 00:29:56.397 "num_base_bdevs_discovered": 3, 00:29:56.397 "num_base_bdevs_operational": 3, 00:29:56.397 "base_bdevs_list": [ 00:29:56.397 { 00:29:56.397 "name": null, 00:29:56.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.397 "is_configured": false, 00:29:56.397 "data_offset": 2048, 00:29:56.397 "data_size": 63488 00:29:56.397 }, 00:29:56.397 { 00:29:56.397 "name": "BaseBdev2", 00:29:56.397 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:29:56.397 "is_configured": true, 00:29:56.397 "data_offset": 2048, 00:29:56.397 "data_size": 63488 00:29:56.397 }, 00:29:56.397 { 00:29:56.397 "name": "BaseBdev3", 00:29:56.397 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:29:56.397 "is_configured": true, 00:29:56.397 "data_offset": 2048, 00:29:56.397 "data_size": 63488 00:29:56.397 }, 00:29:56.397 { 00:29:56.397 "name": "BaseBdev4", 00:29:56.397 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:29:56.397 "is_configured": true, 00:29:56.397 "data_offset": 2048, 00:29:56.397 "data_size": 63488 00:29:56.397 } 00:29:56.397 ] 00:29:56.397 }' 00:29:56.397 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:56.397 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:56.656 15:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:56.914 [2024-07-23 15:23:52.169575] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:56.914 [2024-07-23 15:23:52.173237] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000026c90 00:29:56.914 [2024-07-23 15:23:52.176153] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:56.914 15:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:57.851 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:57.851 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:57.851 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:57.851 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:57.851 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:57.851 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.851 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:58.195 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:58.195 "name": "raid_bdev1", 00:29:58.195 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:29:58.195 "strip_size_kb": 64, 00:29:58.195 "state": "online", 00:29:58.195 "raid_level": "raid5f", 00:29:58.195 "superblock": true, 00:29:58.195 "num_base_bdevs": 4, 00:29:58.195 "num_base_bdevs_discovered": 4, 00:29:58.195 "num_base_bdevs_operational": 4, 00:29:58.195 "process": { 00:29:58.195 
"type": "rebuild", 00:29:58.195 "target": "spare", 00:29:58.195 "progress": { 00:29:58.195 "blocks": 23040, 00:29:58.195 "percent": 12 00:29:58.195 } 00:29:58.195 }, 00:29:58.195 "base_bdevs_list": [ 00:29:58.195 { 00:29:58.195 "name": "spare", 00:29:58.195 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:29:58.195 "is_configured": true, 00:29:58.195 "data_offset": 2048, 00:29:58.195 "data_size": 63488 00:29:58.195 }, 00:29:58.195 { 00:29:58.195 "name": "BaseBdev2", 00:29:58.195 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:29:58.195 "is_configured": true, 00:29:58.195 "data_offset": 2048, 00:29:58.195 "data_size": 63488 00:29:58.195 }, 00:29:58.195 { 00:29:58.195 "name": "BaseBdev3", 00:29:58.195 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:29:58.195 "is_configured": true, 00:29:58.195 "data_offset": 2048, 00:29:58.195 "data_size": 63488 00:29:58.195 }, 00:29:58.195 { 00:29:58.195 "name": "BaseBdev4", 00:29:58.195 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:29:58.195 "is_configured": true, 00:29:58.195 "data_offset": 2048, 00:29:58.195 "data_size": 63488 00:29:58.195 } 00:29:58.195 ] 00:29:58.195 }' 00:29:58.195 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:58.195 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:58.195 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:58.195 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:58.195 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:58.467 [2024-07-23 15:23:53.617780] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:58.467 [2024-07-23 15:23:53.688520] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:58.467 [2024-07-23 15:23:53.688596] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:58.467 [2024-07-23 15:23:53.688618] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:58.467 [2024-07-23 15:23:53.688628] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:58.467 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:58.467 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:58.467 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:58.467 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:58.467 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:58.467 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:58.467 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:58.467 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:58.467 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:58.467 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:29:58.467 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:58.467 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:58.726 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:58.726 "name": "raid_bdev1", 00:29:58.726 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:29:58.726 "strip_size_kb": 64, 00:29:58.726 "state": "online", 00:29:58.726 "raid_level": "raid5f", 00:29:58.726 "superblock": true, 00:29:58.726 "num_base_bdevs": 4, 00:29:58.726 "num_base_bdevs_discovered": 3, 00:29:58.726 "num_base_bdevs_operational": 3, 00:29:58.726 "base_bdevs_list": [ 00:29:58.726 { 00:29:58.726 "name": null, 00:29:58.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.726 "is_configured": false, 00:29:58.726 "data_offset": 2048, 00:29:58.726 "data_size": 63488 00:29:58.726 }, 00:29:58.726 { 00:29:58.726 "name": "BaseBdev2", 00:29:58.726 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:29:58.726 "is_configured": true, 00:29:58.726 "data_offset": 2048, 00:29:58.726 "data_size": 63488 00:29:58.726 }, 00:29:58.726 { 00:29:58.726 "name": "BaseBdev3", 00:29:58.726 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:29:58.726 "is_configured": true, 00:29:58.726 "data_offset": 2048, 00:29:58.726 "data_size": 63488 00:29:58.726 }, 00:29:58.726 { 00:29:58.726 "name": "BaseBdev4", 00:29:58.726 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:29:58.726 "is_configured": true, 00:29:58.726 "data_offset": 2048, 00:29:58.726 "data_size": 63488 00:29:58.726 } 00:29:58.726 ] 00:29:58.726 }' 00:29:58.726 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:58.726 15:23:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:58.984 15:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:58.984 15:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:58.984 15:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:58.984 15:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:58.984 15:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:58.984 15:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:58.984 15:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.243 15:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:59.243 "name": "raid_bdev1", 00:29:59.243 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:29:59.243 "strip_size_kb": 64, 00:29:59.243 "state": "online", 00:29:59.243 "raid_level": "raid5f", 00:29:59.243 "superblock": true, 00:29:59.243 "num_base_bdevs": 4, 00:29:59.243 "num_base_bdevs_discovered": 3, 00:29:59.243 "num_base_bdevs_operational": 3, 00:29:59.243 "base_bdevs_list": [ 00:29:59.243 { 00:29:59.243 "name": null, 00:29:59.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.243 "is_configured": false, 00:29:59.243 "data_offset": 2048, 00:29:59.243 "data_size": 63488 00:29:59.243 }, 00:29:59.243 { 
00:29:59.243 "name": "BaseBdev2", 00:29:59.243 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:29:59.243 "is_configured": true, 00:29:59.243 "data_offset": 2048, 00:29:59.243 "data_size": 63488 00:29:59.243 }, 00:29:59.243 { 00:29:59.243 "name": "BaseBdev3", 00:29:59.243 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:29:59.243 "is_configured": true, 00:29:59.243 "data_offset": 2048, 00:29:59.243 "data_size": 63488 00:29:59.243 }, 00:29:59.243 { 00:29:59.243 "name": "BaseBdev4", 00:29:59.243 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:29:59.243 "is_configured": true, 00:29:59.243 "data_offset": 2048, 00:29:59.243 "data_size": 63488 00:29:59.243 } 00:29:59.243 ] 00:29:59.243 }' 00:29:59.243 15:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:59.243 15:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:59.243 15:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:59.243 15:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:59.243 15:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:59.501 [2024-07-23 15:23:54.679101] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:59.501 [2024-07-23 15:23:54.682711] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000026d60 00:29:59.501 [2024-07-23 15:23:54.685243] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:59.501 15:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:00.436 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:00.436 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:00.436 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:00.436 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:00.436 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:00.436 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.436 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:00.695 "name": "raid_bdev1", 00:30:00.695 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:00.695 "strip_size_kb": 64, 00:30:00.695 "state": "online", 00:30:00.695 "raid_level": "raid5f", 00:30:00.695 "superblock": true, 00:30:00.695 "num_base_bdevs": 4, 00:30:00.695 "num_base_bdevs_discovered": 4, 00:30:00.695 "num_base_bdevs_operational": 4, 00:30:00.695 "process": { 00:30:00.695 "type": "rebuild", 00:30:00.695 "target": "spare", 00:30:00.695 "progress": { 00:30:00.695 "blocks": 23040, 00:30:00.695 "percent": 12 00:30:00.695 } 00:30:00.695 }, 00:30:00.695 "base_bdevs_list": [ 00:30:00.695 { 00:30:00.695 "name": "spare", 00:30:00.695 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:00.695 "is_configured": true, 
00:30:00.695 "data_offset": 2048, 00:30:00.695 "data_size": 63488 00:30:00.695 }, 00:30:00.695 { 00:30:00.695 "name": "BaseBdev2", 00:30:00.695 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:00.695 "is_configured": true, 00:30:00.695 "data_offset": 2048, 00:30:00.695 "data_size": 63488 00:30:00.695 }, 00:30:00.695 { 00:30:00.695 "name": "BaseBdev3", 00:30:00.695 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:00.695 "is_configured": true, 00:30:00.695 "data_offset": 2048, 00:30:00.695 "data_size": 63488 00:30:00.695 }, 00:30:00.695 { 00:30:00.695 "name": "BaseBdev4", 00:30:00.695 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:00.695 "is_configured": true, 00:30:00.695 "data_offset": 2048, 00:30:00.695 "data_size": 63488 00:30:00.695 } 00:30:00.695 ] 00:30:00.695 }' 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:30:00.695 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=955 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.695 15:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:00.953 15:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:00.953 "name": "raid_bdev1", 00:30:00.953 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:00.953 "strip_size_kb": 64, 00:30:00.953 "state": "online", 00:30:00.953 "raid_level": "raid5f", 00:30:00.953 "superblock": true, 00:30:00.953 "num_base_bdevs": 4, 00:30:00.953 "num_base_bdevs_discovered": 4, 00:30:00.953 "num_base_bdevs_operational": 4, 00:30:00.953 "process": { 00:30:00.953 "type": "rebuild", 00:30:00.953 "target": "spare", 00:30:00.953 "progress": { 00:30:00.953 "blocks": 26880, 00:30:00.953 "percent": 14 00:30:00.953 } 
00:30:00.953 }, 00:30:00.953 "base_bdevs_list": [ 00:30:00.953 { 00:30:00.953 "name": "spare", 00:30:00.953 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:00.953 "is_configured": true, 00:30:00.953 "data_offset": 2048, 00:30:00.953 "data_size": 63488 00:30:00.953 }, 00:30:00.953 { 00:30:00.953 "name": "BaseBdev2", 00:30:00.953 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:00.953 "is_configured": true, 00:30:00.953 "data_offset": 2048, 00:30:00.953 "data_size": 63488 00:30:00.953 }, 00:30:00.953 { 00:30:00.953 "name": "BaseBdev3", 00:30:00.954 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:00.954 "is_configured": true, 00:30:00.954 "data_offset": 2048, 00:30:00.954 "data_size": 63488 00:30:00.954 }, 00:30:00.954 { 00:30:00.954 "name": "BaseBdev4", 00:30:00.954 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:00.954 "is_configured": true, 00:30:00.954 "data_offset": 2048, 00:30:00.954 "data_size": 63488 00:30:00.954 } 00:30:00.954 ] 00:30:00.954 }' 00:30:00.954 15:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:00.954 15:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:00.954 15:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:00.954 15:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:00.954 15:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:01.889 15:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:01.889 15:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:01.889 15:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:01.889 15:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:01.889 15:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:01.889 15:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:01.889 15:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:01.889 15:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.147 15:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:02.147 "name": "raid_bdev1", 00:30:02.147 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:02.147 "strip_size_kb": 64, 00:30:02.147 "state": "online", 00:30:02.147 "raid_level": "raid5f", 00:30:02.147 "superblock": true, 00:30:02.147 "num_base_bdevs": 4, 00:30:02.147 "num_base_bdevs_discovered": 4, 00:30:02.147 "num_base_bdevs_operational": 4, 00:30:02.147 "process": { 00:30:02.147 "type": "rebuild", 00:30:02.147 "target": "spare", 00:30:02.147 "progress": { 00:30:02.147 "blocks": 51840, 00:30:02.147 "percent": 27 00:30:02.147 } 00:30:02.147 }, 00:30:02.147 "base_bdevs_list": [ 00:30:02.147 { 00:30:02.147 "name": "spare", 00:30:02.147 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:02.147 "is_configured": true, 00:30:02.147 "data_offset": 2048, 00:30:02.147 "data_size": 63488 00:30:02.147 }, 00:30:02.147 { 00:30:02.147 "name": "BaseBdev2", 00:30:02.147 "uuid": 
"a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:02.147 "is_configured": true, 00:30:02.147 "data_offset": 2048, 00:30:02.147 "data_size": 63488 00:30:02.147 }, 00:30:02.147 { 00:30:02.147 "name": "BaseBdev3", 00:30:02.147 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:02.147 "is_configured": true, 00:30:02.147 "data_offset": 2048, 00:30:02.147 "data_size": 63488 00:30:02.147 }, 00:30:02.147 { 00:30:02.147 "name": "BaseBdev4", 00:30:02.147 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:02.147 "is_configured": true, 00:30:02.147 "data_offset": 2048, 00:30:02.147 "data_size": 63488 00:30:02.147 } 00:30:02.147 ] 00:30:02.147 }' 00:30:02.147 15:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:02.147 15:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:02.147 15:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:02.147 15:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:02.147 15:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:03.080 15:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:03.080 15:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:03.080 15:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:03.080 15:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:03.080 15:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:03.080 15:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:03.080 15:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.080 15:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.339 15:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:03.339 "name": "raid_bdev1", 00:30:03.339 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:03.339 "strip_size_kb": 64, 00:30:03.339 "state": "online", 00:30:03.339 "raid_level": "raid5f", 00:30:03.339 "superblock": true, 00:30:03.339 "num_base_bdevs": 4, 00:30:03.339 "num_base_bdevs_discovered": 4, 00:30:03.339 "num_base_bdevs_operational": 4, 00:30:03.339 "process": { 00:30:03.339 "type": "rebuild", 00:30:03.339 "target": "spare", 00:30:03.339 "progress": { 00:30:03.339 "blocks": 74880, 00:30:03.339 "percent": 39 00:30:03.339 } 00:30:03.339 }, 00:30:03.339 "base_bdevs_list": [ 00:30:03.339 { 00:30:03.339 "name": "spare", 00:30:03.339 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:03.339 "is_configured": true, 00:30:03.339 "data_offset": 2048, 00:30:03.339 "data_size": 63488 00:30:03.339 }, 00:30:03.339 { 00:30:03.339 "name": "BaseBdev2", 00:30:03.339 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:03.339 "is_configured": true, 00:30:03.339 "data_offset": 2048, 00:30:03.339 "data_size": 63488 00:30:03.339 }, 00:30:03.339 { 00:30:03.339 "name": "BaseBdev3", 00:30:03.339 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:03.339 "is_configured": true, 00:30:03.339 "data_offset": 2048, 00:30:03.339 "data_size": 
63488 00:30:03.339 }, 00:30:03.339 { 00:30:03.339 "name": "BaseBdev4", 00:30:03.339 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:03.339 "is_configured": true, 00:30:03.339 "data_offset": 2048, 00:30:03.339 "data_size": 63488 00:30:03.339 } 00:30:03.339 ] 00:30:03.339 }' 00:30:03.339 15:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:03.339 15:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:03.339 15:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:03.339 15:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:03.339 15:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:04.713 15:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:04.713 15:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:04.713 15:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:04.713 15:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:04.713 15:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:04.713 15:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:04.713 15:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:04.713 15:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.713 15:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:04.713 "name": "raid_bdev1", 00:30:04.713 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:04.713 "strip_size_kb": 64, 00:30:04.713 "state": "online", 00:30:04.713 "raid_level": "raid5f", 00:30:04.713 "superblock": true, 00:30:04.713 "num_base_bdevs": 4, 00:30:04.713 "num_base_bdevs_discovered": 4, 00:30:04.713 "num_base_bdevs_operational": 4, 00:30:04.713 "process": { 00:30:04.713 "type": "rebuild", 00:30:04.713 "target": "spare", 00:30:04.713 "progress": { 00:30:04.713 "blocks": 99840, 00:30:04.713 "percent": 52 00:30:04.713 } 00:30:04.713 }, 00:30:04.713 "base_bdevs_list": [ 00:30:04.713 { 00:30:04.713 "name": "spare", 00:30:04.713 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:04.713 "is_configured": true, 00:30:04.713 "data_offset": 2048, 00:30:04.713 "data_size": 63488 00:30:04.713 }, 00:30:04.713 { 00:30:04.713 "name": "BaseBdev2", 00:30:04.713 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:04.713 "is_configured": true, 00:30:04.713 "data_offset": 2048, 00:30:04.713 "data_size": 63488 00:30:04.713 }, 00:30:04.713 { 00:30:04.713 "name": "BaseBdev3", 00:30:04.713 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:04.713 "is_configured": true, 00:30:04.713 "data_offset": 2048, 00:30:04.713 "data_size": 63488 00:30:04.713 }, 00:30:04.713 { 00:30:04.713 "name": "BaseBdev4", 00:30:04.713 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:04.713 "is_configured": true, 00:30:04.713 "data_offset": 2048, 00:30:04.713 "data_size": 63488 00:30:04.713 } 00:30:04.713 ] 00:30:04.713 }' 00:30:04.713 15:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq 
-r '.process.type // "none"' 00:30:04.713 15:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:04.713 15:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:04.713 15:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:04.713 15:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:05.647 15:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:05.647 15:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:05.647 15:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:05.647 15:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:05.647 15:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:05.647 15:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:05.647 15:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:05.647 15:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.905 15:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:05.905 "name": "raid_bdev1", 00:30:05.905 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:05.905 "strip_size_kb": 64, 00:30:05.905 "state": "online", 00:30:05.905 "raid_level": "raid5f", 00:30:05.905 "superblock": true, 00:30:05.905 "num_base_bdevs": 4, 00:30:05.905 "num_base_bdevs_discovered": 4, 00:30:05.905 "num_base_bdevs_operational": 4, 00:30:05.905 "process": { 00:30:05.905 "type": "rebuild", 00:30:05.905 "target": "spare", 00:30:05.905 "progress": { 00:30:05.905 "blocks": 122880, 00:30:05.905 "percent": 64 00:30:05.905 } 00:30:05.905 }, 00:30:05.905 "base_bdevs_list": [ 00:30:05.905 { 00:30:05.905 "name": "spare", 00:30:05.905 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:05.906 "is_configured": true, 00:30:05.906 "data_offset": 2048, 00:30:05.906 "data_size": 63488 00:30:05.906 }, 00:30:05.906 { 00:30:05.906 "name": "BaseBdev2", 00:30:05.906 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:05.906 "is_configured": true, 00:30:05.906 "data_offset": 2048, 00:30:05.906 "data_size": 63488 00:30:05.906 }, 00:30:05.906 { 00:30:05.906 "name": "BaseBdev3", 00:30:05.906 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:05.906 "is_configured": true, 00:30:05.906 "data_offset": 2048, 00:30:05.906 "data_size": 63488 00:30:05.906 }, 00:30:05.906 { 00:30:05.906 "name": "BaseBdev4", 00:30:05.906 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:05.906 "is_configured": true, 00:30:05.906 "data_offset": 2048, 00:30:05.906 "data_size": 63488 00:30:05.906 } 00:30:05.906 ] 00:30:05.906 }' 00:30:05.906 15:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:05.906 15:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:05.906 15:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:05.906 15:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == 
\s\p\a\r\e ]] 00:30:05.906 15:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:06.840 15:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:06.840 15:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:06.840 15:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:06.840 15:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:06.840 15:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:06.840 15:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:06.840 15:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.840 15:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.098 15:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:07.098 "name": "raid_bdev1", 00:30:07.098 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:07.098 "strip_size_kb": 64, 00:30:07.098 "state": "online", 00:30:07.098 "raid_level": "raid5f", 00:30:07.099 "superblock": true, 00:30:07.099 "num_base_bdevs": 4, 00:30:07.099 "num_base_bdevs_discovered": 4, 00:30:07.099 "num_base_bdevs_operational": 4, 00:30:07.099 "process": { 00:30:07.099 "type": "rebuild", 00:30:07.099 "target": "spare", 00:30:07.099 "progress": { 00:30:07.099 "blocks": 147840, 00:30:07.099 "percent": 77 00:30:07.099 } 00:30:07.099 }, 00:30:07.099 "base_bdevs_list": [ 00:30:07.099 { 00:30:07.099 "name": "spare", 00:30:07.099 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:07.099 "is_configured": true, 00:30:07.099 "data_offset": 2048, 00:30:07.099 "data_size": 63488 00:30:07.099 }, 00:30:07.099 { 00:30:07.099 "name": "BaseBdev2", 00:30:07.099 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:07.099 "is_configured": true, 00:30:07.099 "data_offset": 2048, 00:30:07.099 "data_size": 63488 00:30:07.099 }, 00:30:07.099 { 00:30:07.099 "name": "BaseBdev3", 00:30:07.099 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:07.099 "is_configured": true, 00:30:07.099 "data_offset": 2048, 00:30:07.099 "data_size": 63488 00:30:07.099 }, 00:30:07.099 { 00:30:07.099 "name": "BaseBdev4", 00:30:07.099 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:07.099 "is_configured": true, 00:30:07.099 "data_offset": 2048, 00:30:07.099 "data_size": 63488 00:30:07.099 } 00:30:07.099 ] 00:30:07.099 }' 00:30:07.099 15:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:07.099 15:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:07.099 15:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:07.099 15:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:07.099 15:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:08.082 15:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:08.082 15:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:08.082 
15:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:08.082 15:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:08.082 15:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:08.082 15:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:08.082 15:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:08.082 15:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:08.352 15:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:08.352 "name": "raid_bdev1", 00:30:08.352 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:08.352 "strip_size_kb": 64, 00:30:08.352 "state": "online", 00:30:08.352 "raid_level": "raid5f", 00:30:08.352 "superblock": true, 00:30:08.352 "num_base_bdevs": 4, 00:30:08.352 "num_base_bdevs_discovered": 4, 00:30:08.352 "num_base_bdevs_operational": 4, 00:30:08.352 "process": { 00:30:08.352 "type": "rebuild", 00:30:08.352 "target": "spare", 00:30:08.352 "progress": { 00:30:08.352 "blocks": 170880, 00:30:08.352 "percent": 89 00:30:08.352 } 00:30:08.352 }, 00:30:08.352 "base_bdevs_list": [ 00:30:08.352 { 00:30:08.352 "name": "spare", 00:30:08.352 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:08.352 "is_configured": true, 00:30:08.352 "data_offset": 2048, 00:30:08.352 "data_size": 63488 00:30:08.352 }, 00:30:08.352 { 00:30:08.352 "name": "BaseBdev2", 00:30:08.352 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:08.352 "is_configured": true, 00:30:08.352 "data_offset": 2048, 00:30:08.352 "data_size": 63488 00:30:08.352 }, 00:30:08.352 { 00:30:08.352 "name": "BaseBdev3", 00:30:08.352 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:08.352 "is_configured": true, 00:30:08.352 "data_offset": 2048, 00:30:08.352 "data_size": 63488 00:30:08.352 }, 00:30:08.352 { 00:30:08.352 "name": "BaseBdev4", 00:30:08.352 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:08.352 "is_configured": true, 00:30:08.352 "data_offset": 2048, 00:30:08.352 "data_size": 63488 00:30:08.352 } 00:30:08.352 ] 00:30:08.352 }' 00:30:08.352 15:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:08.352 15:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:08.352 15:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:08.352 15:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:08.352 15:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:09.726 [2024-07-23 15:24:04.759907] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:09.727 [2024-07-23 15:24:04.760015] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:09.727 [2024-07-23 15:24:04.760171] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:09.727 15:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:09.727 15:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:30:09.727 15:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:09.727 15:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:09.727 15:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:09.727 15:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:09.727 15:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.727 15:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.727 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:09.727 "name": "raid_bdev1", 00:30:09.727 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:09.727 "strip_size_kb": 64, 00:30:09.727 "state": "online", 00:30:09.727 "raid_level": "raid5f", 00:30:09.727 "superblock": true, 00:30:09.727 "num_base_bdevs": 4, 00:30:09.727 "num_base_bdevs_discovered": 4, 00:30:09.727 "num_base_bdevs_operational": 4, 00:30:09.727 "base_bdevs_list": [ 00:30:09.727 { 00:30:09.727 "name": "spare", 00:30:09.727 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:09.727 "is_configured": true, 00:30:09.727 "data_offset": 2048, 00:30:09.727 "data_size": 63488 00:30:09.727 }, 00:30:09.727 { 00:30:09.727 "name": "BaseBdev2", 00:30:09.727 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:09.727 "is_configured": true, 00:30:09.727 "data_offset": 2048, 00:30:09.727 "data_size": 63488 00:30:09.727 }, 00:30:09.727 { 00:30:09.727 "name": "BaseBdev3", 00:30:09.727 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:09.727 "is_configured": true, 00:30:09.727 "data_offset": 2048, 00:30:09.727 "data_size": 63488 00:30:09.727 }, 00:30:09.727 { 00:30:09.727 "name": "BaseBdev4", 00:30:09.727 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:09.727 "is_configured": true, 00:30:09.727 "data_offset": 2048, 00:30:09.727 "data_size": 63488 00:30:09.727 } 00:30:09.727 ] 00:30:09.727 }' 00:30:09.727 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:09.727 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:09.727 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:09.727 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:09.727 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:30:09.727 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:09.727 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:09.727 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:09.727 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:09.727 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:09.727 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.727 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.985 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:09.985 "name": "raid_bdev1", 00:30:09.985 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:09.985 "strip_size_kb": 64, 00:30:09.985 "state": "online", 00:30:09.985 "raid_level": "raid5f", 00:30:09.985 "superblock": true, 00:30:09.985 "num_base_bdevs": 4, 00:30:09.985 "num_base_bdevs_discovered": 4, 00:30:09.985 "num_base_bdevs_operational": 4, 00:30:09.985 "base_bdevs_list": [ 00:30:09.985 { 00:30:09.985 "name": "spare", 00:30:09.985 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:09.985 "is_configured": true, 00:30:09.985 "data_offset": 2048, 00:30:09.985 "data_size": 63488 00:30:09.985 }, 00:30:09.985 { 00:30:09.985 "name": "BaseBdev2", 00:30:09.985 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:09.985 "is_configured": true, 00:30:09.985 "data_offset": 2048, 00:30:09.985 "data_size": 63488 00:30:09.985 }, 00:30:09.985 { 00:30:09.985 "name": "BaseBdev3", 00:30:09.985 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:09.985 "is_configured": true, 00:30:09.985 "data_offset": 2048, 00:30:09.985 "data_size": 63488 00:30:09.985 }, 00:30:09.985 { 00:30:09.985 "name": "BaseBdev4", 00:30:09.985 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:09.985 "is_configured": true, 00:30:09.985 "data_offset": 2048, 00:30:09.985 "data_size": 63488 00:30:09.985 } 00:30:09.985 ] 00:30:09.985 }' 00:30:09.985 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:09.985 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:09.985 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:09.985 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:09.985 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:30:09.985 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:09.986 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:09.986 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:09.986 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:09.986 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:09.986 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:09.986 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:09.986 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:09.986 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:09.986 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.986 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.244 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:10.244 "name": "raid_bdev1", 
00:30:10.244 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:10.244 "strip_size_kb": 64, 00:30:10.244 "state": "online", 00:30:10.244 "raid_level": "raid5f", 00:30:10.244 "superblock": true, 00:30:10.244 "num_base_bdevs": 4, 00:30:10.244 "num_base_bdevs_discovered": 4, 00:30:10.244 "num_base_bdevs_operational": 4, 00:30:10.244 "base_bdevs_list": [ 00:30:10.244 { 00:30:10.244 "name": "spare", 00:30:10.244 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:10.244 "is_configured": true, 00:30:10.244 "data_offset": 2048, 00:30:10.244 "data_size": 63488 00:30:10.244 }, 00:30:10.244 { 00:30:10.244 "name": "BaseBdev2", 00:30:10.244 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:10.244 "is_configured": true, 00:30:10.244 "data_offset": 2048, 00:30:10.244 "data_size": 63488 00:30:10.244 }, 00:30:10.244 { 00:30:10.244 "name": "BaseBdev3", 00:30:10.244 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:10.244 "is_configured": true, 00:30:10.244 "data_offset": 2048, 00:30:10.244 "data_size": 63488 00:30:10.244 }, 00:30:10.244 { 00:30:10.244 "name": "BaseBdev4", 00:30:10.244 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:10.244 "is_configured": true, 00:30:10.244 "data_offset": 2048, 00:30:10.244 "data_size": 63488 00:30:10.244 } 00:30:10.244 ] 00:30:10.244 }' 00:30:10.244 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:10.244 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.503 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:10.503 [2024-07-23 15:24:05.918230] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:10.503 [2024-07-23 15:24:05.918272] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:10.503 [2024-07-23 15:24:05.918375] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:10.503 [2024-07-23 15:24:05.918472] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:10.503 [2024-07-23 15:24:05.918490] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:30:10.761 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:30:10.761 15:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:11.019 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:11.019 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:11.019 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:30:11.019 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:11.019 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:11.019 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:11.019 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:11.019 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:30:11.019 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:11.019 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:11.019 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:11.019 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:11.019 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:11.277 /dev/nbd0 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:11.277 1+0 records in 00:30:11.277 1+0 records out 00:30:11.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176594 s, 23.2 MB/s 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:11.277 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:11.535 /dev/nbd1 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 
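
The waitfornbd check traced above for nbd0 (and repeated next for nbd1) follows a simple pattern: poll /proc/partitions until the kernel exposes the NBD device, then issue one 4 KiB direct-I/O read to prove the device actually serves data. Below is a minimal sketch of that pattern, reconstructed from this xtrace output rather than copied from autotest_common.sh; the retry delay and the scratch-file path are assumptions.

waitfornbd_sketch() {
	local nbd_name=$1              # e.g. nbd0 or nbd1, as in the trace above
	local scratch=/tmp/nbdtest     # assumed scratch path; the trace writes to test/bdev/nbdtest
	local i size

	for ((i = 1; i <= 20; i++)); do
		# the device shows up in /proc/partitions once the kernel has attached it
		grep -q -w "$nbd_name" /proc/partitions && break
		sleep 0.1                  # assumed back-off; in the trace the check passes on the first try
	done

	# one 4096-byte O_DIRECT read confirms the NBD connection answers I/O
	dd if=/dev/"$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
	size=$(stat -c %s "$scratch")
	rm -f "$scratch"
	[[ $size != 0 ]]
}

Once both devices pass this check, the trace below compares them with cmp -i 1048576, skipping the first 1 MiB of each device, which matches the data_offset of 2048 blocks of 512 bytes reported in the RAID bdev info above (the superblock/metadata region).
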
00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:11.535 1+0 records in 00:30:11.535 1+0 records out 00:30:11.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028298 s, 14.5 MB/s 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:11.535 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:11.793 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:11.793 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:11.793 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:11.793 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:11.793 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:11.793 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:11.793 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:11.793 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:11.793 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:11.793 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:12.052 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:12.052 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:12.052 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:12.052 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:12.052 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:12.052 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:12.052 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:12.052 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:12.052 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:30:12.052 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:12.052 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:12.311 [2024-07-23 15:24:07.699612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:12.311 [2024-07-23 15:24:07.699689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:12.311 [2024-07-23 15:24:07.699724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:30:12.311 [2024-07-23 15:24:07.699739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:12.311 [2024-07-23 15:24:07.702270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:12.311 [2024-07-23 15:24:07.702316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:12.311 [2024-07-23 15:24:07.702401] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:12.311 [2024-07-23 15:24:07.702467] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:12.311 [2024-07-23 15:24:07.702628] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:12.311 [2024-07-23 15:24:07.702724] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:12.311 [2024-07-23 15:24:07.702782] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:12.311 spare 00:30:12.311 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:30:12.311 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:12.311 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:12.311 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:12.311 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:12.311 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:12.311 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:12.311 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:12.311 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:12.311 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:12.311 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.311 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.569 [2024-07-23 15:24:07.802906] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:30:12.569 [2024-07-23 15:24:07.802966] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:12.569 [2024-07-23 15:24:07.803129] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000045410 00:30:12.569 [2024-07-23 15:24:07.803887] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:30:12.569 [2024-07-23 15:24:07.803917] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:30:12.569 [2024-07-23 15:24:07.804071] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:12.569 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:12.569 "name": "raid_bdev1", 00:30:12.569 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:12.569 "strip_size_kb": 64, 00:30:12.569 "state": "online", 00:30:12.569 "raid_level": "raid5f", 00:30:12.569 "superblock": true, 00:30:12.569 "num_base_bdevs": 4, 00:30:12.569 "num_base_bdevs_discovered": 4, 00:30:12.569 "num_base_bdevs_operational": 4, 00:30:12.569 "base_bdevs_list": [ 00:30:12.569 { 00:30:12.569 "name": "spare", 00:30:12.569 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:12.569 "is_configured": true, 00:30:12.569 "data_offset": 2048, 00:30:12.569 "data_size": 63488 00:30:12.569 }, 00:30:12.569 { 00:30:12.569 "name": "BaseBdev2", 00:30:12.569 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:12.569 "is_configured": true, 00:30:12.569 "data_offset": 2048, 00:30:12.570 "data_size": 63488 00:30:12.570 }, 00:30:12.570 { 00:30:12.570 "name": "BaseBdev3", 00:30:12.570 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:12.570 "is_configured": true, 00:30:12.570 "data_offset": 2048, 00:30:12.570 "data_size": 63488 00:30:12.570 }, 00:30:12.570 { 00:30:12.570 "name": "BaseBdev4", 00:30:12.570 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:12.570 "is_configured": true, 00:30:12.570 "data_offset": 2048, 00:30:12.570 "data_size": 63488 00:30:12.570 } 00:30:12.570 ] 00:30:12.570 }' 00:30:12.570 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:12.570 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.828 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:12.828 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:12.828 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:30:12.828 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:12.828 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:12.828 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.828 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:13.087 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:13.087 "name": "raid_bdev1", 00:30:13.087 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:13.087 "strip_size_kb": 64, 00:30:13.087 "state": "online", 00:30:13.087 "raid_level": "raid5f", 00:30:13.087 "superblock": true, 00:30:13.087 "num_base_bdevs": 4, 00:30:13.087 "num_base_bdevs_discovered": 4, 00:30:13.087 "num_base_bdevs_operational": 4, 00:30:13.087 "base_bdevs_list": [ 00:30:13.087 { 00:30:13.087 "name": "spare", 00:30:13.087 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:13.087 "is_configured": true, 00:30:13.087 "data_offset": 2048, 00:30:13.087 "data_size": 63488 00:30:13.087 }, 00:30:13.087 { 00:30:13.087 "name": "BaseBdev2", 00:30:13.087 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:13.087 "is_configured": true, 00:30:13.087 "data_offset": 2048, 00:30:13.087 "data_size": 63488 00:30:13.087 }, 00:30:13.087 { 00:30:13.087 "name": "BaseBdev3", 00:30:13.087 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:13.087 "is_configured": true, 00:30:13.087 "data_offset": 2048, 00:30:13.087 "data_size": 63488 00:30:13.087 }, 00:30:13.087 { 00:30:13.087 "name": "BaseBdev4", 00:30:13.087 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:13.087 "is_configured": true, 00:30:13.087 "data_offset": 2048, 00:30:13.087 "data_size": 63488 00:30:13.087 } 00:30:13.087 ] 00:30:13.087 }' 00:30:13.087 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:13.087 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:13.087 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:13.345 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:13.345 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.345 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:13.604 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:30:13.604 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:13.604 [2024-07-23 15:24:09.028463] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:13.863 "name": "raid_bdev1", 00:30:13.863 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:13.863 "strip_size_kb": 64, 00:30:13.863 "state": "online", 00:30:13.863 "raid_level": "raid5f", 00:30:13.863 "superblock": true, 00:30:13.863 "num_base_bdevs": 4, 00:30:13.863 "num_base_bdevs_discovered": 3, 00:30:13.863 "num_base_bdevs_operational": 3, 00:30:13.863 "base_bdevs_list": [ 00:30:13.863 { 00:30:13.863 "name": null, 00:30:13.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:13.863 "is_configured": false, 00:30:13.863 "data_offset": 2048, 00:30:13.863 "data_size": 63488 00:30:13.863 }, 00:30:13.863 { 00:30:13.863 "name": "BaseBdev2", 00:30:13.863 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:13.863 "is_configured": true, 00:30:13.863 "data_offset": 2048, 00:30:13.863 "data_size": 63488 00:30:13.863 }, 00:30:13.863 { 00:30:13.863 "name": "BaseBdev3", 00:30:13.863 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:13.863 "is_configured": true, 00:30:13.863 "data_offset": 2048, 00:30:13.863 "data_size": 63488 00:30:13.863 }, 00:30:13.863 { 00:30:13.863 "name": "BaseBdev4", 00:30:13.863 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:13.863 "is_configured": true, 00:30:13.863 "data_offset": 2048, 00:30:13.863 "data_size": 63488 00:30:13.863 } 00:30:13.863 ] 00:30:13.863 }' 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:13.863 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.430 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:14.430 [2024-07-23 15:24:09.796627] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:14.430 [2024-07-23 15:24:09.796861] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:14.430 [2024-07-23 15:24:09.796884] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
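
At this point the trace has removed the spare from the array, confirmed that raid_bdev1 stays online with only three of four base bdevs discovered, and is re-adding the spare; because the spare's superblock carries an older sequence number (4 versus the array's 5), a rebuild is started and then polled until it completes. The following is a condensed sketch of that remove / verify / re-add / wait cycle. It uses only RPC methods, the socket path, and jq filters that appear in the trace; the $rpc shorthand and the jq truthiness check are illustrative, not the literal bdev_raid.sh code.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# drop the spare: the array must stay online, but in a degraded 3-of-4 layout
$rpc bdev_raid_remove_base_bdev spare
$rpc bdev_raid_get_bdevs all |
	jq -e '.[] | select(.name == "raid_bdev1")
	       | .state == "online" and .num_base_bdevs_discovered == 3' > /dev/null

# re-add it: the stale superblock (seq_number 4 < 5) makes the raid module start a rebuild
$rpc bdev_raid_add_base_bdev raid_bdev1 spare

# poll until the rebuild process no longer shows up in the bdev info
while [[ $($rpc bdev_raid_get_bdevs all |
	jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"') == rebuild ]]; do
	sleep 1
done
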
00:30:14.430 [2024-07-23 15:24:09.796928] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:14.430 [2024-07-23 15:24:09.800349] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000454e0 00:30:14.430 [2024-07-23 15:24:09.802903] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:14.430 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:30:15.806 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:15.806 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:15.806 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:15.806 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:15.806 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:15.806 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:15.806 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.806 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:15.806 "name": "raid_bdev1", 00:30:15.807 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:15.807 "strip_size_kb": 64, 00:30:15.807 "state": "online", 00:30:15.807 "raid_level": "raid5f", 00:30:15.807 "superblock": true, 00:30:15.807 "num_base_bdevs": 4, 00:30:15.807 "num_base_bdevs_discovered": 4, 00:30:15.807 "num_base_bdevs_operational": 4, 00:30:15.807 "process": { 00:30:15.807 "type": "rebuild", 00:30:15.807 "target": "spare", 00:30:15.807 "progress": { 00:30:15.807 "blocks": 23040, 00:30:15.807 "percent": 12 00:30:15.807 } 00:30:15.807 }, 00:30:15.807 "base_bdevs_list": [ 00:30:15.807 { 00:30:15.807 "name": "spare", 00:30:15.807 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:15.807 "is_configured": true, 00:30:15.807 "data_offset": 2048, 00:30:15.807 "data_size": 63488 00:30:15.807 }, 00:30:15.807 { 00:30:15.807 "name": "BaseBdev2", 00:30:15.807 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:15.807 "is_configured": true, 00:30:15.807 "data_offset": 2048, 00:30:15.807 "data_size": 63488 00:30:15.807 }, 00:30:15.807 { 00:30:15.807 "name": "BaseBdev3", 00:30:15.807 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:15.807 "is_configured": true, 00:30:15.807 "data_offset": 2048, 00:30:15.807 "data_size": 63488 00:30:15.807 }, 00:30:15.807 { 00:30:15.807 "name": "BaseBdev4", 00:30:15.807 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:15.807 "is_configured": true, 00:30:15.807 "data_offset": 2048, 00:30:15.807 "data_size": 63488 00:30:15.807 } 00:30:15.807 ] 00:30:15.807 }' 00:30:15.807 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:15.807 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:15.807 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:15.807 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:15.807 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:16.067 [2024-07-23 15:24:11.276457] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:16.067 [2024-07-23 15:24:11.314601] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:16.067 [2024-07-23 15:24:11.314670] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:16.067 [2024-07-23 15:24:11.314696] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:16.067 [2024-07-23 15:24:11.314705] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:16.067 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:16.067 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:16.067 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:16.067 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:16.067 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:16.067 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:16.067 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:16.067 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:16.067 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:16.067 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:16.067 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:16.067 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:16.326 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:16.326 "name": "raid_bdev1", 00:30:16.326 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:16.326 "strip_size_kb": 64, 00:30:16.326 "state": "online", 00:30:16.326 "raid_level": "raid5f", 00:30:16.326 "superblock": true, 00:30:16.326 "num_base_bdevs": 4, 00:30:16.326 "num_base_bdevs_discovered": 3, 00:30:16.326 "num_base_bdevs_operational": 3, 00:30:16.326 "base_bdevs_list": [ 00:30:16.326 { 00:30:16.326 "name": null, 00:30:16.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:16.326 "is_configured": false, 00:30:16.326 "data_offset": 2048, 00:30:16.326 "data_size": 63488 00:30:16.326 }, 00:30:16.326 { 00:30:16.326 "name": "BaseBdev2", 00:30:16.326 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:16.326 "is_configured": true, 00:30:16.326 "data_offset": 2048, 00:30:16.326 "data_size": 63488 00:30:16.326 }, 00:30:16.326 { 00:30:16.326 "name": "BaseBdev3", 00:30:16.326 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:16.326 "is_configured": true, 00:30:16.326 "data_offset": 2048, 00:30:16.326 "data_size": 63488 00:30:16.326 }, 00:30:16.326 { 00:30:16.326 "name": "BaseBdev4", 00:30:16.326 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:16.326 "is_configured": true, 00:30:16.326 "data_offset": 2048, 00:30:16.326 "data_size": 63488 
00:30:16.326 } 00:30:16.326 ] 00:30:16.326 }' 00:30:16.326 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:16.326 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.584 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:16.842 [2024-07-23 15:24:12.032628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:16.842 [2024-07-23 15:24:12.032720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.842 [2024-07-23 15:24:12.032755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:30:16.842 [2024-07-23 15:24:12.032769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.842 [2024-07-23 15:24:12.033252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.842 [2024-07-23 15:24:12.033290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:16.843 [2024-07-23 15:24:12.033376] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:16.843 [2024-07-23 15:24:12.033390] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:16.843 [2024-07-23 15:24:12.033416] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:16.843 [2024-07-23 15:24:12.033443] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:16.843 [2024-07-23 15:24:12.036918] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000455b0 00:30:16.843 spare 00:30:16.843 [2024-07-23 15:24:12.039543] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:16.843 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:30:17.778 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:17.778 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:17.778 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:17.778 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:17.778 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:17.778 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.778 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.037 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:18.037 "name": "raid_bdev1", 00:30:18.037 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:18.037 "strip_size_kb": 64, 00:30:18.037 "state": "online", 00:30:18.037 "raid_level": "raid5f", 00:30:18.037 "superblock": true, 00:30:18.037 "num_base_bdevs": 4, 00:30:18.037 "num_base_bdevs_discovered": 4, 00:30:18.037 "num_base_bdevs_operational": 4, 00:30:18.037 "process": { 00:30:18.037 "type": "rebuild", 00:30:18.037 "target": "spare", 
00:30:18.037 "progress": { 00:30:18.037 "blocks": 21120, 00:30:18.037 "percent": 11 00:30:18.037 } 00:30:18.037 }, 00:30:18.037 "base_bdevs_list": [ 00:30:18.037 { 00:30:18.037 "name": "spare", 00:30:18.037 "uuid": "57ba0f51-99f0-5a0a-9497-6e574f6bbec0", 00:30:18.037 "is_configured": true, 00:30:18.037 "data_offset": 2048, 00:30:18.037 "data_size": 63488 00:30:18.037 }, 00:30:18.037 { 00:30:18.037 "name": "BaseBdev2", 00:30:18.037 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:18.037 "is_configured": true, 00:30:18.037 "data_offset": 2048, 00:30:18.037 "data_size": 63488 00:30:18.037 }, 00:30:18.037 { 00:30:18.037 "name": "BaseBdev3", 00:30:18.037 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:18.037 "is_configured": true, 00:30:18.037 "data_offset": 2048, 00:30:18.037 "data_size": 63488 00:30:18.037 }, 00:30:18.037 { 00:30:18.037 "name": "BaseBdev4", 00:30:18.037 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:18.037 "is_configured": true, 00:30:18.037 "data_offset": 2048, 00:30:18.037 "data_size": 63488 00:30:18.037 } 00:30:18.037 ] 00:30:18.037 }' 00:30:18.037 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:18.037 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:18.037 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:18.037 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:18.038 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:18.038 [2024-07-23 15:24:13.430364] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:18.038 [2024-07-23 15:24:13.449880] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:18.038 [2024-07-23 15:24:13.449954] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:18.038 [2024-07-23 15:24:13.449973] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:18.038 [2024-07-23 15:24:13.449985] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:18.297 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:18.297 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:18.297 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:18.297 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:18.297 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:18.297 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:18.297 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:18.297 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:18.297 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:18.297 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:18.297 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.297 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.555 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:18.555 "name": "raid_bdev1", 00:30:18.555 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:18.555 "strip_size_kb": 64, 00:30:18.555 "state": "online", 00:30:18.555 "raid_level": "raid5f", 00:30:18.555 "superblock": true, 00:30:18.555 "num_base_bdevs": 4, 00:30:18.555 "num_base_bdevs_discovered": 3, 00:30:18.555 "num_base_bdevs_operational": 3, 00:30:18.555 "base_bdevs_list": [ 00:30:18.555 { 00:30:18.555 "name": null, 00:30:18.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:18.556 "is_configured": false, 00:30:18.556 "data_offset": 2048, 00:30:18.556 "data_size": 63488 00:30:18.556 }, 00:30:18.556 { 00:30:18.556 "name": "BaseBdev2", 00:30:18.556 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:18.556 "is_configured": true, 00:30:18.556 "data_offset": 2048, 00:30:18.556 "data_size": 63488 00:30:18.556 }, 00:30:18.556 { 00:30:18.556 "name": "BaseBdev3", 00:30:18.556 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:18.556 "is_configured": true, 00:30:18.556 "data_offset": 2048, 00:30:18.556 "data_size": 63488 00:30:18.556 }, 00:30:18.556 { 00:30:18.556 "name": "BaseBdev4", 00:30:18.556 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:18.556 "is_configured": true, 00:30:18.556 "data_offset": 2048, 00:30:18.556 "data_size": 63488 00:30:18.556 } 00:30:18.556 ] 00:30:18.556 }' 00:30:18.556 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:18.556 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:18.814 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:18.814 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:18.814 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:18.814 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:18.814 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:18.814 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.814 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.814 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:18.814 "name": "raid_bdev1", 00:30:18.814 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:18.814 "strip_size_kb": 64, 00:30:18.814 "state": "online", 00:30:18.814 "raid_level": "raid5f", 00:30:18.814 "superblock": true, 00:30:18.814 "num_base_bdevs": 4, 00:30:18.814 "num_base_bdevs_discovered": 3, 00:30:18.814 "num_base_bdevs_operational": 3, 00:30:18.814 "base_bdevs_list": [ 00:30:18.814 { 00:30:18.814 "name": null, 00:30:18.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:18.814 "is_configured": false, 00:30:18.814 "data_offset": 2048, 00:30:18.814 "data_size": 63488 00:30:18.814 }, 00:30:18.814 { 00:30:18.814 "name": "BaseBdev2", 00:30:18.814 "uuid": 
"a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:18.814 "is_configured": true, 00:30:18.814 "data_offset": 2048, 00:30:18.814 "data_size": 63488 00:30:18.814 }, 00:30:18.814 { 00:30:18.814 "name": "BaseBdev3", 00:30:18.814 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:18.814 "is_configured": true, 00:30:18.814 "data_offset": 2048, 00:30:18.814 "data_size": 63488 00:30:18.814 }, 00:30:18.814 { 00:30:18.814 "name": "BaseBdev4", 00:30:18.814 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:18.814 "is_configured": true, 00:30:18.814 "data_offset": 2048, 00:30:18.814 "data_size": 63488 00:30:18.814 } 00:30:18.814 ] 00:30:18.814 }' 00:30:18.814 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:18.814 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:18.814 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:19.072 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:19.072 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:19.072 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:19.331 [2024-07-23 15:24:14.732046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:19.331 [2024-07-23 15:24:14.732133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:19.331 [2024-07-23 15:24:14.732167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:30:19.331 [2024-07-23 15:24:14.732183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:19.331 [2024-07-23 15:24:14.732625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:19.331 [2024-07-23 15:24:14.732662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:19.331 [2024-07-23 15:24:14.732735] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:19.331 [2024-07-23 15:24:14.732754] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:19.331 [2024-07-23 15:24:14.732771] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:19.331 BaseBdev1 00:30:19.331 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:30:20.708 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:20.708 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:20.708 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:20.708 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:20.708 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:20.708 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:20.708 15:24:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:20.708 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:20.708 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:20.708 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:20.708 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.708 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.708 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:20.708 "name": "raid_bdev1", 00:30:20.708 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:20.708 "strip_size_kb": 64, 00:30:20.708 "state": "online", 00:30:20.708 "raid_level": "raid5f", 00:30:20.708 "superblock": true, 00:30:20.708 "num_base_bdevs": 4, 00:30:20.708 "num_base_bdevs_discovered": 3, 00:30:20.708 "num_base_bdevs_operational": 3, 00:30:20.708 "base_bdevs_list": [ 00:30:20.708 { 00:30:20.708 "name": null, 00:30:20.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.708 "is_configured": false, 00:30:20.709 "data_offset": 2048, 00:30:20.709 "data_size": 63488 00:30:20.709 }, 00:30:20.709 { 00:30:20.709 "name": "BaseBdev2", 00:30:20.709 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:20.709 "is_configured": true, 00:30:20.709 "data_offset": 2048, 00:30:20.709 "data_size": 63488 00:30:20.709 }, 00:30:20.709 { 00:30:20.709 "name": "BaseBdev3", 00:30:20.709 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:20.709 "is_configured": true, 00:30:20.709 "data_offset": 2048, 00:30:20.709 "data_size": 63488 00:30:20.709 }, 00:30:20.709 { 00:30:20.709 "name": "BaseBdev4", 00:30:20.709 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:20.709 "is_configured": true, 00:30:20.709 "data_offset": 2048, 00:30:20.709 "data_size": 63488 00:30:20.709 } 00:30:20.709 ] 00:30:20.709 }' 00:30:20.709 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:20.709 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:20.967 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:20.967 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:20.967 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:20.967 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:20.967 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:20.967 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.967 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.967 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:20.967 "name": "raid_bdev1", 00:30:20.967 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:20.967 "strip_size_kb": 64, 00:30:20.967 "state": "online", 00:30:20.967 "raid_level": "raid5f", 00:30:20.967 "superblock": true, 
00:30:20.967 "num_base_bdevs": 4, 00:30:20.967 "num_base_bdevs_discovered": 3, 00:30:20.967 "num_base_bdevs_operational": 3, 00:30:20.967 "base_bdevs_list": [ 00:30:20.967 { 00:30:20.967 "name": null, 00:30:20.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.967 "is_configured": false, 00:30:20.967 "data_offset": 2048, 00:30:20.967 "data_size": 63488 00:30:20.967 }, 00:30:20.967 { 00:30:20.967 "name": "BaseBdev2", 00:30:20.967 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:20.967 "is_configured": true, 00:30:20.967 "data_offset": 2048, 00:30:20.967 "data_size": 63488 00:30:20.967 }, 00:30:20.967 { 00:30:20.967 "name": "BaseBdev3", 00:30:20.967 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:20.967 "is_configured": true, 00:30:20.967 "data_offset": 2048, 00:30:20.967 "data_size": 63488 00:30:20.967 }, 00:30:20.967 { 00:30:20.967 "name": "BaseBdev4", 00:30:20.967 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:20.967 "is_configured": true, 00:30:20.967 "data_offset": 2048, 00:30:20.967 "data_size": 63488 00:30:20.967 } 00:30:20.967 ] 00:30:20.967 }' 00:30:20.967 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:21.225 [2024-07-23 15:24:16.580506] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:21.225 [2024-07-23 15:24:16.580685] 
bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:21.225 [2024-07-23 15:24:16.580708] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:21.225 request: 00:30:21.225 { 00:30:21.225 "base_bdev": "BaseBdev1", 00:30:21.225 "raid_bdev": "raid_bdev1", 00:30:21.225 "method": "bdev_raid_add_base_bdev", 00:30:21.225 "req_id": 1 00:30:21.225 } 00:30:21.225 Got JSON-RPC error response 00:30:21.225 response: 00:30:21.225 { 00:30:21.225 "code": -22, 00:30:21.225 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:21.225 } 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:21.225 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:22.600 "name": "raid_bdev1", 00:30:22.600 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:22.600 "strip_size_kb": 64, 00:30:22.600 "state": "online", 00:30:22.600 "raid_level": "raid5f", 00:30:22.600 "superblock": true, 00:30:22.600 "num_base_bdevs": 4, 00:30:22.600 "num_base_bdevs_discovered": 3, 00:30:22.600 "num_base_bdevs_operational": 3, 00:30:22.600 "base_bdevs_list": [ 00:30:22.600 { 00:30:22.600 "name": null, 00:30:22.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.600 "is_configured": false, 00:30:22.600 "data_offset": 2048, 00:30:22.600 "data_size": 63488 00:30:22.600 }, 00:30:22.600 { 00:30:22.600 "name": "BaseBdev2", 00:30:22.600 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:22.600 "is_configured": true, 00:30:22.600 "data_offset": 2048, 00:30:22.600 
"data_size": 63488 00:30:22.600 }, 00:30:22.600 { 00:30:22.600 "name": "BaseBdev3", 00:30:22.600 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:22.600 "is_configured": true, 00:30:22.600 "data_offset": 2048, 00:30:22.600 "data_size": 63488 00:30:22.600 }, 00:30:22.600 { 00:30:22.600 "name": "BaseBdev4", 00:30:22.600 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:22.600 "is_configured": true, 00:30:22.600 "data_offset": 2048, 00:30:22.600 "data_size": 63488 00:30:22.600 } 00:30:22.600 ] 00:30:22.600 }' 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:22.600 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:22.859 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:22.859 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:22.859 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:22.859 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:22.859 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:22.859 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.859 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.117 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:23.117 "name": "raid_bdev1", 00:30:23.117 "uuid": "e39a4fc2-437d-4247-919b-e2d392e9b065", 00:30:23.117 "strip_size_kb": 64, 00:30:23.117 "state": "online", 00:30:23.117 "raid_level": "raid5f", 00:30:23.117 "superblock": true, 00:30:23.117 "num_base_bdevs": 4, 00:30:23.117 "num_base_bdevs_discovered": 3, 00:30:23.117 "num_base_bdevs_operational": 3, 00:30:23.117 "base_bdevs_list": [ 00:30:23.117 { 00:30:23.117 "name": null, 00:30:23.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.117 "is_configured": false, 00:30:23.117 "data_offset": 2048, 00:30:23.117 "data_size": 63488 00:30:23.117 }, 00:30:23.117 { 00:30:23.117 "name": "BaseBdev2", 00:30:23.117 "uuid": "a460ca4f-ad10-5cd7-83e8-cc73ec7ee418", 00:30:23.117 "is_configured": true, 00:30:23.117 "data_offset": 2048, 00:30:23.117 "data_size": 63488 00:30:23.117 }, 00:30:23.117 { 00:30:23.118 "name": "BaseBdev3", 00:30:23.118 "uuid": "58f676ac-04ff-5f84-9fb3-372cfef19ce7", 00:30:23.118 "is_configured": true, 00:30:23.118 "data_offset": 2048, 00:30:23.118 "data_size": 63488 00:30:23.118 }, 00:30:23.118 { 00:30:23.118 "name": "BaseBdev4", 00:30:23.118 "uuid": "e078017b-4999-5536-a8f2-de1c0e9057a5", 00:30:23.118 "is_configured": true, 00:30:23.118 "data_offset": 2048, 00:30:23.118 "data_size": 63488 00:30:23.118 } 00:30:23.118 ] 00:30:23.118 }' 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@782 -- # killprocess 119356 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 119356 ']' 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 119356 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119356 00:30:23.118 killing process with pid 119356 00:30:23.118 Received shutdown signal, test time was about 60.000000 seconds 00:30:23.118 00:30:23.118 Latency(us) 00:30:23.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.118 =================================================================================================================== 00:30:23.118 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119356' 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 119356 00:30:23.118 [2024-07-23 15:24:18.501293] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:23.118 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 119356 00:30:23.118 [2024-07-23 15:24:18.501429] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:23.118 [2024-07-23 15:24:18.501510] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:23.118 [2024-07-23 15:24:18.501525] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:30:23.376 [2024-07-23 15:24:18.554407] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:23.376 ************************************ 00:30:23.376 END TEST raid5f_rebuild_test_sb 00:30:23.376 ************************************ 00:30:23.376 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:30:23.376 00:30:23.376 real 0m32.754s 00:30:23.376 user 0m46.127s 00:30:23.376 sys 0m5.163s 00:30:23.376 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:23.376 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:23.635 15:24:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:23.635 15:24:18 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:30:23.635 15:24:18 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:30:23.635 15:24:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:30:23.635 15:24:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:23.635 15:24:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:23.635 ************************************ 00:30:23.635 START TEST raid_state_function_test_sb_4k 00:30:23.635 ************************************ 00:30:23.635 15:24:18 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:30:23.635 Process raid pid: 120252 00:30:23.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
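The "Process raid pid" and "Waiting for process to start up..." messages above come from the harness starting the bdev_svc application on a dedicated RPC socket and blocking until that socket answers (the traced launch command appears just below). A minimal stand-alone sketch of the same startup handshake, using the paths from this trace; the polling loop is a simplified stand-in for the harness's waitforlisten helper, and rpc_get_methods is assumed to be available as the liveness probe:

# Start the SPDK bdev_svc app on its own RPC socket, with raid debug logging enabled
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!

# Simplified stand-in for waitforlisten: poll until the RPC socket responds
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done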
00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=120252 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 120252' 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 120252 /var/tmp/spdk-raid.sock 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 120252 ']' 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:23.635 15:24:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:23.635 [2024-07-23 15:24:18.931867] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:30:23.635 [2024-07-23 15:24:18.932271] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.894 [2024-07-23 15:24:19.086737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.894 [2024-07-23 15:24:19.133506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.894 [2024-07-23 15:24:19.179067] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:24.461 15:24:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:24.461 15:24:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:30:24.461 15:24:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:24.720 [2024-07-23 15:24:20.009508] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:24.720 [2024-07-23 15:24:20.009723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:24.720 [2024-07-23 15:24:20.009839] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:24.720 [2024-07-23 15:24:20.009892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:24.720 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:24.720 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:24.720 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:24.720 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:24.720 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:24.720 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:24.720 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:24.720 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:24.720 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:24.720 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:24.720 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.720 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:24.978 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:24.978 "name": "Existed_Raid", 00:30:24.978 "uuid": "8d10af95-4ae0-46f4-9f12-e95d65f0ff8d", 00:30:24.978 "strip_size_kb": 0, 00:30:24.978 "state": "configuring", 00:30:24.978 "raid_level": "raid1", 00:30:24.978 "superblock": true, 00:30:24.978 "num_base_bdevs": 2, 00:30:24.978 
"num_base_bdevs_discovered": 0, 00:30:24.978 "num_base_bdevs_operational": 2, 00:30:24.978 "base_bdevs_list": [ 00:30:24.978 { 00:30:24.978 "name": "BaseBdev1", 00:30:24.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.978 "is_configured": false, 00:30:24.978 "data_offset": 0, 00:30:24.978 "data_size": 0 00:30:24.978 }, 00:30:24.978 { 00:30:24.978 "name": "BaseBdev2", 00:30:24.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.978 "is_configured": false, 00:30:24.978 "data_offset": 0, 00:30:24.978 "data_size": 0 00:30:24.978 } 00:30:24.978 ] 00:30:24.978 }' 00:30:24.978 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:24.978 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:25.249 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:25.523 [2024-07-23 15:24:20.825565] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:25.523 [2024-07-23 15:24:20.825826] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:30:25.523 15:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:25.782 [2024-07-23 15:24:21.001624] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:25.782 [2024-07-23 15:24:21.001905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:25.782 [2024-07-23 15:24:21.001927] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:25.782 [2024-07-23 15:24:21.001942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:25.782 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:30:25.782 [2024-07-23 15:24:21.183354] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:25.782 BaseBdev1 00:30:25.782 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:30:25.782 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:30:25.782 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:25.782 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:30:25.782 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:25.782 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:25.782 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:26.041 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:26.300 [ 00:30:26.300 { 00:30:26.300 "name": "BaseBdev1", 
00:30:26.300 "aliases": [ 00:30:26.300 "bd81c02e-ff65-4408-b5d8-fab3b28d383e" 00:30:26.300 ], 00:30:26.300 "product_name": "Malloc disk", 00:30:26.300 "block_size": 4096, 00:30:26.300 "num_blocks": 8192, 00:30:26.300 "uuid": "bd81c02e-ff65-4408-b5d8-fab3b28d383e", 00:30:26.300 "assigned_rate_limits": { 00:30:26.300 "rw_ios_per_sec": 0, 00:30:26.300 "rw_mbytes_per_sec": 0, 00:30:26.300 "r_mbytes_per_sec": 0, 00:30:26.300 "w_mbytes_per_sec": 0 00:30:26.300 }, 00:30:26.300 "claimed": true, 00:30:26.300 "claim_type": "exclusive_write", 00:30:26.300 "zoned": false, 00:30:26.300 "supported_io_types": { 00:30:26.300 "read": true, 00:30:26.300 "write": true, 00:30:26.300 "unmap": true, 00:30:26.300 "flush": true, 00:30:26.300 "reset": true, 00:30:26.300 "nvme_admin": false, 00:30:26.300 "nvme_io": false, 00:30:26.300 "nvme_io_md": false, 00:30:26.300 "write_zeroes": true, 00:30:26.300 "zcopy": true, 00:30:26.300 "get_zone_info": false, 00:30:26.300 "zone_management": false, 00:30:26.300 "zone_append": false, 00:30:26.300 "compare": false, 00:30:26.300 "compare_and_write": false, 00:30:26.300 "abort": true, 00:30:26.300 "seek_hole": false, 00:30:26.300 "seek_data": false, 00:30:26.300 "copy": true, 00:30:26.300 "nvme_iov_md": false 00:30:26.300 }, 00:30:26.300 "memory_domains": [ 00:30:26.300 { 00:30:26.300 "dma_device_id": "system", 00:30:26.300 "dma_device_type": 1 00:30:26.300 }, 00:30:26.300 { 00:30:26.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:26.300 "dma_device_type": 2 00:30:26.300 } 00:30:26.300 ], 00:30:26.300 "driver_specific": {} 00:30:26.300 } 00:30:26.300 ] 00:30:26.300 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:30:26.300 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:26.300 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:26.300 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:26.300 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:26.300 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:26.300 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:26.300 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:26.300 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:26.300 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:26.300 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:26.300 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:26.300 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.559 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:26.559 "name": "Existed_Raid", 00:30:26.559 "uuid": "0825bfb5-7de3-4338-ba2a-c2e97e0e3c24", 00:30:26.559 "strip_size_kb": 0, 00:30:26.559 "state": "configuring", 00:30:26.559 
"raid_level": "raid1", 00:30:26.559 "superblock": true, 00:30:26.559 "num_base_bdevs": 2, 00:30:26.559 "num_base_bdevs_discovered": 1, 00:30:26.559 "num_base_bdevs_operational": 2, 00:30:26.559 "base_bdevs_list": [ 00:30:26.559 { 00:30:26.559 "name": "BaseBdev1", 00:30:26.560 "uuid": "bd81c02e-ff65-4408-b5d8-fab3b28d383e", 00:30:26.560 "is_configured": true, 00:30:26.560 "data_offset": 256, 00:30:26.560 "data_size": 7936 00:30:26.560 }, 00:30:26.560 { 00:30:26.560 "name": "BaseBdev2", 00:30:26.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.560 "is_configured": false, 00:30:26.560 "data_offset": 0, 00:30:26.560 "data_size": 0 00:30:26.560 } 00:30:26.560 ] 00:30:26.560 }' 00:30:26.560 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:26.560 15:24:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:26.818 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:27.077 [2024-07-23 15:24:22.383748] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:27.077 [2024-07-23 15:24:22.384006] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:30:27.077 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:27.335 [2024-07-23 15:24:22.571868] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:27.335 [2024-07-23 15:24:22.574186] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:27.335 [2024-07-23 15:24:22.574347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:27.335 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:30:27.335 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:27.335 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:27.336 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:27.336 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:27.336 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:27.336 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:27.336 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:27.336 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:27.336 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:27.336 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:27.336 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:27.336 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.336 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:27.594 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:27.594 "name": "Existed_Raid", 00:30:27.594 "uuid": "dc4741f9-2027-4679-83c7-bc3f3ce89034", 00:30:27.594 "strip_size_kb": 0, 00:30:27.594 "state": "configuring", 00:30:27.594 "raid_level": "raid1", 00:30:27.594 "superblock": true, 00:30:27.594 "num_base_bdevs": 2, 00:30:27.594 "num_base_bdevs_discovered": 1, 00:30:27.594 "num_base_bdevs_operational": 2, 00:30:27.594 "base_bdevs_list": [ 00:30:27.594 { 00:30:27.594 "name": "BaseBdev1", 00:30:27.594 "uuid": "bd81c02e-ff65-4408-b5d8-fab3b28d383e", 00:30:27.594 "is_configured": true, 00:30:27.594 "data_offset": 256, 00:30:27.594 "data_size": 7936 00:30:27.594 }, 00:30:27.594 { 00:30:27.594 "name": "BaseBdev2", 00:30:27.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.594 "is_configured": false, 00:30:27.594 "data_offset": 0, 00:30:27.594 "data_size": 0 00:30:27.594 } 00:30:27.594 ] 00:30:27.594 }' 00:30:27.594 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:27.594 15:24:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:27.853 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:30:28.113 [2024-07-23 15:24:23.427593] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:28.113 [2024-07-23 15:24:23.427862] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:30:28.113 [2024-07-23 15:24:23.427893] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:28.113 [2024-07-23 15:24:23.428010] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:30:28.113 [2024-07-23 15:24:23.428427] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:30:28.113 [2024-07-23 15:24:23.428453] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:30:28.113 [2024-07-23 15:24:23.428581] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:28.113 BaseBdev2 00:30:28.113 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:30:28.113 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:30:28.113 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:28.113 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:30:28.113 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:28.113 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:28.113 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:28.372 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:28.631 [ 00:30:28.631 { 00:30:28.631 "name": "BaseBdev2", 00:30:28.631 "aliases": [ 00:30:28.631 "65e60366-4c80-4446-9d58-5c702ad0e415" 00:30:28.631 ], 00:30:28.631 "product_name": "Malloc disk", 00:30:28.631 "block_size": 4096, 00:30:28.631 "num_blocks": 8192, 00:30:28.631 "uuid": "65e60366-4c80-4446-9d58-5c702ad0e415", 00:30:28.631 "assigned_rate_limits": { 00:30:28.631 "rw_ios_per_sec": 0, 00:30:28.631 "rw_mbytes_per_sec": 0, 00:30:28.631 "r_mbytes_per_sec": 0, 00:30:28.631 "w_mbytes_per_sec": 0 00:30:28.631 }, 00:30:28.631 "claimed": true, 00:30:28.631 "claim_type": "exclusive_write", 00:30:28.631 "zoned": false, 00:30:28.631 "supported_io_types": { 00:30:28.631 "read": true, 00:30:28.631 "write": true, 00:30:28.631 "unmap": true, 00:30:28.631 "flush": true, 00:30:28.631 "reset": true, 00:30:28.631 "nvme_admin": false, 00:30:28.631 "nvme_io": false, 00:30:28.631 "nvme_io_md": false, 00:30:28.631 "write_zeroes": true, 00:30:28.631 "zcopy": true, 00:30:28.631 "get_zone_info": false, 00:30:28.631 "zone_management": false, 00:30:28.631 "zone_append": false, 00:30:28.631 "compare": false, 00:30:28.631 "compare_and_write": false, 00:30:28.631 "abort": true, 00:30:28.631 "seek_hole": false, 00:30:28.631 "seek_data": false, 00:30:28.631 "copy": true, 00:30:28.631 "nvme_iov_md": false 00:30:28.631 }, 00:30:28.631 "memory_domains": [ 00:30:28.631 { 00:30:28.631 "dma_device_id": "system", 00:30:28.631 "dma_device_type": 1 00:30:28.631 }, 00:30:28.631 { 00:30:28.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:28.631 "dma_device_type": 2 00:30:28.631 } 00:30:28.631 ], 00:30:28.631 "driver_specific": {} 00:30:28.631 } 00:30:28.631 ] 00:30:28.631 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:30:28.631 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:28.631 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:28.631 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:30:28.631 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:28.631 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:28.631 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:28.631 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:28.631 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:28.631 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:28.631 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:28.631 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:28.631 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:28.631 15:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.631 15:24:23 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:28.631 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:28.631 "name": "Existed_Raid", 00:30:28.631 "uuid": "dc4741f9-2027-4679-83c7-bc3f3ce89034", 00:30:28.631 "strip_size_kb": 0, 00:30:28.631 "state": "online", 00:30:28.631 "raid_level": "raid1", 00:30:28.631 "superblock": true, 00:30:28.631 "num_base_bdevs": 2, 00:30:28.631 "num_base_bdevs_discovered": 2, 00:30:28.631 "num_base_bdevs_operational": 2, 00:30:28.631 "base_bdevs_list": [ 00:30:28.631 { 00:30:28.631 "name": "BaseBdev1", 00:30:28.631 "uuid": "bd81c02e-ff65-4408-b5d8-fab3b28d383e", 00:30:28.631 "is_configured": true, 00:30:28.631 "data_offset": 256, 00:30:28.631 "data_size": 7936 00:30:28.631 }, 00:30:28.631 { 00:30:28.631 "name": "BaseBdev2", 00:30:28.631 "uuid": "65e60366-4c80-4446-9d58-5c702ad0e415", 00:30:28.631 "is_configured": true, 00:30:28.631 "data_offset": 256, 00:30:28.631 "data_size": 7936 00:30:28.631 } 00:30:28.631 ] 00:30:28.631 }' 00:30:28.631 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:28.631 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:29.198 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:30:29.198 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:29.198 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:29.198 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:29.198 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:29.198 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:30:29.198 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:29.198 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:29.199 [2024-07-23 15:24:24.548252] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:29.199 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:29.199 "name": "Existed_Raid", 00:30:29.199 "aliases": [ 00:30:29.199 "dc4741f9-2027-4679-83c7-bc3f3ce89034" 00:30:29.199 ], 00:30:29.199 "product_name": "Raid Volume", 00:30:29.199 "block_size": 4096, 00:30:29.199 "num_blocks": 7936, 00:30:29.199 "uuid": "dc4741f9-2027-4679-83c7-bc3f3ce89034", 00:30:29.199 "assigned_rate_limits": { 00:30:29.199 "rw_ios_per_sec": 0, 00:30:29.199 "rw_mbytes_per_sec": 0, 00:30:29.199 "r_mbytes_per_sec": 0, 00:30:29.199 "w_mbytes_per_sec": 0 00:30:29.199 }, 00:30:29.199 "claimed": false, 00:30:29.199 "zoned": false, 00:30:29.199 "supported_io_types": { 00:30:29.199 "read": true, 00:30:29.199 "write": true, 00:30:29.199 "unmap": false, 00:30:29.199 "flush": false, 00:30:29.199 "reset": true, 00:30:29.199 "nvme_admin": false, 00:30:29.199 "nvme_io": false, 00:30:29.199 "nvme_io_md": false, 00:30:29.199 "write_zeroes": true, 00:30:29.199 "zcopy": false, 00:30:29.199 "get_zone_info": false, 00:30:29.199 "zone_management": false, 00:30:29.199 
"zone_append": false, 00:30:29.199 "compare": false, 00:30:29.199 "compare_and_write": false, 00:30:29.199 "abort": false, 00:30:29.199 "seek_hole": false, 00:30:29.199 "seek_data": false, 00:30:29.199 "copy": false, 00:30:29.199 "nvme_iov_md": false 00:30:29.199 }, 00:30:29.199 "memory_domains": [ 00:30:29.199 { 00:30:29.199 "dma_device_id": "system", 00:30:29.199 "dma_device_type": 1 00:30:29.199 }, 00:30:29.199 { 00:30:29.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:29.199 "dma_device_type": 2 00:30:29.199 }, 00:30:29.199 { 00:30:29.199 "dma_device_id": "system", 00:30:29.199 "dma_device_type": 1 00:30:29.199 }, 00:30:29.199 { 00:30:29.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:29.199 "dma_device_type": 2 00:30:29.199 } 00:30:29.199 ], 00:30:29.199 "driver_specific": { 00:30:29.199 "raid": { 00:30:29.199 "uuid": "dc4741f9-2027-4679-83c7-bc3f3ce89034", 00:30:29.199 "strip_size_kb": 0, 00:30:29.199 "state": "online", 00:30:29.199 "raid_level": "raid1", 00:30:29.199 "superblock": true, 00:30:29.199 "num_base_bdevs": 2, 00:30:29.199 "num_base_bdevs_discovered": 2, 00:30:29.199 "num_base_bdevs_operational": 2, 00:30:29.199 "base_bdevs_list": [ 00:30:29.199 { 00:30:29.199 "name": "BaseBdev1", 00:30:29.199 "uuid": "bd81c02e-ff65-4408-b5d8-fab3b28d383e", 00:30:29.199 "is_configured": true, 00:30:29.199 "data_offset": 256, 00:30:29.199 "data_size": 7936 00:30:29.199 }, 00:30:29.199 { 00:30:29.199 "name": "BaseBdev2", 00:30:29.199 "uuid": "65e60366-4c80-4446-9d58-5c702ad0e415", 00:30:29.199 "is_configured": true, 00:30:29.199 "data_offset": 256, 00:30:29.199 "data_size": 7936 00:30:29.199 } 00:30:29.199 ] 00:30:29.199 } 00:30:29.199 } 00:30:29.199 }' 00:30:29.199 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:29.199 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:30:29.199 BaseBdev2' 00:30:29.199 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:29.199 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:29.199 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:30:29.457 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:29.457 "name": "BaseBdev1", 00:30:29.457 "aliases": [ 00:30:29.457 "bd81c02e-ff65-4408-b5d8-fab3b28d383e" 00:30:29.457 ], 00:30:29.457 "product_name": "Malloc disk", 00:30:29.457 "block_size": 4096, 00:30:29.457 "num_blocks": 8192, 00:30:29.457 "uuid": "bd81c02e-ff65-4408-b5d8-fab3b28d383e", 00:30:29.457 "assigned_rate_limits": { 00:30:29.457 "rw_ios_per_sec": 0, 00:30:29.457 "rw_mbytes_per_sec": 0, 00:30:29.457 "r_mbytes_per_sec": 0, 00:30:29.457 "w_mbytes_per_sec": 0 00:30:29.457 }, 00:30:29.457 "claimed": true, 00:30:29.457 "claim_type": "exclusive_write", 00:30:29.457 "zoned": false, 00:30:29.457 "supported_io_types": { 00:30:29.457 "read": true, 00:30:29.457 "write": true, 00:30:29.457 "unmap": true, 00:30:29.457 "flush": true, 00:30:29.457 "reset": true, 00:30:29.457 "nvme_admin": false, 00:30:29.457 "nvme_io": false, 00:30:29.457 "nvme_io_md": false, 00:30:29.457 "write_zeroes": true, 00:30:29.457 "zcopy": true, 00:30:29.457 "get_zone_info": false, 00:30:29.457 "zone_management": false, 
00:30:29.457 "zone_append": false, 00:30:29.457 "compare": false, 00:30:29.457 "compare_and_write": false, 00:30:29.457 "abort": true, 00:30:29.457 "seek_hole": false, 00:30:29.457 "seek_data": false, 00:30:29.457 "copy": true, 00:30:29.457 "nvme_iov_md": false 00:30:29.457 }, 00:30:29.457 "memory_domains": [ 00:30:29.457 { 00:30:29.457 "dma_device_id": "system", 00:30:29.457 "dma_device_type": 1 00:30:29.457 }, 00:30:29.457 { 00:30:29.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:29.457 "dma_device_type": 2 00:30:29.457 } 00:30:29.457 ], 00:30:29.457 "driver_specific": {} 00:30:29.457 }' 00:30:29.457 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:29.457 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:29.457 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:29.457 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:29.457 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:29.457 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:29.457 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:29.458 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:29.458 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:29.458 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:29.458 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:29.458 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:29.458 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:29.458 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:29.458 15:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:29.716 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:29.716 "name": "BaseBdev2", 00:30:29.716 "aliases": [ 00:30:29.716 "65e60366-4c80-4446-9d58-5c702ad0e415" 00:30:29.716 ], 00:30:29.716 "product_name": "Malloc disk", 00:30:29.716 "block_size": 4096, 00:30:29.716 "num_blocks": 8192, 00:30:29.716 "uuid": "65e60366-4c80-4446-9d58-5c702ad0e415", 00:30:29.716 "assigned_rate_limits": { 00:30:29.716 "rw_ios_per_sec": 0, 00:30:29.717 "rw_mbytes_per_sec": 0, 00:30:29.717 "r_mbytes_per_sec": 0, 00:30:29.717 "w_mbytes_per_sec": 0 00:30:29.717 }, 00:30:29.717 "claimed": true, 00:30:29.717 "claim_type": "exclusive_write", 00:30:29.717 "zoned": false, 00:30:29.717 "supported_io_types": { 00:30:29.717 "read": true, 00:30:29.717 "write": true, 00:30:29.717 "unmap": true, 00:30:29.717 "flush": true, 00:30:29.717 "reset": true, 00:30:29.717 "nvme_admin": false, 00:30:29.717 "nvme_io": false, 00:30:29.717 "nvme_io_md": false, 00:30:29.717 "write_zeroes": true, 00:30:29.717 "zcopy": true, 00:30:29.717 "get_zone_info": false, 00:30:29.717 "zone_management": false, 00:30:29.717 "zone_append": false, 00:30:29.717 "compare": false, 00:30:29.717 "compare_and_write": 
false, 00:30:29.717 "abort": true, 00:30:29.717 "seek_hole": false, 00:30:29.717 "seek_data": false, 00:30:29.717 "copy": true, 00:30:29.717 "nvme_iov_md": false 00:30:29.717 }, 00:30:29.717 "memory_domains": [ 00:30:29.717 { 00:30:29.717 "dma_device_id": "system", 00:30:29.717 "dma_device_type": 1 00:30:29.717 }, 00:30:29.717 { 00:30:29.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:29.717 "dma_device_type": 2 00:30:29.717 } 00:30:29.717 ], 00:30:29.717 "driver_specific": {} 00:30:29.717 }' 00:30:29.717 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:29.717 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:29.717 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:29.717 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:29.717 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:29.717 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:29.717 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:29.717 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:29.717 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:29.717 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:29.717 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:29.717 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:29.717 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:29.975 [2024-07-23 15:24:25.296230] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:29.975 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:30:29.975 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:30:29.975 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:29.975 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:30:29.975 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:30:29.975 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:30:29.975 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:29.975 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:29.975 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:29.975 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:29.975 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:29.976 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:29.976 15:24:25 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:29.976 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:29.976 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:29.976 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.976 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:30.234 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:30.235 "name": "Existed_Raid", 00:30:30.235 "uuid": "dc4741f9-2027-4679-83c7-bc3f3ce89034", 00:30:30.235 "strip_size_kb": 0, 00:30:30.235 "state": "online", 00:30:30.235 "raid_level": "raid1", 00:30:30.235 "superblock": true, 00:30:30.235 "num_base_bdevs": 2, 00:30:30.235 "num_base_bdevs_discovered": 1, 00:30:30.235 "num_base_bdevs_operational": 1, 00:30:30.235 "base_bdevs_list": [ 00:30:30.235 { 00:30:30.235 "name": null, 00:30:30.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.235 "is_configured": false, 00:30:30.235 "data_offset": 256, 00:30:30.235 "data_size": 7936 00:30:30.235 }, 00:30:30.235 { 00:30:30.235 "name": "BaseBdev2", 00:30:30.235 "uuid": "65e60366-4c80-4446-9d58-5c702ad0e415", 00:30:30.235 "is_configured": true, 00:30:30.235 "data_offset": 256, 00:30:30.235 "data_size": 7936 00:30:30.235 } 00:30:30.235 ] 00:30:30.235 }' 00:30:30.235 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:30.235 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:30.494 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:30:30.494 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:30.494 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.494 15:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:30.753 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:30.753 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:30.753 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:31.011 [2024-07-23 15:24:26.245201] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:31.011 [2024-07-23 15:24:26.245469] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:31.011 [2024-07-23 15:24:26.258233] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:31.012 [2024-07-23 15:24:26.258451] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:31.012 [2024-07-23 15:24:26.258481] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:30:31.012 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:31.012 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:31.012 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:31.012 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:30:31.270 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:30:31.270 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:30:31.270 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:30:31.270 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 120252 00:30:31.271 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 120252 ']' 00:30:31.271 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 120252 00:30:31.271 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:30:31.271 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:31.271 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120252 00:30:31.271 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:31.271 killing process with pid 120252 00:30:31.271 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:31.271 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120252' 00:30:31.271 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 120252 00:30:31.271 [2024-07-23 15:24:26.489708] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:31.271 [2024-07-23 15:24:26.489798] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:31.271 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 120252 00:30:31.530 ************************************ 00:30:31.530 END TEST raid_state_function_test_sb_4k 00:30:31.530 ************************************ 00:30:31.530 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:30:31.530 00:30:31.530 real 0m7.883s 00:30:31.530 user 0m13.135s 00:30:31.530 sys 0m1.767s 00:30:31.530 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:31.530 15:24:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:31.530 15:24:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:31.530 15:24:26 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:30:31.530 15:24:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:30:31.530 15:24:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.530 15:24:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:31.530 ************************************ 00:30:31.530 START TEST raid_superblock_test_4k 00:30:31.530 ************************************ 
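Condensed, the raid_state_function_test_sb_4k run that just ended reduces to the RPC sequence sketched below, issued against the already-running bdev_svc socket. Only commands and JSON fields visible in the trace above are used; the rpc_py variable is shorthand introduced here, and the jq checks mirror what the verify_raid_bdev_state helper does internally:

rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Create a raid1 bdev with superblock (-s) before its base bdevs exist: state stays "configuring"
$rpc_py bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# Back it with two malloc bdevs (32 MiB at 4096-byte blocks, i.e. 8192 blocks each): state becomes "online"
$rpc_py bdev_malloc_create 32 4096 -b BaseBdev1
$rpc_py bdev_malloc_create 32 4096 -b BaseBdev2
$rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'

# Drop one base bdev: raid1 is redundant, so the array stays online with a single operational member
$rpc_py bdev_malloc_delete BaseBdev1
$rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").num_base_bdevs_operational'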
00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:30:31.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=120571 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 120571 /var/tmp/spdk-raid.sock 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 120571 ']' 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:31.530 15:24:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:30:31.530 [2024-07-23 15:24:26.872382] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
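
Compressed to its essentials, the setup traced above starts the bdev_svc stub application with a private JSON-RPC socket and bdev_raid debug logging, then waits until that socket answers RPCs before the test proceeds. A minimal stand-in for this launch-and-wait step, using the paths shown in the log (the real waitforlisten helper also checks the process state via /proc and applies a retry limit):

rpc_sock=/var/tmp/spdk-raid.sock

# Start the stub bdev application with a dedicated RPC socket and raid debug logs.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -L bdev_raid &
raid_pid=$!

# Simplified waitforlisten: poll until the socket accepts a JSON-RPC request.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$raid_pid" || exit 1   # bail out if the app died during startup
    sleep 0.2
done
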
00:30:31.530 [2024-07-23 15:24:26.872890] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120571 ] 00:30:31.788 [2024-07-23 15:24:27.021962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.788 [2024-07-23 15:24:27.069575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.788 [2024-07-23 15:24:27.115066] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:32.354 15:24:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:32.354 15:24:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:30:32.354 15:24:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:30:32.354 15:24:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:30:32.354 15:24:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:30:32.354 15:24:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:30:32.354 15:24:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:32.354 15:24:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:32.354 15:24:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:30:32.354 15:24:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:32.354 15:24:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:30:32.612 malloc1 00:30:32.612 15:24:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:32.871 [2024-07-23 15:24:28.111332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:32.871 [2024-07-23 15:24:28.111644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:32.871 [2024-07-23 15:24:28.111778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:30:32.871 [2024-07-23 15:24:28.111899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:32.871 [2024-07-23 15:24:28.114462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:32.871 [2024-07-23 15:24:28.114632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:32.871 pt1 00:30:32.871 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:30:32.871 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:30:32.871 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:30:32.871 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:30:32.871 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:32.871 15:24:28 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:32.871 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:30:32.871 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:32.871 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:30:33.130 malloc2 00:30:33.130 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:33.417 [2024-07-23 15:24:28.565155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:33.417 [2024-07-23 15:24:28.565443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:33.417 [2024-07-23 15:24:28.565478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:30:33.417 [2024-07-23 15:24:28.565496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:33.417 [2024-07-23 15:24:28.568000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:33.417 [2024-07-23 15:24:28.568045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:33.417 pt2 00:30:33.417 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:30:33.418 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:30:33.418 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:30:33.418 [2024-07-23 15:24:28.737293] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:33.418 [2024-07-23 15:24:28.739488] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:33.418 [2024-07-23 15:24:28.739692] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006c80 00:30:33.418 [2024-07-23 15:24:28.739715] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:33.418 [2024-07-23 15:24:28.739850] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:30:33.418 [2024-07-23 15:24:28.740205] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006c80 00:30:33.418 [2024-07-23 15:24:28.740224] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000006c80 00:30:33.418 [2024-07-23 15:24:28.740358] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:33.418 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:33.418 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:33.418 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:33.418 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:33.418 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
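
In condensed form, the RPC sequence traced above creates two 32 MiB malloc bdevs with a 4 KiB block size (8192 blocks each, matching num_blocks in the dumps below), wraps each in a passthru bdev with a fixed UUID, and assembles them into a RAID1 volume with an on-disk superblock (-s). The commands are taken verbatim from the xtrace; only the shell variables are added for brevity:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Two malloc base devices: 32 MiB total, 4096-byte blocks.
$rpc -s $sock bdev_malloc_create 32 4096 -b malloc1
$rpc -s $sock bdev_malloc_create 32 4096 -b malloc2

# Passthru wrappers with fixed UUIDs (the same UUIDs that appear in base_bdevs_list).
$rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# RAID1 over the two passthru bdevs; -s writes a superblock to each base bdev.
$rpc -s $sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
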
00:30:33.418 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:33.418 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:33.418 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:33.418 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:33.418 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:33.418 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:33.418 15:24:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:33.676 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:33.676 "name": "raid_bdev1", 00:30:33.676 "uuid": "7c7d43fe-c812-4051-9fde-b1d493d59070", 00:30:33.676 "strip_size_kb": 0, 00:30:33.676 "state": "online", 00:30:33.676 "raid_level": "raid1", 00:30:33.676 "superblock": true, 00:30:33.676 "num_base_bdevs": 2, 00:30:33.676 "num_base_bdevs_discovered": 2, 00:30:33.676 "num_base_bdevs_operational": 2, 00:30:33.676 "base_bdevs_list": [ 00:30:33.676 { 00:30:33.676 "name": "pt1", 00:30:33.676 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:33.676 "is_configured": true, 00:30:33.676 "data_offset": 256, 00:30:33.676 "data_size": 7936 00:30:33.676 }, 00:30:33.676 { 00:30:33.676 "name": "pt2", 00:30:33.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:33.676 "is_configured": true, 00:30:33.676 "data_offset": 256, 00:30:33.676 "data_size": 7936 00:30:33.676 } 00:30:33.676 ] 00:30:33.676 }' 00:30:33.676 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:33.676 15:24:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:33.935 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:30:33.935 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:30:33.935 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:33.935 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:33.935 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:33.935 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:30:33.935 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:33.935 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:34.193 [2024-07-23 15:24:29.553691] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:34.193 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:34.193 "name": "raid_bdev1", 00:30:34.193 "aliases": [ 00:30:34.193 "7c7d43fe-c812-4051-9fde-b1d493d59070" 00:30:34.193 ], 00:30:34.193 "product_name": "Raid Volume", 00:30:34.193 "block_size": 4096, 00:30:34.193 "num_blocks": 7936, 00:30:34.193 "uuid": "7c7d43fe-c812-4051-9fde-b1d493d59070", 00:30:34.193 "assigned_rate_limits": { 00:30:34.193 
"rw_ios_per_sec": 0, 00:30:34.193 "rw_mbytes_per_sec": 0, 00:30:34.193 "r_mbytes_per_sec": 0, 00:30:34.193 "w_mbytes_per_sec": 0 00:30:34.193 }, 00:30:34.193 "claimed": false, 00:30:34.193 "zoned": false, 00:30:34.193 "supported_io_types": { 00:30:34.193 "read": true, 00:30:34.193 "write": true, 00:30:34.193 "unmap": false, 00:30:34.193 "flush": false, 00:30:34.193 "reset": true, 00:30:34.193 "nvme_admin": false, 00:30:34.193 "nvme_io": false, 00:30:34.193 "nvme_io_md": false, 00:30:34.193 "write_zeroes": true, 00:30:34.193 "zcopy": false, 00:30:34.193 "get_zone_info": false, 00:30:34.193 "zone_management": false, 00:30:34.193 "zone_append": false, 00:30:34.193 "compare": false, 00:30:34.193 "compare_and_write": false, 00:30:34.193 "abort": false, 00:30:34.194 "seek_hole": false, 00:30:34.194 "seek_data": false, 00:30:34.194 "copy": false, 00:30:34.194 "nvme_iov_md": false 00:30:34.194 }, 00:30:34.194 "memory_domains": [ 00:30:34.194 { 00:30:34.194 "dma_device_id": "system", 00:30:34.194 "dma_device_type": 1 00:30:34.194 }, 00:30:34.194 { 00:30:34.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:34.194 "dma_device_type": 2 00:30:34.194 }, 00:30:34.194 { 00:30:34.194 "dma_device_id": "system", 00:30:34.194 "dma_device_type": 1 00:30:34.194 }, 00:30:34.194 { 00:30:34.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:34.194 "dma_device_type": 2 00:30:34.194 } 00:30:34.194 ], 00:30:34.194 "driver_specific": { 00:30:34.194 "raid": { 00:30:34.194 "uuid": "7c7d43fe-c812-4051-9fde-b1d493d59070", 00:30:34.194 "strip_size_kb": 0, 00:30:34.194 "state": "online", 00:30:34.194 "raid_level": "raid1", 00:30:34.194 "superblock": true, 00:30:34.194 "num_base_bdevs": 2, 00:30:34.194 "num_base_bdevs_discovered": 2, 00:30:34.194 "num_base_bdevs_operational": 2, 00:30:34.194 "base_bdevs_list": [ 00:30:34.194 { 00:30:34.194 "name": "pt1", 00:30:34.194 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:34.194 "is_configured": true, 00:30:34.194 "data_offset": 256, 00:30:34.194 "data_size": 7936 00:30:34.194 }, 00:30:34.194 { 00:30:34.194 "name": "pt2", 00:30:34.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:34.194 "is_configured": true, 00:30:34.194 "data_offset": 256, 00:30:34.194 "data_size": 7936 00:30:34.194 } 00:30:34.194 ] 00:30:34.194 } 00:30:34.194 } 00:30:34.194 }' 00:30:34.194 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:34.194 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:30:34.194 pt2' 00:30:34.194 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:34.194 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:34.194 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:34.452 "name": "pt1", 00:30:34.452 "aliases": [ 00:30:34.452 "00000000-0000-0000-0000-000000000001" 00:30:34.452 ], 00:30:34.452 "product_name": "passthru", 00:30:34.452 "block_size": 4096, 00:30:34.452 "num_blocks": 8192, 00:30:34.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:34.452 "assigned_rate_limits": { 00:30:34.452 "rw_ios_per_sec": 0, 00:30:34.452 "rw_mbytes_per_sec": 0, 00:30:34.452 "r_mbytes_per_sec": 0, 00:30:34.452 
"w_mbytes_per_sec": 0 00:30:34.452 }, 00:30:34.452 "claimed": true, 00:30:34.452 "claim_type": "exclusive_write", 00:30:34.452 "zoned": false, 00:30:34.452 "supported_io_types": { 00:30:34.452 "read": true, 00:30:34.452 "write": true, 00:30:34.452 "unmap": true, 00:30:34.452 "flush": true, 00:30:34.452 "reset": true, 00:30:34.452 "nvme_admin": false, 00:30:34.452 "nvme_io": false, 00:30:34.452 "nvme_io_md": false, 00:30:34.452 "write_zeroes": true, 00:30:34.452 "zcopy": true, 00:30:34.452 "get_zone_info": false, 00:30:34.452 "zone_management": false, 00:30:34.452 "zone_append": false, 00:30:34.452 "compare": false, 00:30:34.452 "compare_and_write": false, 00:30:34.452 "abort": true, 00:30:34.452 "seek_hole": false, 00:30:34.452 "seek_data": false, 00:30:34.452 "copy": true, 00:30:34.452 "nvme_iov_md": false 00:30:34.452 }, 00:30:34.452 "memory_domains": [ 00:30:34.452 { 00:30:34.452 "dma_device_id": "system", 00:30:34.452 "dma_device_type": 1 00:30:34.452 }, 00:30:34.452 { 00:30:34.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:34.452 "dma_device_type": 2 00:30:34.452 } 00:30:34.452 ], 00:30:34.452 "driver_specific": { 00:30:34.452 "passthru": { 00:30:34.452 "name": "pt1", 00:30:34.452 "base_bdev_name": "malloc1" 00:30:34.452 } 00:30:34.452 } 00:30:34.452 }' 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:30:34.452 15:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:34.711 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:34.711 "name": "pt2", 00:30:34.711 "aliases": [ 00:30:34.711 "00000000-0000-0000-0000-000000000002" 00:30:34.711 ], 00:30:34.711 "product_name": "passthru", 00:30:34.711 "block_size": 4096, 00:30:34.711 "num_blocks": 8192, 00:30:34.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:34.711 "assigned_rate_limits": { 00:30:34.711 "rw_ios_per_sec": 0, 00:30:34.711 "rw_mbytes_per_sec": 0, 00:30:34.711 "r_mbytes_per_sec": 0, 00:30:34.711 "w_mbytes_per_sec": 0 00:30:34.711 }, 00:30:34.711 "claimed": true, 00:30:34.711 "claim_type": 
"exclusive_write", 00:30:34.711 "zoned": false, 00:30:34.711 "supported_io_types": { 00:30:34.711 "read": true, 00:30:34.711 "write": true, 00:30:34.711 "unmap": true, 00:30:34.711 "flush": true, 00:30:34.711 "reset": true, 00:30:34.711 "nvme_admin": false, 00:30:34.711 "nvme_io": false, 00:30:34.711 "nvme_io_md": false, 00:30:34.711 "write_zeroes": true, 00:30:34.711 "zcopy": true, 00:30:34.711 "get_zone_info": false, 00:30:34.711 "zone_management": false, 00:30:34.711 "zone_append": false, 00:30:34.711 "compare": false, 00:30:34.711 "compare_and_write": false, 00:30:34.711 "abort": true, 00:30:34.711 "seek_hole": false, 00:30:34.711 "seek_data": false, 00:30:34.711 "copy": true, 00:30:34.711 "nvme_iov_md": false 00:30:34.711 }, 00:30:34.711 "memory_domains": [ 00:30:34.711 { 00:30:34.711 "dma_device_id": "system", 00:30:34.711 "dma_device_type": 1 00:30:34.711 }, 00:30:34.711 { 00:30:34.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:34.712 "dma_device_type": 2 00:30:34.712 } 00:30:34.712 ], 00:30:34.712 "driver_specific": { 00:30:34.712 "passthru": { 00:30:34.712 "name": "pt2", 00:30:34.712 "base_bdev_name": "malloc2" 00:30:34.712 } 00:30:34.712 } 00:30:34.712 }' 00:30:34.970 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:34.970 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:34.970 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:34.970 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:34.971 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:34.971 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:34.971 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:34.971 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:34.971 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:34.971 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:34.971 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:34.971 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:34.971 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:34.971 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:30:35.229 [2024-07-23 15:24:30.477864] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:35.229 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=7c7d43fe-c812-4051-9fde-b1d493d59070 00:30:35.229 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 7c7d43fe-c812-4051-9fde-b1d493d59070 ']' 00:30:35.229 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:35.488 [2024-07-23 15:24:30.737603] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:35.488 [2024-07-23 15:24:30.737658] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:35.488 
[2024-07-23 15:24:30.737750] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:35.488 [2024-07-23 15:24:30.737843] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:35.488 [2024-07-23 15:24:30.737862] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006c80 name raid_bdev1, state offline 00:30:35.488 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.488 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:30:35.747 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:30:35.747 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:30:35.747 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:30:35.747 15:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:30:35.747 15:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:30:35.747 15:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:36.006 15:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:30:36.006 15:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:30:36.265 [2024-07-23 15:24:31.625839] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:30:36.265 [2024-07-23 15:24:31.627996] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:30:36.265 [2024-07-23 15:24:31.628074] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:30:36.265 [2024-07-23 15:24:31.628133] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:30:36.265 [2024-07-23 15:24:31.628155] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:36.265 [2024-07-23 15:24:31.628165] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name raid_bdev1, state configuring 00:30:36.265 request: 00:30:36.265 { 00:30:36.265 "name": "raid_bdev1", 00:30:36.265 "raid_level": "raid1", 00:30:36.265 "base_bdevs": [ 00:30:36.265 "malloc1", 00:30:36.265 "malloc2" 00:30:36.265 ], 00:30:36.265 "superblock": false, 00:30:36.265 "method": "bdev_raid_create", 00:30:36.265 "req_id": 1 00:30:36.265 } 00:30:36.265 Got JSON-RPC error response 00:30:36.265 response: 00:30:36.265 { 00:30:36.265 "code": -17, 00:30:36.265 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:30:36.265 } 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.265 15:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:30:36.523 15:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:30:36.523 15:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:30:36.523 15:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:36.782 [2024-07-23 15:24:31.985887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:36.782 [2024-07-23 15:24:31.986151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:36.782 [2024-07-23 15:24:31.986214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:30:36.782 [2024-07-23 15:24:31.986289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:36.782 [2024-07-23 15:24:31.988847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:36.782 [2024-07-23 15:24:31.988987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:36.782 [2024-07-23 15:24:31.989144] 
bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:36.782 [2024-07-23 15:24:31.989215] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:36.782 pt1 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:36.782 "name": "raid_bdev1", 00:30:36.782 "uuid": "7c7d43fe-c812-4051-9fde-b1d493d59070", 00:30:36.782 "strip_size_kb": 0, 00:30:36.782 "state": "configuring", 00:30:36.782 "raid_level": "raid1", 00:30:36.782 "superblock": true, 00:30:36.782 "num_base_bdevs": 2, 00:30:36.782 "num_base_bdevs_discovered": 1, 00:30:36.782 "num_base_bdevs_operational": 2, 00:30:36.782 "base_bdevs_list": [ 00:30:36.782 { 00:30:36.782 "name": "pt1", 00:30:36.782 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:36.782 "is_configured": true, 00:30:36.782 "data_offset": 256, 00:30:36.782 "data_size": 7936 00:30:36.782 }, 00:30:36.782 { 00:30:36.782 "name": null, 00:30:36.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:36.782 "is_configured": false, 00:30:36.782 "data_offset": 256, 00:30:36.782 "data_size": 7936 00:30:36.782 } 00:30:36.782 ] 00:30:36.782 }' 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:36.782 15:24:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:37.349 [2024-07-23 15:24:32.746022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:37.349 [2024-07-23 15:24:32.746242] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:37.349 [2024-07-23 15:24:32.746280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:30:37.349 [2024-07-23 15:24:32.746293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:37.349 [2024-07-23 15:24:32.746711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:37.349 [2024-07-23 15:24:32.746730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:37.349 [2024-07-23 15:24:32.746824] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:37.349 [2024-07-23 15:24:32.746848] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:37.349 [2024-07-23 15:24:32.746977] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:30:37.349 [2024-07-23 15:24:32.746988] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:37.349 [2024-07-23 15:24:32.747068] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:30:37.349 [2024-07-23 15:24:32.747348] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:30:37.349 [2024-07-23 15:24:32.747369] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:30:37.349 [2024-07-23 15:24:32.747466] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:37.349 pt2 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:37.349 15:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:37.607 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:37.607 "name": "raid_bdev1", 00:30:37.607 "uuid": "7c7d43fe-c812-4051-9fde-b1d493d59070", 00:30:37.607 "strip_size_kb": 0, 00:30:37.607 "state": "online", 00:30:37.607 "raid_level": "raid1", 
00:30:37.607 "superblock": true, 00:30:37.607 "num_base_bdevs": 2, 00:30:37.607 "num_base_bdevs_discovered": 2, 00:30:37.608 "num_base_bdevs_operational": 2, 00:30:37.608 "base_bdevs_list": [ 00:30:37.608 { 00:30:37.608 "name": "pt1", 00:30:37.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:37.608 "is_configured": true, 00:30:37.608 "data_offset": 256, 00:30:37.608 "data_size": 7936 00:30:37.608 }, 00:30:37.608 { 00:30:37.608 "name": "pt2", 00:30:37.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:37.608 "is_configured": true, 00:30:37.608 "data_offset": 256, 00:30:37.608 "data_size": 7936 00:30:37.608 } 00:30:37.608 ] 00:30:37.608 }' 00:30:37.608 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:37.608 15:24:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:38.236 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:30:38.236 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:30:38.236 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:38.236 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:38.236 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:38.236 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:30:38.236 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:38.236 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:38.236 [2024-07-23 15:24:33.602453] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:38.236 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:38.236 "name": "raid_bdev1", 00:30:38.236 "aliases": [ 00:30:38.236 "7c7d43fe-c812-4051-9fde-b1d493d59070" 00:30:38.236 ], 00:30:38.236 "product_name": "Raid Volume", 00:30:38.236 "block_size": 4096, 00:30:38.236 "num_blocks": 7936, 00:30:38.236 "uuid": "7c7d43fe-c812-4051-9fde-b1d493d59070", 00:30:38.236 "assigned_rate_limits": { 00:30:38.236 "rw_ios_per_sec": 0, 00:30:38.236 "rw_mbytes_per_sec": 0, 00:30:38.236 "r_mbytes_per_sec": 0, 00:30:38.236 "w_mbytes_per_sec": 0 00:30:38.236 }, 00:30:38.236 "claimed": false, 00:30:38.236 "zoned": false, 00:30:38.236 "supported_io_types": { 00:30:38.236 "read": true, 00:30:38.236 "write": true, 00:30:38.236 "unmap": false, 00:30:38.236 "flush": false, 00:30:38.236 "reset": true, 00:30:38.236 "nvme_admin": false, 00:30:38.236 "nvme_io": false, 00:30:38.236 "nvme_io_md": false, 00:30:38.236 "write_zeroes": true, 00:30:38.236 "zcopy": false, 00:30:38.236 "get_zone_info": false, 00:30:38.236 "zone_management": false, 00:30:38.236 "zone_append": false, 00:30:38.237 "compare": false, 00:30:38.237 "compare_and_write": false, 00:30:38.237 "abort": false, 00:30:38.237 "seek_hole": false, 00:30:38.237 "seek_data": false, 00:30:38.237 "copy": false, 00:30:38.237 "nvme_iov_md": false 00:30:38.237 }, 00:30:38.237 "memory_domains": [ 00:30:38.237 { 00:30:38.237 "dma_device_id": "system", 00:30:38.237 "dma_device_type": 1 00:30:38.237 }, 00:30:38.237 { 00:30:38.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.237 "dma_device_type": 2 00:30:38.237 }, 
00:30:38.237 { 00:30:38.237 "dma_device_id": "system", 00:30:38.237 "dma_device_type": 1 00:30:38.237 }, 00:30:38.237 { 00:30:38.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.237 "dma_device_type": 2 00:30:38.237 } 00:30:38.237 ], 00:30:38.237 "driver_specific": { 00:30:38.237 "raid": { 00:30:38.237 "uuid": "7c7d43fe-c812-4051-9fde-b1d493d59070", 00:30:38.237 "strip_size_kb": 0, 00:30:38.237 "state": "online", 00:30:38.237 "raid_level": "raid1", 00:30:38.237 "superblock": true, 00:30:38.237 "num_base_bdevs": 2, 00:30:38.237 "num_base_bdevs_discovered": 2, 00:30:38.237 "num_base_bdevs_operational": 2, 00:30:38.237 "base_bdevs_list": [ 00:30:38.237 { 00:30:38.237 "name": "pt1", 00:30:38.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:38.237 "is_configured": true, 00:30:38.237 "data_offset": 256, 00:30:38.237 "data_size": 7936 00:30:38.237 }, 00:30:38.237 { 00:30:38.237 "name": "pt2", 00:30:38.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:38.237 "is_configured": true, 00:30:38.237 "data_offset": 256, 00:30:38.237 "data_size": 7936 00:30:38.237 } 00:30:38.237 ] 00:30:38.237 } 00:30:38.237 } 00:30:38.237 }' 00:30:38.237 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:38.237 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:30:38.237 pt2' 00:30:38.237 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:38.237 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:30:38.237 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:38.495 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:38.495 "name": "pt1", 00:30:38.495 "aliases": [ 00:30:38.495 "00000000-0000-0000-0000-000000000001" 00:30:38.495 ], 00:30:38.495 "product_name": "passthru", 00:30:38.495 "block_size": 4096, 00:30:38.495 "num_blocks": 8192, 00:30:38.495 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:38.495 "assigned_rate_limits": { 00:30:38.495 "rw_ios_per_sec": 0, 00:30:38.495 "rw_mbytes_per_sec": 0, 00:30:38.495 "r_mbytes_per_sec": 0, 00:30:38.495 "w_mbytes_per_sec": 0 00:30:38.495 }, 00:30:38.495 "claimed": true, 00:30:38.495 "claim_type": "exclusive_write", 00:30:38.495 "zoned": false, 00:30:38.495 "supported_io_types": { 00:30:38.495 "read": true, 00:30:38.495 "write": true, 00:30:38.495 "unmap": true, 00:30:38.495 "flush": true, 00:30:38.495 "reset": true, 00:30:38.495 "nvme_admin": false, 00:30:38.495 "nvme_io": false, 00:30:38.495 "nvme_io_md": false, 00:30:38.495 "write_zeroes": true, 00:30:38.495 "zcopy": true, 00:30:38.495 "get_zone_info": false, 00:30:38.495 "zone_management": false, 00:30:38.495 "zone_append": false, 00:30:38.495 "compare": false, 00:30:38.495 "compare_and_write": false, 00:30:38.495 "abort": true, 00:30:38.495 "seek_hole": false, 00:30:38.495 "seek_data": false, 00:30:38.495 "copy": true, 00:30:38.495 "nvme_iov_md": false 00:30:38.495 }, 00:30:38.495 "memory_domains": [ 00:30:38.495 { 00:30:38.495 "dma_device_id": "system", 00:30:38.495 "dma_device_type": 1 00:30:38.495 }, 00:30:38.495 { 00:30:38.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.495 "dma_device_type": 2 00:30:38.496 } 00:30:38.496 ], 00:30:38.496 "driver_specific": { 00:30:38.496 
"passthru": { 00:30:38.496 "name": "pt1", 00:30:38.496 "base_bdev_name": "malloc1" 00:30:38.496 } 00:30:38.496 } 00:30:38.496 }' 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:30:38.496 15:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:38.753 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:38.753 "name": "pt2", 00:30:38.753 "aliases": [ 00:30:38.753 "00000000-0000-0000-0000-000000000002" 00:30:38.753 ], 00:30:38.753 "product_name": "passthru", 00:30:38.753 "block_size": 4096, 00:30:38.753 "num_blocks": 8192, 00:30:38.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:38.753 "assigned_rate_limits": { 00:30:38.753 "rw_ios_per_sec": 0, 00:30:38.753 "rw_mbytes_per_sec": 0, 00:30:38.753 "r_mbytes_per_sec": 0, 00:30:38.753 "w_mbytes_per_sec": 0 00:30:38.753 }, 00:30:38.753 "claimed": true, 00:30:38.753 "claim_type": "exclusive_write", 00:30:38.753 "zoned": false, 00:30:38.753 "supported_io_types": { 00:30:38.753 "read": true, 00:30:38.753 "write": true, 00:30:38.753 "unmap": true, 00:30:38.753 "flush": true, 00:30:38.753 "reset": true, 00:30:38.753 "nvme_admin": false, 00:30:38.753 "nvme_io": false, 00:30:38.753 "nvme_io_md": false, 00:30:38.753 "write_zeroes": true, 00:30:38.753 "zcopy": true, 00:30:38.753 "get_zone_info": false, 00:30:38.753 "zone_management": false, 00:30:38.753 "zone_append": false, 00:30:38.753 "compare": false, 00:30:38.753 "compare_and_write": false, 00:30:38.753 "abort": true, 00:30:38.753 "seek_hole": false, 00:30:38.753 "seek_data": false, 00:30:38.753 "copy": true, 00:30:38.753 "nvme_iov_md": false 00:30:38.753 }, 00:30:38.753 "memory_domains": [ 00:30:38.753 { 00:30:38.753 "dma_device_id": "system", 00:30:38.753 "dma_device_type": 1 00:30:38.753 }, 00:30:38.753 { 00:30:38.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.753 "dma_device_type": 2 00:30:38.753 } 00:30:38.753 ], 00:30:38.753 "driver_specific": { 00:30:38.753 "passthru": { 00:30:38.753 "name": "pt2", 00:30:38.753 "base_bdev_name": "malloc2" 00:30:38.753 } 
00:30:38.753 } 00:30:38.753 }' 00:30:38.753 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:39.011 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:39.011 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:39.011 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:39.011 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:39.011 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:39.011 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:39.011 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:39.011 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:39.011 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:39.011 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:39.011 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:39.011 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:39.011 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:30:39.270 [2024-07-23 15:24:34.446625] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:39.270 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 7c7d43fe-c812-4051-9fde-b1d493d59070 '!=' 7c7d43fe-c812-4051-9fde-b1d493d59070 ']' 00:30:39.270 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:30:39.270 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:39.270 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:30:39.270 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:30:39.529 [2024-07-23 15:24:34.714483] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:30:39.529 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:39.529 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:39.529 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:39.529 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:39.529 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:39.529 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:39.529 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:39.529 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:39.529 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:39.529 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 
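
The verification that follows (verify_raid_bdev_state raid_bdev1 online raid1 0 1) checks that deleting pt1 degrades the mirror without taking it offline: the raid1 bdev must stay online with a single discovered base bdev. A minimal sketch of that check, reusing the RPC call and jq filter the harness runs next (variable names here are illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Remove one mirror leg, then confirm raid_bdev1 is degraded but still online.
$rpc -s $sock bdev_passthru_delete pt1
info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r .state <<<"$info") == online ]]
[[ $(jq -r .num_base_bdevs_discovered <<<"$info") == 1 ]]
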
00:30:39.529 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:39.529 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.788 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:39.788 "name": "raid_bdev1", 00:30:39.788 "uuid": "7c7d43fe-c812-4051-9fde-b1d493d59070", 00:30:39.788 "strip_size_kb": 0, 00:30:39.788 "state": "online", 00:30:39.788 "raid_level": "raid1", 00:30:39.788 "superblock": true, 00:30:39.788 "num_base_bdevs": 2, 00:30:39.788 "num_base_bdevs_discovered": 1, 00:30:39.788 "num_base_bdevs_operational": 1, 00:30:39.788 "base_bdevs_list": [ 00:30:39.788 { 00:30:39.788 "name": null, 00:30:39.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.788 "is_configured": false, 00:30:39.788 "data_offset": 256, 00:30:39.788 "data_size": 7936 00:30:39.788 }, 00:30:39.788 { 00:30:39.788 "name": "pt2", 00:30:39.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:39.788 "is_configured": true, 00:30:39.788 "data_offset": 256, 00:30:39.788 "data_size": 7936 00:30:39.788 } 00:30:39.788 ] 00:30:39.788 }' 00:30:39.788 15:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:39.788 15:24:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:40.046 15:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:40.304 [2024-07-23 15:24:35.490578] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:40.304 [2024-07-23 15:24:35.490845] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:40.304 [2024-07-23 15:24:35.490999] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:40.304 [2024-07-23 15:24:35.491088] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:40.304 [2024-07-23 15:24:35.491332] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:30:40.304 15:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:30:40.304 15:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.562 15:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:30:40.562 15:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:30:40.562 15:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:30:40.562 15:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:30:40.562 15:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:30:40.822 15:24:36 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:40.822 [2024-07-23 15:24:36.178696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:40.822 [2024-07-23 15:24:36.179013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:40.822 [2024-07-23 15:24:36.179077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:30:40.822 [2024-07-23 15:24:36.179163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:40.822 [2024-07-23 15:24:36.181705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:40.822 [2024-07-23 15:24:36.181881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:40.822 [2024-07-23 15:24:36.182080] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:40.822 [2024-07-23 15:24:36.182231] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:40.822 [2024-07-23 15:24:36.182379] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:30:40.822 [2024-07-23 15:24:36.182474] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:40.822 [2024-07-23 15:24:36.182589] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:30:40.822 [2024-07-23 15:24:36.182986] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:30:40.822 [2024-07-23 15:24:36.183110] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008a80 00:30:40.822 [2024-07-23 15:24:36.183370] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:40.822 pt2 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.822 15:24:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.081 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:41.081 "name": "raid_bdev1", 00:30:41.081 "uuid": "7c7d43fe-c812-4051-9fde-b1d493d59070", 00:30:41.081 "strip_size_kb": 0, 00:30:41.081 "state": "online", 00:30:41.081 "raid_level": "raid1", 00:30:41.081 "superblock": true, 00:30:41.081 "num_base_bdevs": 2, 00:30:41.081 "num_base_bdevs_discovered": 1, 00:30:41.081 "num_base_bdevs_operational": 1, 00:30:41.081 "base_bdevs_list": [ 00:30:41.081 { 00:30:41.081 "name": null, 00:30:41.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.081 "is_configured": false, 00:30:41.081 "data_offset": 256, 00:30:41.081 "data_size": 7936 00:30:41.081 }, 00:30:41.081 { 00:30:41.081 "name": "pt2", 00:30:41.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:41.081 "is_configured": true, 00:30:41.081 "data_offset": 256, 00:30:41.081 "data_size": 7936 00:30:41.081 } 00:30:41.081 ] 00:30:41.081 }' 00:30:41.081 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:41.081 15:24:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:41.340 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:41.612 [2024-07-23 15:24:36.891544] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:41.612 [2024-07-23 15:24:36.891764] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:41.612 [2024-07-23 15:24:36.891877] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:41.612 [2024-07-23 15:24:36.891934] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:41.612 [2024-07-23 15:24:36.891950] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state offline 00:30:41.612 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.612 15:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:30:41.883 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:30:41.883 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:30:41.883 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:30:41.883 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:42.142 [2024-07-23 15:24:37.319614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:42.142 [2024-07-23 15:24:37.320364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:42.142 [2024-07-23 15:24:37.320404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:30:42.142 [2024-07-23 15:24:37.320421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:42.142 [2024-07-23 15:24:37.322915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:42.142 [2024-07-23 15:24:37.322963] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:42.142 [2024-07-23 15:24:37.323038] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:42.142 [2024-07-23 15:24:37.323084] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:42.142 [2024-07-23 15:24:37.323210] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:30:42.142 [2024-07-23 15:24:37.323228] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:42.142 [2024-07-23 15:24:37.323244] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state configuring 00:30:42.142 [2024-07-23 15:24:37.323300] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:42.142 [2024-07-23 15:24:37.323382] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:30:42.142 [2024-07-23 15:24:37.323395] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:42.142 [2024-07-23 15:24:37.323471] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:30:42.142 pt1 00:30:42.142 [2024-07-23 15:24:37.323783] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:30:42.142 [2024-07-23 15:24:37.323813] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:30:42.142 [2024-07-23 15:24:37.323952] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:42.142 "name": "raid_bdev1", 00:30:42.142 "uuid": "7c7d43fe-c812-4051-9fde-b1d493d59070", 00:30:42.142 "strip_size_kb": 0, 00:30:42.142 "state": "online", 00:30:42.142 "raid_level": "raid1", 00:30:42.142 
"superblock": true, 00:30:42.142 "num_base_bdevs": 2, 00:30:42.142 "num_base_bdevs_discovered": 1, 00:30:42.142 "num_base_bdevs_operational": 1, 00:30:42.142 "base_bdevs_list": [ 00:30:42.142 { 00:30:42.142 "name": null, 00:30:42.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.142 "is_configured": false, 00:30:42.142 "data_offset": 256, 00:30:42.142 "data_size": 7936 00:30:42.142 }, 00:30:42.142 { 00:30:42.142 "name": "pt2", 00:30:42.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:42.142 "is_configured": true, 00:30:42.142 "data_offset": 256, 00:30:42.142 "data_size": 7936 00:30:42.142 } 00:30:42.142 ] 00:30:42.142 }' 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:42.142 15:24:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:42.401 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:30:42.401 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:30:42.660 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:30:42.660 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:30:42.660 15:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:42.919 [2024-07-23 15:24:38.220376] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:42.919 15:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 7c7d43fe-c812-4051-9fde-b1d493d59070 '!=' 7c7d43fe-c812-4051-9fde-b1d493d59070 ']' 00:30:42.919 15:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 120571 00:30:42.919 15:24:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 120571 ']' 00:30:42.919 15:24:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 120571 00:30:42.919 15:24:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:30:42.919 15:24:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:42.919 15:24:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120571 00:30:42.919 killing process with pid 120571 00:30:42.919 15:24:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:42.919 15:24:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:42.919 15:24:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120571' 00:30:42.919 15:24:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 120571 00:30:42.919 [2024-07-23 15:24:38.283264] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:42.919 [2024-07-23 15:24:38.283349] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:42.919 15:24:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 120571 00:30:42.919 [2024-07-23 15:24:38.283411] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:42.919 
[2024-07-23 15:24:38.283423] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:30:42.919 [2024-07-23 15:24:38.307404] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:43.179 15:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:30:43.179 00:30:43.179 real 0m11.749s 00:30:43.179 user 0m20.100s 00:30:43.179 sys 0m2.567s 00:30:43.179 15:24:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:43.179 ************************************ 00:30:43.179 END TEST raid_superblock_test_4k 00:30:43.179 15:24:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:43.179 ************************************ 00:30:43.179 15:24:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:43.179 15:24:38 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:30:43.179 15:24:38 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:30:43.179 15:24:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:30:43.179 15:24:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:43.179 15:24:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:43.439 ************************************ 00:30:43.439 START TEST raid_rebuild_test_sb_4k 00:30:43.439 ************************************ 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:43.439 
15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=121022 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 121022 /var/tmp/spdk-raid.sock 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 121022 ']' 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:43.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:43.439 15:24:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:43.439 [2024-07-23 15:24:38.696079] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:30:43.439 [2024-07-23 15:24:38.696960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121022 ] 00:30:43.439 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:43.439 Zero copy mechanism will not be used. 
00:30:43.439 [2024-07-23 15:24:38.850546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.698 [2024-07-23 15:24:38.901675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.698 [2024-07-23 15:24:38.947321] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:44.262 15:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:44.262 15:24:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:30:44.262 15:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:44.262 15:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:30:44.520 BaseBdev1_malloc 00:30:44.520 15:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:44.778 [2024-07-23 15:24:39.959158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:44.778 [2024-07-23 15:24:39.959244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:44.778 [2024-07-23 15:24:39.959294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:30:44.778 [2024-07-23 15:24:39.959313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:44.778 [2024-07-23 15:24:39.961886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:44.778 [2024-07-23 15:24:39.961930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:44.778 BaseBdev1 00:30:44.778 15:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:44.778 15:24:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:30:45.037 BaseBdev2_malloc 00:30:45.037 15:24:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:45.037 [2024-07-23 15:24:40.400859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:45.037 [2024-07-23 15:24:40.400945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:45.037 [2024-07-23 15:24:40.400976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:30:45.037 [2024-07-23 15:24:40.400988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:45.037 [2024-07-23 15:24:40.403486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:45.037 [2024-07-23 15:24:40.403527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:45.037 BaseBdev2 00:30:45.037 15:24:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:30:45.294 spare_malloc 00:30:45.294 15:24:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:45.552 spare_delay 00:30:45.552 15:24:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:45.552 [2024-07-23 15:24:40.982426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:45.552 [2024-07-23 15:24:40.982513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:45.552 [2024-07-23 15:24:40.982550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:30:45.552 [2024-07-23 15:24:40.982563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:45.810 [2024-07-23 15:24:40.985860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:45.810 [2024-07-23 15:24:40.985900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:45.810 spare 00:30:45.810 15:24:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:30:45.810 [2024-07-23 15:24:41.162571] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:45.810 [2024-07-23 15:24:41.164839] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:45.810 [2024-07-23 15:24:41.165023] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:30:45.810 [2024-07-23 15:24:41.165043] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:45.810 [2024-07-23 15:24:41.165187] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:30:45.810 [2024-07-23 15:24:41.165523] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:30:45.810 [2024-07-23 15:24:41.165539] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:30:45.810 [2024-07-23 15:24:41.165669] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:45.810 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:45.810 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:45.810 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:45.810 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:45.810 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:45.810 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:45.810 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:45.810 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:45.810 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:45.810 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:45.810 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:30:45.810 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.069 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:46.069 "name": "raid_bdev1", 00:30:46.069 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:30:46.069 "strip_size_kb": 0, 00:30:46.069 "state": "online", 00:30:46.069 "raid_level": "raid1", 00:30:46.069 "superblock": true, 00:30:46.069 "num_base_bdevs": 2, 00:30:46.069 "num_base_bdevs_discovered": 2, 00:30:46.069 "num_base_bdevs_operational": 2, 00:30:46.069 "base_bdevs_list": [ 00:30:46.069 { 00:30:46.069 "name": "BaseBdev1", 00:30:46.069 "uuid": "f1b06817-7a5f-5531-b771-5d49c7d330c0", 00:30:46.069 "is_configured": true, 00:30:46.069 "data_offset": 256, 00:30:46.069 "data_size": 7936 00:30:46.069 }, 00:30:46.069 { 00:30:46.069 "name": "BaseBdev2", 00:30:46.069 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:30:46.069 "is_configured": true, 00:30:46.069 "data_offset": 256, 00:30:46.069 "data_size": 7936 00:30:46.069 } 00:30:46.069 ] 00:30:46.069 }' 00:30:46.069 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:46.069 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:46.327 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:46.327 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:46.585 [2024-07-23 15:24:41.918928] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:46.585 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:30:46.585 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.585 15:24:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:46.844 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:30:46.844 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:30:46.844 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:30:46.844 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:30:46.844 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:30:46.844 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:46.844 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:30:46.844 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:46.844 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:46.844 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:46.844 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:30:46.844 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:46.844 
15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:46.844 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:30:47.102 [2024-07-23 15:24:42.278825] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:30:47.102 /dev/nbd0 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:47.102 1+0 records in 00:30:47.102 1+0 records out 00:30:47.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289049 s, 14.2 MB/s 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:47.102 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:47.103 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:30:47.103 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:47.103 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:47.103 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:30:47.103 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:30:47.103 15:24:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:30:47.670 7936+0 records in 00:30:47.670 7936+0 records out 00:30:47.670 32505856 bytes (33 MB, 31 MiB) copied, 0.699679 s, 46.5 MB/s 00:30:47.670 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:47.670 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:47.670 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:47.670 15:24:43 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:47.670 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:30:47.670 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:47.670 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:47.929 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:47.929 [2024-07-23 15:24:43.287694] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:47.929 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:47.929 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:47.929 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:47.929 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:47.929 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:47.929 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:30:47.929 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:30:47.929 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:48.187 [2024-07-23 15:24:43.531929] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:48.187 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:48.187 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:48.187 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:48.187 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:48.187 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:48.187 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:48.187 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:48.187 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:48.187 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:48.187 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:48.187 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.187 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.446 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:48.446 "name": "raid_bdev1", 00:30:48.446 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:30:48.446 "strip_size_kb": 0, 00:30:48.446 "state": "online", 00:30:48.446 "raid_level": "raid1", 00:30:48.446 "superblock": true, 00:30:48.446 "num_base_bdevs": 2, 00:30:48.446 "num_base_bdevs_discovered": 
1, 00:30:48.446 "num_base_bdevs_operational": 1, 00:30:48.446 "base_bdevs_list": [ 00:30:48.446 { 00:30:48.446 "name": null, 00:30:48.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.446 "is_configured": false, 00:30:48.446 "data_offset": 256, 00:30:48.446 "data_size": 7936 00:30:48.446 }, 00:30:48.446 { 00:30:48.446 "name": "BaseBdev2", 00:30:48.446 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:30:48.446 "is_configured": true, 00:30:48.446 "data_offset": 256, 00:30:48.446 "data_size": 7936 00:30:48.446 } 00:30:48.446 ] 00:30:48.446 }' 00:30:48.446 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:48.446 15:24:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:48.705 15:24:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:48.964 [2024-07-23 15:24:44.320089] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:48.964 [2024-07-23 15:24:44.324581] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00019c550 00:30:48.964 [2024-07-23 15:24:44.326784] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:48.964 15:24:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:50.383 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:50.383 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:50.383 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:50.383 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:50.383 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:50.383 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:50.383 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:50.383 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:50.383 "name": "raid_bdev1", 00:30:50.383 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:30:50.383 "strip_size_kb": 0, 00:30:50.383 "state": "online", 00:30:50.383 "raid_level": "raid1", 00:30:50.384 "superblock": true, 00:30:50.384 "num_base_bdevs": 2, 00:30:50.384 "num_base_bdevs_discovered": 2, 00:30:50.384 "num_base_bdevs_operational": 2, 00:30:50.384 "process": { 00:30:50.384 "type": "rebuild", 00:30:50.384 "target": "spare", 00:30:50.384 "progress": { 00:30:50.384 "blocks": 3072, 00:30:50.384 "percent": 38 00:30:50.384 } 00:30:50.384 }, 00:30:50.384 "base_bdevs_list": [ 00:30:50.384 { 00:30:50.384 "name": "spare", 00:30:50.384 "uuid": "32fe231c-8b24-5b4b-a12b-942dc7d7b7e0", 00:30:50.384 "is_configured": true, 00:30:50.384 "data_offset": 256, 00:30:50.384 "data_size": 7936 00:30:50.384 }, 00:30:50.384 { 00:30:50.384 "name": "BaseBdev2", 00:30:50.384 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:30:50.384 "is_configured": true, 00:30:50.384 "data_offset": 256, 00:30:50.384 "data_size": 7936 00:30:50.384 } 00:30:50.384 ] 00:30:50.384 }' 00:30:50.384 15:24:45 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:50.384 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:50.384 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:50.384 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:50.384 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:50.643 [2024-07-23 15:24:45.834114] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:50.643 [2024-07-23 15:24:45.836519] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:50.643 [2024-07-23 15:24:45.836580] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:50.643 [2024-07-23 15:24:45.836603] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:50.643 [2024-07-23 15:24:45.836622] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:50.643 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:50.643 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:50.643 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:50.643 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:50.643 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:50.643 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:50.643 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:50.643 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:50.643 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:50.643 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:50.643 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:50.643 15:24:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:50.643 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:50.643 "name": "raid_bdev1", 00:30:50.643 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:30:50.643 "strip_size_kb": 0, 00:30:50.643 "state": "online", 00:30:50.643 "raid_level": "raid1", 00:30:50.643 "superblock": true, 00:30:50.643 "num_base_bdevs": 2, 00:30:50.643 "num_base_bdevs_discovered": 1, 00:30:50.643 "num_base_bdevs_operational": 1, 00:30:50.643 "base_bdevs_list": [ 00:30:50.643 { 00:30:50.643 "name": null, 00:30:50.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:50.643 "is_configured": false, 00:30:50.643 "data_offset": 256, 00:30:50.643 "data_size": 7936 00:30:50.643 }, 00:30:50.643 { 00:30:50.643 "name": "BaseBdev2", 00:30:50.643 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:30:50.643 
"is_configured": true, 00:30:50.643 "data_offset": 256, 00:30:50.643 "data_size": 7936 00:30:50.643 } 00:30:50.643 ] 00:30:50.643 }' 00:30:50.643 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:50.643 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:50.902 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:50.902 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:50.902 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:50.902 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:50.902 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:51.161 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.161 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:51.161 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:51.161 "name": "raid_bdev1", 00:30:51.161 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:30:51.161 "strip_size_kb": 0, 00:30:51.161 "state": "online", 00:30:51.161 "raid_level": "raid1", 00:30:51.161 "superblock": true, 00:30:51.161 "num_base_bdevs": 2, 00:30:51.161 "num_base_bdevs_discovered": 1, 00:30:51.161 "num_base_bdevs_operational": 1, 00:30:51.161 "base_bdevs_list": [ 00:30:51.161 { 00:30:51.161 "name": null, 00:30:51.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.161 "is_configured": false, 00:30:51.161 "data_offset": 256, 00:30:51.161 "data_size": 7936 00:30:51.161 }, 00:30:51.161 { 00:30:51.161 "name": "BaseBdev2", 00:30:51.161 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:30:51.161 "is_configured": true, 00:30:51.161 "data_offset": 256, 00:30:51.161 "data_size": 7936 00:30:51.161 } 00:30:51.161 ] 00:30:51.161 }' 00:30:51.161 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:51.420 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:51.420 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:51.420 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:51.420 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:51.679 [2024-07-23 15:24:46.857870] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:51.679 [2024-07-23 15:24:46.862360] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00019c620 00:30:51.679 [2024-07-23 15:24:46.864603] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:51.679 15:24:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:52.615 15:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:52.615 15:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:30:52.615 15:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:52.615 15:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:52.615 15:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:52.615 15:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.615 15:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:52.875 "name": "raid_bdev1", 00:30:52.875 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:30:52.875 "strip_size_kb": 0, 00:30:52.875 "state": "online", 00:30:52.875 "raid_level": "raid1", 00:30:52.875 "superblock": true, 00:30:52.875 "num_base_bdevs": 2, 00:30:52.875 "num_base_bdevs_discovered": 2, 00:30:52.875 "num_base_bdevs_operational": 2, 00:30:52.875 "process": { 00:30:52.875 "type": "rebuild", 00:30:52.875 "target": "spare", 00:30:52.875 "progress": { 00:30:52.875 "blocks": 3072, 00:30:52.875 "percent": 38 00:30:52.875 } 00:30:52.875 }, 00:30:52.875 "base_bdevs_list": [ 00:30:52.875 { 00:30:52.875 "name": "spare", 00:30:52.875 "uuid": "32fe231c-8b24-5b4b-a12b-942dc7d7b7e0", 00:30:52.875 "is_configured": true, 00:30:52.875 "data_offset": 256, 00:30:52.875 "data_size": 7936 00:30:52.875 }, 00:30:52.875 { 00:30:52.875 "name": "BaseBdev2", 00:30:52.875 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:30:52.875 "is_configured": true, 00:30:52.875 "data_offset": 256, 00:30:52.875 "data_size": 7936 00:30:52.875 } 00:30:52.875 ] 00:30:52.875 }' 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:30:52.875 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=1008 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:52.875 15:24:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.875 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:53.134 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:53.134 "name": "raid_bdev1", 00:30:53.134 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:30:53.134 "strip_size_kb": 0, 00:30:53.134 "state": "online", 00:30:53.134 "raid_level": "raid1", 00:30:53.134 "superblock": true, 00:30:53.134 "num_base_bdevs": 2, 00:30:53.134 "num_base_bdevs_discovered": 2, 00:30:53.134 "num_base_bdevs_operational": 2, 00:30:53.134 "process": { 00:30:53.134 "type": "rebuild", 00:30:53.134 "target": "spare", 00:30:53.134 "progress": { 00:30:53.134 "blocks": 3584, 00:30:53.134 "percent": 45 00:30:53.134 } 00:30:53.134 }, 00:30:53.134 "base_bdevs_list": [ 00:30:53.134 { 00:30:53.134 "name": "spare", 00:30:53.134 "uuid": "32fe231c-8b24-5b4b-a12b-942dc7d7b7e0", 00:30:53.134 "is_configured": true, 00:30:53.134 "data_offset": 256, 00:30:53.134 "data_size": 7936 00:30:53.134 }, 00:30:53.134 { 00:30:53.134 "name": "BaseBdev2", 00:30:53.134 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:30:53.134 "is_configured": true, 00:30:53.134 "data_offset": 256, 00:30:53.134 "data_size": 7936 00:30:53.134 } 00:30:53.134 ] 00:30:53.134 }' 00:30:53.134 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:53.134 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:53.134 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:53.134 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:53.134 15:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:54.071 15:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:54.071 15:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:54.071 15:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:54.071 15:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:54.071 15:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:54.071 15:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:54.071 15:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.071 15:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:54.330 15:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:54.330 "name": "raid_bdev1", 00:30:54.330 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:30:54.330 "strip_size_kb": 0, 00:30:54.330 "state": "online", 00:30:54.330 "raid_level": "raid1", 00:30:54.330 
"superblock": true, 00:30:54.330 "num_base_bdevs": 2, 00:30:54.330 "num_base_bdevs_discovered": 2, 00:30:54.330 "num_base_bdevs_operational": 2, 00:30:54.330 "process": { 00:30:54.330 "type": "rebuild", 00:30:54.330 "target": "spare", 00:30:54.330 "progress": { 00:30:54.330 "blocks": 6912, 00:30:54.330 "percent": 87 00:30:54.330 } 00:30:54.330 }, 00:30:54.330 "base_bdevs_list": [ 00:30:54.330 { 00:30:54.330 "name": "spare", 00:30:54.330 "uuid": "32fe231c-8b24-5b4b-a12b-942dc7d7b7e0", 00:30:54.330 "is_configured": true, 00:30:54.330 "data_offset": 256, 00:30:54.330 "data_size": 7936 00:30:54.330 }, 00:30:54.330 { 00:30:54.330 "name": "BaseBdev2", 00:30:54.330 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:30:54.330 "is_configured": true, 00:30:54.330 "data_offset": 256, 00:30:54.330 "data_size": 7936 00:30:54.330 } 00:30:54.330 ] 00:30:54.330 }' 00:30:54.330 15:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:54.330 15:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:54.330 15:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:54.330 15:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:54.330 15:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:54.590 [2024-07-23 15:24:49.982914] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:54.590 [2024-07-23 15:24:49.983033] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:54.590 [2024-07-23 15:24:49.983156] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:55.527 "name": "raid_bdev1", 00:30:55.527 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:30:55.527 "strip_size_kb": 0, 00:30:55.527 "state": "online", 00:30:55.527 "raid_level": "raid1", 00:30:55.527 "superblock": true, 00:30:55.527 "num_base_bdevs": 2, 00:30:55.527 "num_base_bdevs_discovered": 2, 00:30:55.527 "num_base_bdevs_operational": 2, 00:30:55.527 "base_bdevs_list": [ 00:30:55.527 { 00:30:55.527 "name": "spare", 00:30:55.527 "uuid": "32fe231c-8b24-5b4b-a12b-942dc7d7b7e0", 00:30:55.527 "is_configured": true, 00:30:55.527 "data_offset": 256, 00:30:55.527 "data_size": 7936 00:30:55.527 }, 00:30:55.527 { 00:30:55.527 
"name": "BaseBdev2", 00:30:55.527 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:30:55.527 "is_configured": true, 00:30:55.527 "data_offset": 256, 00:30:55.527 "data_size": 7936 00:30:55.527 } 00:30:55.527 ] 00:30:55.527 }' 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.527 15:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:55.787 "name": "raid_bdev1", 00:30:55.787 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:30:55.787 "strip_size_kb": 0, 00:30:55.787 "state": "online", 00:30:55.787 "raid_level": "raid1", 00:30:55.787 "superblock": true, 00:30:55.787 "num_base_bdevs": 2, 00:30:55.787 "num_base_bdevs_discovered": 2, 00:30:55.787 "num_base_bdevs_operational": 2, 00:30:55.787 "base_bdevs_list": [ 00:30:55.787 { 00:30:55.787 "name": "spare", 00:30:55.787 "uuid": "32fe231c-8b24-5b4b-a12b-942dc7d7b7e0", 00:30:55.787 "is_configured": true, 00:30:55.787 "data_offset": 256, 00:30:55.787 "data_size": 7936 00:30:55.787 }, 00:30:55.787 { 00:30:55.787 "name": "BaseBdev2", 00:30:55.787 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:30:55.787 "is_configured": true, 00:30:55.787 "data_offset": 256, 00:30:55.787 "data_size": 7936 00:30:55.787 } 00:30:55.787 ] 00:30:55.787 }' 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.787 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.045 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:56.045 "name": "raid_bdev1", 00:30:56.045 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:30:56.045 "strip_size_kb": 0, 00:30:56.045 "state": "online", 00:30:56.045 "raid_level": "raid1", 00:30:56.045 "superblock": true, 00:30:56.045 "num_base_bdevs": 2, 00:30:56.045 "num_base_bdevs_discovered": 2, 00:30:56.045 "num_base_bdevs_operational": 2, 00:30:56.045 "base_bdevs_list": [ 00:30:56.045 { 00:30:56.045 "name": "spare", 00:30:56.045 "uuid": "32fe231c-8b24-5b4b-a12b-942dc7d7b7e0", 00:30:56.045 "is_configured": true, 00:30:56.045 "data_offset": 256, 00:30:56.045 "data_size": 7936 00:30:56.045 }, 00:30:56.045 { 00:30:56.045 "name": "BaseBdev2", 00:30:56.045 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:30:56.045 "is_configured": true, 00:30:56.045 "data_offset": 256, 00:30:56.045 "data_size": 7936 00:30:56.045 } 00:30:56.045 ] 00:30:56.045 }' 00:30:56.045 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:56.045 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:56.303 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:56.562 [2024-07-23 15:24:51.936425] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:56.562 [2024-07-23 15:24:51.936469] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:56.562 [2024-07-23 15:24:51.936567] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:56.562 [2024-07-23 15:24:51.936637] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:56.562 [2024-07-23 15:24:51.936666] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:30:56.562 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:30:56.562 15:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.820 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:56.820 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:56.820 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:30:56.820 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:56.820 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:56.820 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:56.820 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:56.820 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:56.820 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:56.820 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:30:56.820 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:56.820 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:56.820 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:57.079 /dev/nbd0 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:57.079 1+0 records in 00:30:57.079 1+0 records out 00:30:57.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255495 s, 16.0 MB/s 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:57.079 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:57.079 15:24:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:57.338 /dev/nbd1 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:57.338 1+0 records in 00:30:57.338 1+0 records out 00:30:57.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273051 s, 15.0 MB/s 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:57.338 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:57.597 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:57.597 15:24:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:57.597 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:57.597 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:57.597 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:57.597 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:57.597 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:30:57.597 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:30:57.597 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:57.597 15:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:57.857 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:57.857 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:57.857 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:57.857 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:57.857 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:57.857 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:57.857 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:30:57.857 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:30:57.857 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:30:57.857 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:58.119 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:58.119 [2024-07-23 15:24:53.487463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:58.119 [2024-07-23 15:24:53.487566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:58.119 [2024-07-23 15:24:53.487596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:30:58.119 [2024-07-23 15:24:53.487612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:58.119 [2024-07-23 15:24:53.490128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:58.119 [2024-07-23 15:24:53.490179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:58.119 [2024-07-23 15:24:53.490262] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:58.119 [2024-07-23 15:24:53.490313] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:58.119 [2024-07-23 15:24:53.490459] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:58.119 spare 00:30:58.119 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 2 00:30:58.119 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:58.119 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:58.119 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:58.119 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:58.119 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:58.119 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:58.119 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:58.119 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:58.119 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:58.119 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.119 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:58.379 [2024-07-23 15:24:53.590560] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:30:58.379 [2024-07-23 15:24:53.590614] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:58.379 [2024-07-23 15:24:53.590770] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0001bada0 00:30:58.379 [2024-07-23 15:24:53.591143] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:30:58.379 [2024-07-23 15:24:53.591171] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:30:58.379 [2024-07-23 15:24:53.591300] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:58.379 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:58.380 "name": "raid_bdev1", 00:30:58.380 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:30:58.380 "strip_size_kb": 0, 00:30:58.380 "state": "online", 00:30:58.380 "raid_level": "raid1", 00:30:58.380 "superblock": true, 00:30:58.380 "num_base_bdevs": 2, 00:30:58.380 "num_base_bdevs_discovered": 2, 00:30:58.380 "num_base_bdevs_operational": 2, 00:30:58.380 "base_bdevs_list": [ 00:30:58.380 { 00:30:58.380 "name": "spare", 00:30:58.380 "uuid": "32fe231c-8b24-5b4b-a12b-942dc7d7b7e0", 00:30:58.380 "is_configured": true, 00:30:58.380 "data_offset": 256, 00:30:58.380 "data_size": 7936 00:30:58.380 }, 00:30:58.380 { 00:30:58.380 "name": "BaseBdev2", 00:30:58.380 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:30:58.380 "is_configured": true, 00:30:58.380 "data_offset": 256, 00:30:58.380 "data_size": 7936 00:30:58.380 } 00:30:58.380 ] 00:30:58.380 }' 00:30:58.380 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:58.380 15:24:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:58.978 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:58.978 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:58.978 15:24:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:58.978 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:58.978 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:58.978 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.978 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:58.978 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:58.978 "name": "raid_bdev1", 00:30:58.978 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:30:58.978 "strip_size_kb": 0, 00:30:58.978 "state": "online", 00:30:58.978 "raid_level": "raid1", 00:30:58.978 "superblock": true, 00:30:58.978 "num_base_bdevs": 2, 00:30:58.978 "num_base_bdevs_discovered": 2, 00:30:58.978 "num_base_bdevs_operational": 2, 00:30:58.978 "base_bdevs_list": [ 00:30:58.978 { 00:30:58.978 "name": "spare", 00:30:58.978 "uuid": "32fe231c-8b24-5b4b-a12b-942dc7d7b7e0", 00:30:58.978 "is_configured": true, 00:30:58.978 "data_offset": 256, 00:30:58.978 "data_size": 7936 00:30:58.978 }, 00:30:58.978 { 00:30:58.978 "name": "BaseBdev2", 00:30:58.978 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:30:58.978 "is_configured": true, 00:30:58.978 "data_offset": 256, 00:30:58.978 "data_size": 7936 00:30:58.978 } 00:30:58.978 ] 00:30:58.978 }' 00:30:58.978 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:58.978 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:58.978 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:58.978 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:58.978 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.978 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:59.237 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:30:59.237 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:59.496 [2024-07-23 15:24:54.835829] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:59.496 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:59.496 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:59.496 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:59.496 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:59.496 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:59.496 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:59.496 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:30:59.496 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:59.496 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:59.496 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:59.496 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.496 15:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.755 15:24:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:59.755 "name": "raid_bdev1", 00:30:59.755 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:30:59.755 "strip_size_kb": 0, 00:30:59.755 "state": "online", 00:30:59.755 "raid_level": "raid1", 00:30:59.755 "superblock": true, 00:30:59.755 "num_base_bdevs": 2, 00:30:59.755 "num_base_bdevs_discovered": 1, 00:30:59.755 "num_base_bdevs_operational": 1, 00:30:59.755 "base_bdevs_list": [ 00:30:59.755 { 00:30:59.755 "name": null, 00:30:59.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.755 "is_configured": false, 00:30:59.755 "data_offset": 256, 00:30:59.755 "data_size": 7936 00:30:59.755 }, 00:30:59.755 { 00:30:59.755 "name": "BaseBdev2", 00:30:59.755 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:30:59.755 "is_configured": true, 00:30:59.755 "data_offset": 256, 00:30:59.755 "data_size": 7936 00:30:59.755 } 00:30:59.755 ] 00:30:59.755 }' 00:30:59.755 15:24:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:59.755 15:24:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:00.014 15:24:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:00.273 [2024-07-23 15:24:55.640001] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:00.273 [2024-07-23 15:24:55.640212] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:00.273 [2024-07-23 15:24:55.640231] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:31:00.273 [2024-07-23 15:24:55.640274] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:00.273 [2024-07-23 15:24:55.644556] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0001bae70 00:31:00.273 [2024-07-23 15:24:55.646727] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:00.273 15:24:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:31:01.651 15:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:01.651 15:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:01.651 15:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:01.651 15:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:01.651 15:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:01.651 15:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.651 15:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.651 15:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:01.651 "name": "raid_bdev1", 00:31:01.651 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:31:01.651 "strip_size_kb": 0, 00:31:01.651 "state": "online", 00:31:01.651 "raid_level": "raid1", 00:31:01.651 "superblock": true, 00:31:01.651 "num_base_bdevs": 2, 00:31:01.651 "num_base_bdevs_discovered": 2, 00:31:01.651 "num_base_bdevs_operational": 2, 00:31:01.651 "process": { 00:31:01.651 "type": "rebuild", 00:31:01.651 "target": "spare", 00:31:01.651 "progress": { 00:31:01.651 "blocks": 3072, 00:31:01.651 "percent": 38 00:31:01.651 } 00:31:01.651 }, 00:31:01.651 "base_bdevs_list": [ 00:31:01.651 { 00:31:01.651 "name": "spare", 00:31:01.651 "uuid": "32fe231c-8b24-5b4b-a12b-942dc7d7b7e0", 00:31:01.651 "is_configured": true, 00:31:01.651 "data_offset": 256, 00:31:01.651 "data_size": 7936 00:31:01.651 }, 00:31:01.651 { 00:31:01.651 "name": "BaseBdev2", 00:31:01.651 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:31:01.651 "is_configured": true, 00:31:01.651 "data_offset": 256, 00:31:01.651 "data_size": 7936 00:31:01.651 } 00:31:01.651 ] 00:31:01.651 }' 00:31:01.651 15:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:01.651 15:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:01.651 15:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:01.651 15:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:01.651 15:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:01.910 [2024-07-23 15:24:57.157940] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:01.910 [2024-07-23 15:24:57.256174] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:01.910 [2024-07-23 15:24:57.256240] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:31:01.910 [2024-07-23 15:24:57.256259] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:01.910 [2024-07-23 15:24:57.256268] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:01.910 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:01.910 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:01.910 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:01.910 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:01.910 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:01.910 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:01.910 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:01.910 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:01.910 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:01.910 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:01.910 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.910 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:02.168 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:02.168 "name": "raid_bdev1", 00:31:02.168 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:31:02.168 "strip_size_kb": 0, 00:31:02.168 "state": "online", 00:31:02.168 "raid_level": "raid1", 00:31:02.168 "superblock": true, 00:31:02.168 "num_base_bdevs": 2, 00:31:02.168 "num_base_bdevs_discovered": 1, 00:31:02.168 "num_base_bdevs_operational": 1, 00:31:02.168 "base_bdevs_list": [ 00:31:02.168 { 00:31:02.168 "name": null, 00:31:02.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:02.168 "is_configured": false, 00:31:02.168 "data_offset": 256, 00:31:02.168 "data_size": 7936 00:31:02.168 }, 00:31:02.168 { 00:31:02.168 "name": "BaseBdev2", 00:31:02.168 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:31:02.168 "is_configured": true, 00:31:02.168 "data_offset": 256, 00:31:02.168 "data_size": 7936 00:31:02.168 } 00:31:02.168 ] 00:31:02.168 }' 00:31:02.168 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:02.168 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:02.736 15:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:02.736 [2024-07-23 15:24:58.105216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:02.736 [2024-07-23 15:24:58.105296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:02.736 [2024-07-23 15:24:58.105333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:31:02.736 [2024-07-23 15:24:58.105346] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:02.736 [2024-07-23 15:24:58.105806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:02.736 [2024-07-23 15:24:58.105832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:02.736 [2024-07-23 15:24:58.105918] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:02.736 [2024-07-23 15:24:58.105932] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:02.736 [2024-07-23 15:24:58.105947] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:02.736 [2024-07-23 15:24:58.105988] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:02.736 [2024-07-23 15:24:58.110269] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0001baf40 00:31:02.736 spare 00:31:02.736 [2024-07-23 15:24:58.112462] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:02.736 15:24:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:31:04.113 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:04.113 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:04.113 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:04.113 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:04.113 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:04.113 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:04.113 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.113 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:04.113 "name": "raid_bdev1", 00:31:04.113 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:31:04.113 "strip_size_kb": 0, 00:31:04.113 "state": "online", 00:31:04.113 "raid_level": "raid1", 00:31:04.113 "superblock": true, 00:31:04.113 "num_base_bdevs": 2, 00:31:04.113 "num_base_bdevs_discovered": 2, 00:31:04.113 "num_base_bdevs_operational": 2, 00:31:04.113 "process": { 00:31:04.113 "type": "rebuild", 00:31:04.113 "target": "spare", 00:31:04.113 "progress": { 00:31:04.113 "blocks": 3072, 00:31:04.113 "percent": 38 00:31:04.113 } 00:31:04.113 }, 00:31:04.113 "base_bdevs_list": [ 00:31:04.113 { 00:31:04.113 "name": "spare", 00:31:04.113 "uuid": "32fe231c-8b24-5b4b-a12b-942dc7d7b7e0", 00:31:04.113 "is_configured": true, 00:31:04.113 "data_offset": 256, 00:31:04.113 "data_size": 7936 00:31:04.113 }, 00:31:04.113 { 00:31:04.113 "name": "BaseBdev2", 00:31:04.113 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:31:04.113 "is_configured": true, 00:31:04.113 "data_offset": 256, 00:31:04.113 "data_size": 7936 00:31:04.113 } 00:31:04.113 ] 00:31:04.113 }' 00:31:04.113 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:04.113 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:31:04.113 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:04.113 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:04.113 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:04.372 [2024-07-23 15:24:59.590932] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:04.372 [2024-07-23 15:24:59.621124] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:04.372 [2024-07-23 15:24:59.621202] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:04.372 [2024-07-23 15:24:59.621220] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:04.372 [2024-07-23 15:24:59.621231] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:04.372 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:04.372 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:04.372 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:04.372 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:04.372 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:04.372 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:04.372 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:04.372 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:04.372 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:04.372 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:04.372 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:04.372 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.631 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:04.631 "name": "raid_bdev1", 00:31:04.631 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:31:04.631 "strip_size_kb": 0, 00:31:04.631 "state": "online", 00:31:04.631 "raid_level": "raid1", 00:31:04.631 "superblock": true, 00:31:04.631 "num_base_bdevs": 2, 00:31:04.631 "num_base_bdevs_discovered": 1, 00:31:04.631 "num_base_bdevs_operational": 1, 00:31:04.631 "base_bdevs_list": [ 00:31:04.631 { 00:31:04.631 "name": null, 00:31:04.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.631 "is_configured": false, 00:31:04.631 "data_offset": 256, 00:31:04.631 "data_size": 7936 00:31:04.631 }, 00:31:04.631 { 00:31:04.631 "name": "BaseBdev2", 00:31:04.631 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:31:04.631 "is_configured": true, 00:31:04.631 "data_offset": 256, 00:31:04.631 "data_size": 7936 00:31:04.631 } 00:31:04.631 ] 00:31:04.631 }' 00:31:04.631 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:31:04.631 15:24:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:04.892 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:04.892 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:04.892 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:04.892 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:04.892 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:04.892 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:04.892 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.151 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:05.151 "name": "raid_bdev1", 00:31:05.151 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:31:05.151 "strip_size_kb": 0, 00:31:05.151 "state": "online", 00:31:05.151 "raid_level": "raid1", 00:31:05.151 "superblock": true, 00:31:05.151 "num_base_bdevs": 2, 00:31:05.151 "num_base_bdevs_discovered": 1, 00:31:05.151 "num_base_bdevs_operational": 1, 00:31:05.151 "base_bdevs_list": [ 00:31:05.151 { 00:31:05.151 "name": null, 00:31:05.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:05.151 "is_configured": false, 00:31:05.151 "data_offset": 256, 00:31:05.151 "data_size": 7936 00:31:05.151 }, 00:31:05.151 { 00:31:05.151 "name": "BaseBdev2", 00:31:05.151 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:31:05.151 "is_configured": true, 00:31:05.151 "data_offset": 256, 00:31:05.151 "data_size": 7936 00:31:05.151 } 00:31:05.151 ] 00:31:05.151 }' 00:31:05.151 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:05.151 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:05.151 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:05.151 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:05.151 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:31:05.411 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:05.671 [2024-07-23 15:25:00.894460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:05.671 [2024-07-23 15:25:00.894565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:05.671 [2024-07-23 15:25:00.894594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:31:05.671 [2024-07-23 15:25:00.894610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:05.671 [2024-07-23 15:25:00.895051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:05.671 [2024-07-23 15:25:00.895086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:31:05.671 [2024-07-23 15:25:00.895165] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:05.671 [2024-07-23 15:25:00.895188] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:05.671 [2024-07-23 15:25:00.895198] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:05.671 BaseBdev1 00:31:05.671 15:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:31:06.607 15:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:06.607 15:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:06.607 15:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:06.607 15:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:06.607 15:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:06.607 15:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:06.607 15:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:06.607 15:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:06.607 15:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:06.607 15:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:06.607 15:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:06.607 15:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.865 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:06.865 "name": "raid_bdev1", 00:31:06.865 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:31:06.865 "strip_size_kb": 0, 00:31:06.865 "state": "online", 00:31:06.865 "raid_level": "raid1", 00:31:06.865 "superblock": true, 00:31:06.865 "num_base_bdevs": 2, 00:31:06.865 "num_base_bdevs_discovered": 1, 00:31:06.865 "num_base_bdevs_operational": 1, 00:31:06.865 "base_bdevs_list": [ 00:31:06.865 { 00:31:06.865 "name": null, 00:31:06.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.865 "is_configured": false, 00:31:06.865 "data_offset": 256, 00:31:06.865 "data_size": 7936 00:31:06.865 }, 00:31:06.865 { 00:31:06.865 "name": "BaseBdev2", 00:31:06.865 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:31:06.865 "is_configured": true, 00:31:06.865 "data_offset": 256, 00:31:06.865 "data_size": 7936 00:31:06.865 } 00:31:06.865 ] 00:31:06.865 }' 00:31:06.865 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:06.865 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:07.123 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:07.123 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:07.123 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:31:07.123 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:07.123 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:07.123 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:07.123 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:07.383 "name": "raid_bdev1", 00:31:07.383 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:31:07.383 "strip_size_kb": 0, 00:31:07.383 "state": "online", 00:31:07.383 "raid_level": "raid1", 00:31:07.383 "superblock": true, 00:31:07.383 "num_base_bdevs": 2, 00:31:07.383 "num_base_bdevs_discovered": 1, 00:31:07.383 "num_base_bdevs_operational": 1, 00:31:07.383 "base_bdevs_list": [ 00:31:07.383 { 00:31:07.383 "name": null, 00:31:07.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:07.383 "is_configured": false, 00:31:07.383 "data_offset": 256, 00:31:07.383 "data_size": 7936 00:31:07.383 }, 00:31:07.383 { 00:31:07.383 "name": "BaseBdev2", 00:31:07.383 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:31:07.383 "is_configured": true, 00:31:07.383 "data_offset": 256, 00:31:07.383 "data_size": 7936 00:31:07.383 } 00:31:07.383 ] 00:31:07.383 }' 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:07.383 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:07.656 [2024-07-23 15:25:02.930924] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:07.656 [2024-07-23 15:25:02.931107] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:07.656 [2024-07-23 15:25:02.931126] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:07.656 request: 00:31:07.656 { 00:31:07.656 "base_bdev": "BaseBdev1", 00:31:07.656 "raid_bdev": "raid_bdev1", 00:31:07.656 "method": "bdev_raid_add_base_bdev", 00:31:07.656 "req_id": 1 00:31:07.656 } 00:31:07.656 Got JSON-RPC error response 00:31:07.656 response: 00:31:07.656 { 00:31:07.656 "code": -22, 00:31:07.656 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:07.656 } 00:31:07.656 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:31:07.656 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:07.656 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:07.656 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:07.656 15:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:31:08.623 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:08.623 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:08.623 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:08.623 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:08.623 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:08.623 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:08.624 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:08.624 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:08.624 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:08.624 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:08.624 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:08.624 15:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:08.882 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:08.882 "name": "raid_bdev1", 00:31:08.882 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:31:08.882 "strip_size_kb": 0, 00:31:08.882 "state": "online", 00:31:08.882 "raid_level": "raid1", 00:31:08.882 "superblock": true, 00:31:08.882 "num_base_bdevs": 2, 00:31:08.882 "num_base_bdevs_discovered": 1, 00:31:08.882 "num_base_bdevs_operational": 1, 00:31:08.882 
"base_bdevs_list": [ 00:31:08.882 { 00:31:08.882 "name": null, 00:31:08.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:08.882 "is_configured": false, 00:31:08.882 "data_offset": 256, 00:31:08.882 "data_size": 7936 00:31:08.882 }, 00:31:08.882 { 00:31:08.882 "name": "BaseBdev2", 00:31:08.882 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:31:08.882 "is_configured": true, 00:31:08.882 "data_offset": 256, 00:31:08.882 "data_size": 7936 00:31:08.882 } 00:31:08.882 ] 00:31:08.882 }' 00:31:08.882 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:08.882 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:09.140 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:09.140 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:09.140 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:09.140 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:09.140 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:09.140 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:09.140 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:09.399 "name": "raid_bdev1", 00:31:09.399 "uuid": "e80880fe-6b7f-4408-ad87-cdb2933940c4", 00:31:09.399 "strip_size_kb": 0, 00:31:09.399 "state": "online", 00:31:09.399 "raid_level": "raid1", 00:31:09.399 "superblock": true, 00:31:09.399 "num_base_bdevs": 2, 00:31:09.399 "num_base_bdevs_discovered": 1, 00:31:09.399 "num_base_bdevs_operational": 1, 00:31:09.399 "base_bdevs_list": [ 00:31:09.399 { 00:31:09.399 "name": null, 00:31:09.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.399 "is_configured": false, 00:31:09.399 "data_offset": 256, 00:31:09.399 "data_size": 7936 00:31:09.399 }, 00:31:09.399 { 00:31:09.399 "name": "BaseBdev2", 00:31:09.399 "uuid": "f3a48b2b-f9ce-579b-852e-27e41d2266f6", 00:31:09.399 "is_configured": true, 00:31:09.399 "data_offset": 256, 00:31:09.399 "data_size": 7936 00:31:09.399 } 00:31:09.399 ] 00:31:09.399 }' 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 121022 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 121022 ']' 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 121022 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121022 00:31:09.399 killing process with pid 121022 00:31:09.399 Received shutdown signal, test time was about 60.000000 seconds 00:31:09.399 00:31:09.399 Latency(us) 00:31:09.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.399 =================================================================================================================== 00:31:09.399 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121022' 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # kill 121022 00:31:09.399 [2024-07-23 15:25:04.806920] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:09.399 [2024-07-23 15:25:04.807056] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:09.399 [2024-07-23 15:25:04.807104] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:09.399 [2024-07-23 15:25:04.807118] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:31:09.399 15:25:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # wait 121022 00:31:09.657 [2024-07-23 15:25:04.839104] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:09.657 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:31:09.657 00:31:09.657 real 0m26.459s 00:31:09.657 user 0m38.732s 00:31:09.657 sys 0m4.581s 00:31:09.657 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:09.657 15:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:09.657 ************************************ 00:31:09.657 END TEST raid_rebuild_test_sb_4k 00:31:09.657 ************************************ 00:31:09.915 15:25:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:09.915 15:25:05 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:31:09.915 15:25:05 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:31:09.915 15:25:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:31:09.915 15:25:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:09.915 15:25:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:09.915 ************************************ 00:31:09.915 START TEST raid_state_function_test_sb_md_separate 00:31:09.915 ************************************ 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@222 -- # local superblock=true 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=121815 00:31:09.915 Process raid pid: 121815 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 121815' 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 121815 /var/tmp/spdk-raid.sock 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 121815 ']' 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:09.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
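[annotation] Each test in this suite starts its own bdev_svc application on a dedicated RPC socket before issuing RPCs, then waits for that socket to come up. A simplified bring-up sketch, using the binary path and flags seen in this run; the polling loop and the rpc_get_methods liveness probe are stand-ins for the suite's waitforlisten helper, and capturing the pid via $! is an assumption of this sketch:

  sock=/var/tmp/spdk-raid.sock
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
  raid_pid=$!
  # Wait until the app answers RPCs on the socket.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done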
00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:09.915 15:25:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:09.915 [2024-07-23 15:25:05.227624] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:31:09.915 [2024-07-23 15:25:05.228137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:10.172 [2024-07-23 15:25:05.382076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.172 [2024-07-23 15:25:05.426885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.172 [2024-07-23 15:25:05.472216] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:10.738 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:10.738 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:31:10.738 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:10.997 [2024-07-23 15:25:06.298445] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:10.997 [2024-07-23 15:25:06.298534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:10.997 [2024-07-23 15:25:06.298546] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:10.997 [2024-07-23 15:25:06.298560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:10.997 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:10.997 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:10.997 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:10.997 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:10.997 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:10.997 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:10.997 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:10.997 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:10.997 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:10.997 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:10.997 15:25:06 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:10.997 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:11.256 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:11.256 "name": "Existed_Raid", 00:31:11.256 "uuid": "ba18086b-f27c-4f84-ad88-4675393f4b8b", 00:31:11.256 "strip_size_kb": 0, 00:31:11.256 "state": "configuring", 00:31:11.256 "raid_level": "raid1", 00:31:11.256 "superblock": true, 00:31:11.256 "num_base_bdevs": 2, 00:31:11.256 "num_base_bdevs_discovered": 0, 00:31:11.256 "num_base_bdevs_operational": 2, 00:31:11.256 "base_bdevs_list": [ 00:31:11.256 { 00:31:11.256 "name": "BaseBdev1", 00:31:11.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:11.256 "is_configured": false, 00:31:11.256 "data_offset": 0, 00:31:11.256 "data_size": 0 00:31:11.256 }, 00:31:11.256 { 00:31:11.256 "name": "BaseBdev2", 00:31:11.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:11.256 "is_configured": false, 00:31:11.256 "data_offset": 0, 00:31:11.256 "data_size": 0 00:31:11.256 } 00:31:11.256 ] 00:31:11.256 }' 00:31:11.256 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:11.256 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:11.514 15:25:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:11.773 [2024-07-23 15:25:07.010452] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:11.773 [2024-07-23 15:25:07.010510] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:31:11.773 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:11.773 [2024-07-23 15:25:07.186532] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:11.773 [2024-07-23 15:25:07.186598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:11.773 [2024-07-23 15:25:07.186626] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:11.773 [2024-07-23 15:25:07.186639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:11.773 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:31:12.032 BaseBdev1 00:31:12.032 [2024-07-23 15:25:07.376905] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:12.032 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:31:12.032 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:31:12.032 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 
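[annotation] Because the raid is created with -s (superblock) before either base bdev exists, it sits in the "configuring" state with zero discovered base bdevs, which is what the verify_raid_bdev_state call above asserts. A condensed sketch of that create-then-inspect step, using the commands and socket from this trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state, .num_base_bdevs_discovered'
  # prints "configuring" and 0 until the base bdevs are created and claimed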
00:31:12.032 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:31:12.032 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:12.032 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:12.032 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:12.290 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:12.549 [ 00:31:12.549 { 00:31:12.549 "name": "BaseBdev1", 00:31:12.549 "aliases": [ 00:31:12.549 "51deab0f-4e61-4dc8-a06b-c2cc9d3f0f74" 00:31:12.549 ], 00:31:12.549 "product_name": "Malloc disk", 00:31:12.549 "block_size": 4096, 00:31:12.549 "num_blocks": 8192, 00:31:12.549 "uuid": "51deab0f-4e61-4dc8-a06b-c2cc9d3f0f74", 00:31:12.549 "md_size": 32, 00:31:12.549 "md_interleave": false, 00:31:12.549 "dif_type": 0, 00:31:12.549 "assigned_rate_limits": { 00:31:12.549 "rw_ios_per_sec": 0, 00:31:12.549 "rw_mbytes_per_sec": 0, 00:31:12.549 "r_mbytes_per_sec": 0, 00:31:12.549 "w_mbytes_per_sec": 0 00:31:12.549 }, 00:31:12.549 "claimed": true, 00:31:12.549 "claim_type": "exclusive_write", 00:31:12.549 "zoned": false, 00:31:12.549 "supported_io_types": { 00:31:12.549 "read": true, 00:31:12.549 "write": true, 00:31:12.549 "unmap": true, 00:31:12.549 "flush": true, 00:31:12.549 "reset": true, 00:31:12.549 "nvme_admin": false, 00:31:12.549 "nvme_io": false, 00:31:12.549 "nvme_io_md": false, 00:31:12.549 "write_zeroes": true, 00:31:12.549 "zcopy": true, 00:31:12.549 "get_zone_info": false, 00:31:12.549 "zone_management": false, 00:31:12.549 "zone_append": false, 00:31:12.549 "compare": false, 00:31:12.549 "compare_and_write": false, 00:31:12.549 "abort": true, 00:31:12.549 "seek_hole": false, 00:31:12.549 "seek_data": false, 00:31:12.549 "copy": true, 00:31:12.549 "nvme_iov_md": false 00:31:12.549 }, 00:31:12.549 "memory_domains": [ 00:31:12.549 { 00:31:12.549 "dma_device_id": "system", 00:31:12.549 "dma_device_type": 1 00:31:12.549 }, 00:31:12.549 { 00:31:12.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:12.549 "dma_device_type": 2 00:31:12.549 } 00:31:12.549 ], 00:31:12.549 "driver_specific": {} 00:31:12.549 } 00:31:12.549 ] 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:12.549 "name": "Existed_Raid", 00:31:12.549 "uuid": "5c0c793e-99e2-4800-98d5-d4c5945be08a", 00:31:12.549 "strip_size_kb": 0, 00:31:12.549 "state": "configuring", 00:31:12.549 "raid_level": "raid1", 00:31:12.549 "superblock": true, 00:31:12.549 "num_base_bdevs": 2, 00:31:12.549 "num_base_bdevs_discovered": 1, 00:31:12.549 "num_base_bdevs_operational": 2, 00:31:12.549 "base_bdevs_list": [ 00:31:12.549 { 00:31:12.549 "name": "BaseBdev1", 00:31:12.549 "uuid": "51deab0f-4e61-4dc8-a06b-c2cc9d3f0f74", 00:31:12.549 "is_configured": true, 00:31:12.549 "data_offset": 256, 00:31:12.549 "data_size": 7936 00:31:12.549 }, 00:31:12.549 { 00:31:12.549 "name": "BaseBdev2", 00:31:12.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:12.549 "is_configured": false, 00:31:12.549 "data_offset": 0, 00:31:12.549 "data_size": 0 00:31:12.549 } 00:31:12.549 ] 00:31:12.549 }' 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:12.549 15:25:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:12.808 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:13.067 [2024-07-23 15:25:08.353223] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:13.067 [2024-07-23 15:25:08.353463] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:31:13.067 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:13.326 [2024-07-23 15:25:08.525317] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:13.326 [2024-07-23 15:25:08.527702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:13.326 [2024-07-23 15:25:08.527758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:13.326 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:31:13.326 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:13.326 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:13.326 15:25:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:13.326 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:13.326 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:13.326 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:13.326 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:13.326 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:13.326 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:13.326 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:13.326 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:13.326 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:13.326 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:13.585 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:13.585 "name": "Existed_Raid", 00:31:13.585 "uuid": "39e80cb1-d127-4809-80f2-e18691f7ebe4", 00:31:13.585 "strip_size_kb": 0, 00:31:13.585 "state": "configuring", 00:31:13.585 "raid_level": "raid1", 00:31:13.585 "superblock": true, 00:31:13.585 "num_base_bdevs": 2, 00:31:13.585 "num_base_bdevs_discovered": 1, 00:31:13.585 "num_base_bdevs_operational": 2, 00:31:13.585 "base_bdevs_list": [ 00:31:13.585 { 00:31:13.585 "name": "BaseBdev1", 00:31:13.585 "uuid": "51deab0f-4e61-4dc8-a06b-c2cc9d3f0f74", 00:31:13.585 "is_configured": true, 00:31:13.585 "data_offset": 256, 00:31:13.585 "data_size": 7936 00:31:13.585 }, 00:31:13.585 { 00:31:13.585 "name": "BaseBdev2", 00:31:13.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.585 "is_configured": false, 00:31:13.585 "data_offset": 0, 00:31:13.585 "data_size": 0 00:31:13.585 } 00:31:13.585 ] 00:31:13.585 }' 00:31:13.585 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:13.585 15:25:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:13.843 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:31:14.102 [2024-07-23 15:25:09.314019] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:14.102 [2024-07-23 15:25:09.314247] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:31:14.102 [2024-07-23 15:25:09.314273] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:14.102 [2024-07-23 15:25:09.314417] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:31:14.102 [2024-07-23 15:25:09.314576] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 
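[annotation] The *_md_separate variants pass base_malloc_params='-m 32', so each base bdev exposes 32 bytes of metadata per 4096-byte block in a separate (non-interleaved) area; that is why the bdev_get_bdevs dumps in this trace report md_size 32 and md_interleave false, and why the raid transitions to online once the second such base bdev is claimed. A minimal reproduction of that base-bdev setup (paths reused from this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # 32 MiB malloc bdev, 4096-byte blocks, 32-byte separate metadata -> 8192 blocks.
  "$rpc" -s "$sock" bdev_malloc_create 32 4096 -m 32 -b BaseBdev2
  "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev2 \
      | jq '.[0] | {block_size, md_size, md_interleave}'
  # expected: {"block_size": 4096, "md_size": 32, "md_interleave": false}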
00:31:14.102 [2024-07-23 15:25:09.314595] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:31:14.102 [2024-07-23 15:25:09.314697] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:14.102 BaseBdev2 00:31:14.102 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:31:14.102 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:31:14.102 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:14.102 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:31:14.102 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:14.102 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:14.102 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:14.102 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:14.361 [ 00:31:14.361 { 00:31:14.361 "name": "BaseBdev2", 00:31:14.361 "aliases": [ 00:31:14.361 "8b875765-48e9-4329-822f-abe0eacecf87" 00:31:14.361 ], 00:31:14.361 "product_name": "Malloc disk", 00:31:14.361 "block_size": 4096, 00:31:14.361 "num_blocks": 8192, 00:31:14.361 "uuid": "8b875765-48e9-4329-822f-abe0eacecf87", 00:31:14.361 "md_size": 32, 00:31:14.361 "md_interleave": false, 00:31:14.361 "dif_type": 0, 00:31:14.361 "assigned_rate_limits": { 00:31:14.361 "rw_ios_per_sec": 0, 00:31:14.361 "rw_mbytes_per_sec": 0, 00:31:14.361 "r_mbytes_per_sec": 0, 00:31:14.361 "w_mbytes_per_sec": 0 00:31:14.361 }, 00:31:14.361 "claimed": true, 00:31:14.361 "claim_type": "exclusive_write", 00:31:14.361 "zoned": false, 00:31:14.361 "supported_io_types": { 00:31:14.361 "read": true, 00:31:14.361 "write": true, 00:31:14.361 "unmap": true, 00:31:14.361 "flush": true, 00:31:14.361 "reset": true, 00:31:14.361 "nvme_admin": false, 00:31:14.361 "nvme_io": false, 00:31:14.361 "nvme_io_md": false, 00:31:14.361 "write_zeroes": true, 00:31:14.361 "zcopy": true, 00:31:14.361 "get_zone_info": false, 00:31:14.361 "zone_management": false, 00:31:14.361 "zone_append": false, 00:31:14.361 "compare": false, 00:31:14.361 "compare_and_write": false, 00:31:14.361 "abort": true, 00:31:14.361 "seek_hole": false, 00:31:14.361 "seek_data": false, 00:31:14.361 "copy": true, 00:31:14.361 "nvme_iov_md": false 00:31:14.361 }, 00:31:14.361 "memory_domains": [ 00:31:14.361 { 00:31:14.361 "dma_device_id": "system", 00:31:14.361 "dma_device_type": 1 00:31:14.361 }, 00:31:14.361 { 00:31:14.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:14.361 "dma_device_type": 2 00:31:14.361 } 00:31:14.361 ], 00:31:14.361 "driver_specific": {} 00:31:14.361 } 00:31:14.361 ] 00:31:14.361 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:31:14.362 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:14.362 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:14.362 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:14.362 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:14.362 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:14.362 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:14.362 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:14.362 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:14.362 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:14.362 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:14.362 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:14.362 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:14.362 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:14.362 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:14.620 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:14.620 "name": "Existed_Raid", 00:31:14.620 "uuid": "39e80cb1-d127-4809-80f2-e18691f7ebe4", 00:31:14.620 "strip_size_kb": 0, 00:31:14.620 "state": "online", 00:31:14.620 "raid_level": "raid1", 00:31:14.620 "superblock": true, 00:31:14.620 "num_base_bdevs": 2, 00:31:14.621 "num_base_bdevs_discovered": 2, 00:31:14.621 "num_base_bdevs_operational": 2, 00:31:14.621 "base_bdevs_list": [ 00:31:14.621 { 00:31:14.621 "name": "BaseBdev1", 00:31:14.621 "uuid": "51deab0f-4e61-4dc8-a06b-c2cc9d3f0f74", 00:31:14.621 "is_configured": true, 00:31:14.621 "data_offset": 256, 00:31:14.621 "data_size": 7936 00:31:14.621 }, 00:31:14.621 { 00:31:14.621 "name": "BaseBdev2", 00:31:14.621 "uuid": "8b875765-48e9-4329-822f-abe0eacecf87", 00:31:14.621 "is_configured": true, 00:31:14.621 "data_offset": 256, 00:31:14.621 "data_size": 7936 00:31:14.621 } 00:31:14.621 ] 00:31:14.621 }' 00:31:14.621 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:14.621 15:25:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:14.879 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:31:14.879 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:14.879 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:14.879 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:14.879 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:31:14.879 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:31:14.879 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:14.879 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:15.138 [2024-07-23 15:25:10.494638] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:15.138 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:15.138 "name": "Existed_Raid", 00:31:15.138 "aliases": [ 00:31:15.138 "39e80cb1-d127-4809-80f2-e18691f7ebe4" 00:31:15.138 ], 00:31:15.138 "product_name": "Raid Volume", 00:31:15.138 "block_size": 4096, 00:31:15.138 "num_blocks": 7936, 00:31:15.138 "uuid": "39e80cb1-d127-4809-80f2-e18691f7ebe4", 00:31:15.138 "md_size": 32, 00:31:15.138 "md_interleave": false, 00:31:15.138 "dif_type": 0, 00:31:15.138 "assigned_rate_limits": { 00:31:15.138 "rw_ios_per_sec": 0, 00:31:15.138 "rw_mbytes_per_sec": 0, 00:31:15.138 "r_mbytes_per_sec": 0, 00:31:15.138 "w_mbytes_per_sec": 0 00:31:15.138 }, 00:31:15.138 "claimed": false, 00:31:15.138 "zoned": false, 00:31:15.138 "supported_io_types": { 00:31:15.138 "read": true, 00:31:15.138 "write": true, 00:31:15.138 "unmap": false, 00:31:15.138 "flush": false, 00:31:15.138 "reset": true, 00:31:15.138 "nvme_admin": false, 00:31:15.138 "nvme_io": false, 00:31:15.138 "nvme_io_md": false, 00:31:15.138 "write_zeroes": true, 00:31:15.138 "zcopy": false, 00:31:15.138 "get_zone_info": false, 00:31:15.139 "zone_management": false, 00:31:15.139 "zone_append": false, 00:31:15.139 "compare": false, 00:31:15.139 "compare_and_write": false, 00:31:15.139 "abort": false, 00:31:15.139 "seek_hole": false, 00:31:15.139 "seek_data": false, 00:31:15.139 "copy": false, 00:31:15.139 "nvme_iov_md": false 00:31:15.139 }, 00:31:15.139 "memory_domains": [ 00:31:15.139 { 00:31:15.139 "dma_device_id": "system", 00:31:15.139 "dma_device_type": 1 00:31:15.139 }, 00:31:15.139 { 00:31:15.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.139 "dma_device_type": 2 00:31:15.139 }, 00:31:15.139 { 00:31:15.139 "dma_device_id": "system", 00:31:15.139 "dma_device_type": 1 00:31:15.139 }, 00:31:15.139 { 00:31:15.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.139 "dma_device_type": 2 00:31:15.139 } 00:31:15.139 ], 00:31:15.139 "driver_specific": { 00:31:15.139 "raid": { 00:31:15.139 "uuid": "39e80cb1-d127-4809-80f2-e18691f7ebe4", 00:31:15.139 "strip_size_kb": 0, 00:31:15.139 "state": "online", 00:31:15.139 "raid_level": "raid1", 00:31:15.139 "superblock": true, 00:31:15.139 "num_base_bdevs": 2, 00:31:15.139 "num_base_bdevs_discovered": 2, 00:31:15.139 "num_base_bdevs_operational": 2, 00:31:15.139 "base_bdevs_list": [ 00:31:15.139 { 00:31:15.139 "name": "BaseBdev1", 00:31:15.139 "uuid": "51deab0f-4e61-4dc8-a06b-c2cc9d3f0f74", 00:31:15.139 "is_configured": true, 00:31:15.139 "data_offset": 256, 00:31:15.139 "data_size": 7936 00:31:15.139 }, 00:31:15.139 { 00:31:15.139 "name": "BaseBdev2", 00:31:15.139 "uuid": "8b875765-48e9-4329-822f-abe0eacecf87", 00:31:15.139 "is_configured": true, 00:31:15.139 "data_offset": 256, 00:31:15.139 "data_size": 7936 00:31:15.139 } 00:31:15.139 ] 00:31:15.139 } 00:31:15.139 } 00:31:15.139 }' 00:31:15.139 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- 
# jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:15.139 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:31:15.139 BaseBdev2' 00:31:15.139 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:15.139 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:31:15.139 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:15.398 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:15.398 "name": "BaseBdev1", 00:31:15.398 "aliases": [ 00:31:15.398 "51deab0f-4e61-4dc8-a06b-c2cc9d3f0f74" 00:31:15.398 ], 00:31:15.398 "product_name": "Malloc disk", 00:31:15.398 "block_size": 4096, 00:31:15.398 "num_blocks": 8192, 00:31:15.398 "uuid": "51deab0f-4e61-4dc8-a06b-c2cc9d3f0f74", 00:31:15.398 "md_size": 32, 00:31:15.398 "md_interleave": false, 00:31:15.398 "dif_type": 0, 00:31:15.398 "assigned_rate_limits": { 00:31:15.398 "rw_ios_per_sec": 0, 00:31:15.398 "rw_mbytes_per_sec": 0, 00:31:15.398 "r_mbytes_per_sec": 0, 00:31:15.398 "w_mbytes_per_sec": 0 00:31:15.398 }, 00:31:15.398 "claimed": true, 00:31:15.398 "claim_type": "exclusive_write", 00:31:15.398 "zoned": false, 00:31:15.398 "supported_io_types": { 00:31:15.398 "read": true, 00:31:15.398 "write": true, 00:31:15.398 "unmap": true, 00:31:15.398 "flush": true, 00:31:15.398 "reset": true, 00:31:15.398 "nvme_admin": false, 00:31:15.398 "nvme_io": false, 00:31:15.398 "nvme_io_md": false, 00:31:15.398 "write_zeroes": true, 00:31:15.398 "zcopy": true, 00:31:15.398 "get_zone_info": false, 00:31:15.398 "zone_management": false, 00:31:15.398 "zone_append": false, 00:31:15.398 "compare": false, 00:31:15.398 "compare_and_write": false, 00:31:15.398 "abort": true, 00:31:15.398 "seek_hole": false, 00:31:15.398 "seek_data": false, 00:31:15.398 "copy": true, 00:31:15.398 "nvme_iov_md": false 00:31:15.398 }, 00:31:15.398 "memory_domains": [ 00:31:15.398 { 00:31:15.398 "dma_device_id": "system", 00:31:15.398 "dma_device_type": 1 00:31:15.398 }, 00:31:15.398 { 00:31:15.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.398 "dma_device_type": 2 00:31:15.398 } 00:31:15.398 ], 00:31:15.398 "driver_specific": {} 00:31:15.398 }' 00:31:15.398 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:15.398 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:15.398 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:15.398 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:15.398 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:15.398 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:31:15.398 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:15.657 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:15.657 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- 
# [[ false == false ]] 00:31:15.657 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:15.657 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:15.657 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:31:15.657 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:15.657 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:15.657 15:25:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:15.933 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:15.933 "name": "BaseBdev2", 00:31:15.933 "aliases": [ 00:31:15.933 "8b875765-48e9-4329-822f-abe0eacecf87" 00:31:15.933 ], 00:31:15.933 "product_name": "Malloc disk", 00:31:15.933 "block_size": 4096, 00:31:15.933 "num_blocks": 8192, 00:31:15.933 "uuid": "8b875765-48e9-4329-822f-abe0eacecf87", 00:31:15.933 "md_size": 32, 00:31:15.933 "md_interleave": false, 00:31:15.933 "dif_type": 0, 00:31:15.933 "assigned_rate_limits": { 00:31:15.933 "rw_ios_per_sec": 0, 00:31:15.933 "rw_mbytes_per_sec": 0, 00:31:15.933 "r_mbytes_per_sec": 0, 00:31:15.933 "w_mbytes_per_sec": 0 00:31:15.933 }, 00:31:15.933 "claimed": true, 00:31:15.933 "claim_type": "exclusive_write", 00:31:15.933 "zoned": false, 00:31:15.933 "supported_io_types": { 00:31:15.933 "read": true, 00:31:15.933 "write": true, 00:31:15.933 "unmap": true, 00:31:15.933 "flush": true, 00:31:15.933 "reset": true, 00:31:15.933 "nvme_admin": false, 00:31:15.933 "nvme_io": false, 00:31:15.933 "nvme_io_md": false, 00:31:15.933 "write_zeroes": true, 00:31:15.933 "zcopy": true, 00:31:15.933 "get_zone_info": false, 00:31:15.933 "zone_management": false, 00:31:15.933 "zone_append": false, 00:31:15.933 "compare": false, 00:31:15.933 "compare_and_write": false, 00:31:15.933 "abort": true, 00:31:15.933 "seek_hole": false, 00:31:15.933 "seek_data": false, 00:31:15.933 "copy": true, 00:31:15.933 "nvme_iov_md": false 00:31:15.933 }, 00:31:15.933 "memory_domains": [ 00:31:15.933 { 00:31:15.933 "dma_device_id": "system", 00:31:15.933 "dma_device_type": 1 00:31:15.933 }, 00:31:15.933 { 00:31:15.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.933 "dma_device_type": 2 00:31:15.933 } 00:31:15.933 ], 00:31:15.933 "driver_specific": {} 00:31:15.933 }' 00:31:15.933 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:15.933 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:15.933 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:15.933 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:15.933 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:15.933 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:31:15.934 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:15.934 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:15.934 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:15.934 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:15.934 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:15.934 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:31:15.934 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:16.200 [2024-07-23 15:25:11.402654] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.200 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:16.459 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:16.459 "name": "Existed_Raid", 00:31:16.459 "uuid": "39e80cb1-d127-4809-80f2-e18691f7ebe4", 00:31:16.459 "strip_size_kb": 0, 00:31:16.459 "state": "online", 00:31:16.459 "raid_level": "raid1", 00:31:16.459 "superblock": true, 00:31:16.459 "num_base_bdevs": 2, 00:31:16.459 "num_base_bdevs_discovered": 1, 00:31:16.459 "num_base_bdevs_operational": 1, 00:31:16.459 
"base_bdevs_list": [ 00:31:16.459 { 00:31:16.459 "name": null, 00:31:16.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.459 "is_configured": false, 00:31:16.459 "data_offset": 256, 00:31:16.459 "data_size": 7936 00:31:16.459 }, 00:31:16.459 { 00:31:16.459 "name": "BaseBdev2", 00:31:16.459 "uuid": "8b875765-48e9-4329-822f-abe0eacecf87", 00:31:16.459 "is_configured": true, 00:31:16.459 "data_offset": 256, 00:31:16.459 "data_size": 7936 00:31:16.459 } 00:31:16.459 ] 00:31:16.459 }' 00:31:16.459 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:16.459 15:25:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:16.718 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:31:16.718 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:16.718 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.718 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:16.976 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:16.976 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:16.976 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:17.235 [2024-07-23 15:25:12.520537] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:17.235 [2024-07-23 15:25:12.520662] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:17.235 [2024-07-23 15:25:12.534279] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:17.235 [2024-07-23 15:25:12.534336] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:17.235 [2024-07-23 15:25:12.534351] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:31:17.235 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:17.235 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:17.235 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.235 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:31:17.494 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:31:17.494 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:31:17.494 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:31:17.494 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 121815 00:31:17.494 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@948 -- # '[' -z 121815 ']' 00:31:17.494 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 121815 00:31:17.494 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:31:17.494 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:17.494 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121815 00:31:17.494 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:17.494 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:17.494 killing process with pid 121815 00:31:17.494 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121815' 00:31:17.494 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 121815 00:31:17.494 [2024-07-23 15:25:12.857046] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:17.494 15:25:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 121815 00:31:17.494 [2024-07-23 15:25:12.857125] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:17.753 15:25:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:31:17.753 00:31:17.753 real 0m7.951s 00:31:17.753 user 0m13.324s 00:31:17.753 sys 0m1.728s 00:31:17.753 15:25:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:17.753 ************************************ 00:31:17.753 END TEST raid_state_function_test_sb_md_separate 00:31:17.753 ************************************ 00:31:17.753 15:25:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:17.753 15:25:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:17.753 15:25:13 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:31:17.753 15:25:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:31:17.753 15:25:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:17.753 15:25:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:17.753 ************************************ 00:31:17.753 START TEST raid_superblock_test_md_separate 00:31:17.753 ************************************ 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 
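[annotation] The shutdown sequence traced a few lines above (kill -0 probe, ps lookup of the process name, kill, then wait on the pid) follows the suite's killprocess helper. A simplified stand-in with the same shape; the sudo/uname branches of the real helper are elided here:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0        # nothing to do if it already exited
      local name
      name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      wait "$pid" 2>/dev/null || true
  }
  killprocess "$raid_pid"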
00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=122135 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 122135 /var/tmp/spdk-raid.sock 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 122135 ']' 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:17.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:17.753 15:25:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:18.012 [2024-07-23 15:25:13.220620] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:31:18.012 [2024-07-23 15:25:13.221005] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122135 ] 00:31:18.012 [2024-07-23 15:25:13.359652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.012 [2024-07-23 15:25:13.408727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.270 [2024-07-23 15:25:13.454469] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:18.836 15:25:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:18.836 15:25:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:31:18.836 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:31:18.836 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:18.836 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:31:18.836 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:31:18.836 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:18.836 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:18.836 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:18.836 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:18.836 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:31:19.094 malloc1 00:31:19.094 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:19.352 [2024-07-23 15:25:14.563020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:19.352 [2024-07-23 15:25:14.563264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:19.352 [2024-07-23 15:25:14.563332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:31:19.352 [2024-07-23 15:25:14.563423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:19.352 [2024-07-23 15:25:14.565914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:19.352 [2024-07-23 15:25:14.566066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:19.352 pt1 00:31:19.352 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:19.352 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:19.352 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:31:19.353 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:31:19.353 
15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:19.353 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:19.353 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:19.353 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:19.353 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:31:19.353 malloc2 00:31:19.353 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:19.611 [2024-07-23 15:25:14.939524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:19.611 [2024-07-23 15:25:14.939754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:19.611 [2024-07-23 15:25:14.939832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:31:19.611 [2024-07-23 15:25:14.939922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:19.611 [2024-07-23 15:25:14.942210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:19.611 [2024-07-23 15:25:14.942370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:19.611 pt2 00:31:19.611 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:19.611 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:19.611 15:25:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:31:19.869 [2024-07-23 15:25:15.171669] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:19.869 [2024-07-23 15:25:15.174124] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:19.869 [2024-07-23 15:25:15.174319] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006c80 00:31:19.869 [2024-07-23 15:25:15.174341] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:19.869 [2024-07-23 15:25:15.174439] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:31:19.869 [2024-07-23 15:25:15.174551] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006c80 00:31:19.869 [2024-07-23 15:25:15.174562] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000006c80 00:31:19.869 [2024-07-23 15:25:15.174640] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:19.869 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:19.869 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:19.869 15:25:15 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:19.869 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:19.869 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:19.869 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:19.869 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:19.869 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:19.869 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:19.869 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:19.869 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.869 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:20.127 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:20.127 "name": "raid_bdev1", 00:31:20.127 "uuid": "6552c9cf-0c1a-4ca4-a55c-ef311a67445b", 00:31:20.127 "strip_size_kb": 0, 00:31:20.127 "state": "online", 00:31:20.127 "raid_level": "raid1", 00:31:20.127 "superblock": true, 00:31:20.127 "num_base_bdevs": 2, 00:31:20.127 "num_base_bdevs_discovered": 2, 00:31:20.127 "num_base_bdevs_operational": 2, 00:31:20.127 "base_bdevs_list": [ 00:31:20.127 { 00:31:20.127 "name": "pt1", 00:31:20.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:20.127 "is_configured": true, 00:31:20.127 "data_offset": 256, 00:31:20.127 "data_size": 7936 00:31:20.127 }, 00:31:20.127 { 00:31:20.127 "name": "pt2", 00:31:20.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:20.127 "is_configured": true, 00:31:20.127 "data_offset": 256, 00:31:20.127 "data_size": 7936 00:31:20.127 } 00:31:20.127 ] 00:31:20.128 }' 00:31:20.128 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:20.128 15:25:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:20.386 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:31:20.386 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:20.386 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:20.386 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:20.386 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:20.386 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:31:20.386 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:20.386 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:20.645 [2024-07-23 15:25:15.860059] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:20.645 
15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:20.645 "name": "raid_bdev1", 00:31:20.645 "aliases": [ 00:31:20.645 "6552c9cf-0c1a-4ca4-a55c-ef311a67445b" 00:31:20.645 ], 00:31:20.645 "product_name": "Raid Volume", 00:31:20.645 "block_size": 4096, 00:31:20.645 "num_blocks": 7936, 00:31:20.645 "uuid": "6552c9cf-0c1a-4ca4-a55c-ef311a67445b", 00:31:20.645 "md_size": 32, 00:31:20.645 "md_interleave": false, 00:31:20.645 "dif_type": 0, 00:31:20.645 "assigned_rate_limits": { 00:31:20.645 "rw_ios_per_sec": 0, 00:31:20.645 "rw_mbytes_per_sec": 0, 00:31:20.645 "r_mbytes_per_sec": 0, 00:31:20.645 "w_mbytes_per_sec": 0 00:31:20.645 }, 00:31:20.645 "claimed": false, 00:31:20.645 "zoned": false, 00:31:20.645 "supported_io_types": { 00:31:20.645 "read": true, 00:31:20.645 "write": true, 00:31:20.645 "unmap": false, 00:31:20.645 "flush": false, 00:31:20.645 "reset": true, 00:31:20.645 "nvme_admin": false, 00:31:20.645 "nvme_io": false, 00:31:20.645 "nvme_io_md": false, 00:31:20.645 "write_zeroes": true, 00:31:20.645 "zcopy": false, 00:31:20.645 "get_zone_info": false, 00:31:20.645 "zone_management": false, 00:31:20.645 "zone_append": false, 00:31:20.645 "compare": false, 00:31:20.645 "compare_and_write": false, 00:31:20.645 "abort": false, 00:31:20.645 "seek_hole": false, 00:31:20.645 "seek_data": false, 00:31:20.645 "copy": false, 00:31:20.645 "nvme_iov_md": false 00:31:20.645 }, 00:31:20.645 "memory_domains": [ 00:31:20.645 { 00:31:20.645 "dma_device_id": "system", 00:31:20.645 "dma_device_type": 1 00:31:20.645 }, 00:31:20.645 { 00:31:20.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.645 "dma_device_type": 2 00:31:20.645 }, 00:31:20.645 { 00:31:20.645 "dma_device_id": "system", 00:31:20.645 "dma_device_type": 1 00:31:20.645 }, 00:31:20.645 { 00:31:20.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.645 "dma_device_type": 2 00:31:20.645 } 00:31:20.645 ], 00:31:20.645 "driver_specific": { 00:31:20.645 "raid": { 00:31:20.645 "uuid": "6552c9cf-0c1a-4ca4-a55c-ef311a67445b", 00:31:20.645 "strip_size_kb": 0, 00:31:20.645 "state": "online", 00:31:20.645 "raid_level": "raid1", 00:31:20.645 "superblock": true, 00:31:20.645 "num_base_bdevs": 2, 00:31:20.645 "num_base_bdevs_discovered": 2, 00:31:20.645 "num_base_bdevs_operational": 2, 00:31:20.645 "base_bdevs_list": [ 00:31:20.646 { 00:31:20.646 "name": "pt1", 00:31:20.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:20.646 "is_configured": true, 00:31:20.646 "data_offset": 256, 00:31:20.646 "data_size": 7936 00:31:20.646 }, 00:31:20.646 { 00:31:20.646 "name": "pt2", 00:31:20.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:20.646 "is_configured": true, 00:31:20.646 "data_offset": 256, 00:31:20.646 "data_size": 7936 00:31:20.646 } 00:31:20.646 ] 00:31:20.646 } 00:31:20.646 } 00:31:20.646 }' 00:31:20.646 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:20.646 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:20.646 pt2' 00:31:20.646 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:20.646 15:25:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:20.646 15:25:15 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:20.905 "name": "pt1", 00:31:20.905 "aliases": [ 00:31:20.905 "00000000-0000-0000-0000-000000000001" 00:31:20.905 ], 00:31:20.905 "product_name": "passthru", 00:31:20.905 "block_size": 4096, 00:31:20.905 "num_blocks": 8192, 00:31:20.905 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:20.905 "md_size": 32, 00:31:20.905 "md_interleave": false, 00:31:20.905 "dif_type": 0, 00:31:20.905 "assigned_rate_limits": { 00:31:20.905 "rw_ios_per_sec": 0, 00:31:20.905 "rw_mbytes_per_sec": 0, 00:31:20.905 "r_mbytes_per_sec": 0, 00:31:20.905 "w_mbytes_per_sec": 0 00:31:20.905 }, 00:31:20.905 "claimed": true, 00:31:20.905 "claim_type": "exclusive_write", 00:31:20.905 "zoned": false, 00:31:20.905 "supported_io_types": { 00:31:20.905 "read": true, 00:31:20.905 "write": true, 00:31:20.905 "unmap": true, 00:31:20.905 "flush": true, 00:31:20.905 "reset": true, 00:31:20.905 "nvme_admin": false, 00:31:20.905 "nvme_io": false, 00:31:20.905 "nvme_io_md": false, 00:31:20.905 "write_zeroes": true, 00:31:20.905 "zcopy": true, 00:31:20.905 "get_zone_info": false, 00:31:20.905 "zone_management": false, 00:31:20.905 "zone_append": false, 00:31:20.905 "compare": false, 00:31:20.905 "compare_and_write": false, 00:31:20.905 "abort": true, 00:31:20.905 "seek_hole": false, 00:31:20.905 "seek_data": false, 00:31:20.905 "copy": true, 00:31:20.905 "nvme_iov_md": false 00:31:20.905 }, 00:31:20.905 "memory_domains": [ 00:31:20.905 { 00:31:20.905 "dma_device_id": "system", 00:31:20.905 "dma_device_type": 1 00:31:20.905 }, 00:31:20.905 { 00:31:20.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.905 "dma_device_type": 2 00:31:20.905 } 00:31:20.905 ], 00:31:20.905 "driver_specific": { 00:31:20.905 "passthru": { 00:31:20.905 "name": "pt1", 00:31:20.905 "base_bdev_name": "malloc1" 00:31:20.905 } 00:31:20.905 } 00:31:20.905 }' 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:20.905 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:21.164 "name": "pt2", 00:31:21.164 "aliases": [ 00:31:21.164 "00000000-0000-0000-0000-000000000002" 00:31:21.164 ], 00:31:21.164 "product_name": "passthru", 00:31:21.164 "block_size": 4096, 00:31:21.164 "num_blocks": 8192, 00:31:21.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:21.164 "md_size": 32, 00:31:21.164 "md_interleave": false, 00:31:21.164 "dif_type": 0, 00:31:21.164 "assigned_rate_limits": { 00:31:21.164 "rw_ios_per_sec": 0, 00:31:21.164 "rw_mbytes_per_sec": 0, 00:31:21.164 "r_mbytes_per_sec": 0, 00:31:21.164 "w_mbytes_per_sec": 0 00:31:21.164 }, 00:31:21.164 "claimed": true, 00:31:21.164 "claim_type": "exclusive_write", 00:31:21.164 "zoned": false, 00:31:21.164 "supported_io_types": { 00:31:21.164 "read": true, 00:31:21.164 "write": true, 00:31:21.164 "unmap": true, 00:31:21.164 "flush": true, 00:31:21.164 "reset": true, 00:31:21.164 "nvme_admin": false, 00:31:21.164 "nvme_io": false, 00:31:21.164 "nvme_io_md": false, 00:31:21.164 "write_zeroes": true, 00:31:21.164 "zcopy": true, 00:31:21.164 "get_zone_info": false, 00:31:21.164 "zone_management": false, 00:31:21.164 "zone_append": false, 00:31:21.164 "compare": false, 00:31:21.164 "compare_and_write": false, 00:31:21.164 "abort": true, 00:31:21.164 "seek_hole": false, 00:31:21.164 "seek_data": false, 00:31:21.164 "copy": true, 00:31:21.164 "nvme_iov_md": false 00:31:21.164 }, 00:31:21.164 "memory_domains": [ 00:31:21.164 { 00:31:21.164 "dma_device_id": "system", 00:31:21.164 "dma_device_type": 1 00:31:21.164 }, 00:31:21.164 { 00:31:21.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:21.164 "dma_device_type": 2 00:31:21.164 } 00:31:21.164 ], 00:31:21.164 "driver_specific": { 00:31:21.164 "passthru": { 00:31:21.164 "name": "pt2", 00:31:21.164 "base_bdev_name": "malloc2" 00:31:21.164 } 00:31:21.164 } 00:31:21.164 }' 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # 
jq -r '.[] | .uuid' 00:31:21.164 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:21.423 [2024-07-23 15:25:16.816251] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:21.423 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=6552c9cf-0c1a-4ca4-a55c-ef311a67445b 00:31:21.423 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 6552c9cf-0c1a-4ca4-a55c-ef311a67445b ']' 00:31:21.423 15:25:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:21.682 [2024-07-23 15:25:17.080007] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:21.682 [2024-07-23 15:25:17.080203] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:21.682 [2024-07-23 15:25:17.080315] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:21.682 [2024-07-23 15:25:17.080391] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:21.682 [2024-07-23 15:25:17.080404] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006c80 name raid_bdev1, state offline 00:31:21.682 15:25:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:21.682 15:25:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:31:22.249 15:25:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:31:22.249 15:25:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:31:22.249 15:25:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:22.249 15:25:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:22.249 15:25:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:22.249 15:25:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:22.507 15:25:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:22.507 15:25:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:22.766 [2024-07-23 15:25:18.176266] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:22.766 [2024-07-23 15:25:18.178487] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:22.766 [2024-07-23 15:25:18.178561] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:22.766 [2024-07-23 15:25:18.178625] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:22.766 [2024-07-23 15:25:18.178659] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:22.766 [2024-07-23 15:25:18.178679] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name raid_bdev1, state configuring 00:31:22.766 request: 00:31:22.766 { 00:31:22.766 "name": "raid_bdev1", 00:31:22.766 "raid_level": "raid1", 00:31:22.766 "base_bdevs": [ 00:31:22.766 "malloc1", 00:31:22.766 "malloc2" 00:31:22.766 ], 00:31:22.766 "superblock": false, 00:31:22.766 "method": "bdev_raid_create", 00:31:22.766 "req_id": 1 00:31:22.766 } 00:31:22.766 Got JSON-RPC error response 00:31:22.766 response: 00:31:22.766 { 00:31:22.766 "code": -17, 00:31:22.766 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:22.766 } 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:22.766 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:23.024 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:31:23.024 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:31:23.024 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:31:23.024 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:31:23.024 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:23.283 [2024-07-23 15:25:18.544312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:23.283 [2024-07-23 15:25:18.544540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:23.283 [2024-07-23 15:25:18.544577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:31:23.283 [2024-07-23 15:25:18.544591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:23.283 [2024-07-23 15:25:18.546988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:23.283 [2024-07-23 15:25:18.547029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:23.283 [2024-07-23 15:25:18.547114] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:23.283 [2024-07-23 15:25:18.547167] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:23.283 pt1 00:31:23.283 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:31:23.283 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:23.283 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:23.283 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:23.283 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:23.283 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:23.283 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:23.283 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:23.283 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:23.283 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:23.283 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.283 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.541 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:23.541 "name": "raid_bdev1", 00:31:23.541 "uuid": "6552c9cf-0c1a-4ca4-a55c-ef311a67445b", 00:31:23.541 "strip_size_kb": 0, 00:31:23.541 "state": "configuring", 00:31:23.541 "raid_level": "raid1", 00:31:23.541 "superblock": true, 00:31:23.541 "num_base_bdevs": 2, 00:31:23.541 "num_base_bdevs_discovered": 1, 00:31:23.541 
"num_base_bdevs_operational": 2, 00:31:23.541 "base_bdevs_list": [ 00:31:23.541 { 00:31:23.541 "name": "pt1", 00:31:23.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:23.541 "is_configured": true, 00:31:23.541 "data_offset": 256, 00:31:23.541 "data_size": 7936 00:31:23.541 }, 00:31:23.541 { 00:31:23.541 "name": null, 00:31:23.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:23.541 "is_configured": false, 00:31:23.541 "data_offset": 256, 00:31:23.541 "data_size": 7936 00:31:23.541 } 00:31:23.541 ] 00:31:23.541 }' 00:31:23.541 15:25:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:23.541 15:25:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:23.828 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:31:23.828 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:31:23.828 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:23.828 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:23.828 [2024-07-23 15:25:19.164416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:23.828 [2024-07-23 15:25:19.164492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:23.828 [2024-07-23 15:25:19.164519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:31:23.828 [2024-07-23 15:25:19.164531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:23.828 [2024-07-23 15:25:19.164724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:23.828 [2024-07-23 15:25:19.164740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:23.828 [2024-07-23 15:25:19.164835] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:23.828 [2024-07-23 15:25:19.164859] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:23.828 [2024-07-23 15:25:19.164941] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:31:23.829 [2024-07-23 15:25:19.164950] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:23.829 [2024-07-23 15:25:19.165036] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:31:23.829 [2024-07-23 15:25:19.165120] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:31:23.829 [2024-07-23 15:25:19.165134] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:31:23.829 [2024-07-23 15:25:19.165193] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:23.829 pt2 00:31:23.829 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:31:23.829 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:23.829 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:23.829 15:25:19 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:23.829 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:23.829 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:23.829 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:23.829 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:23.829 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:23.829 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:23.829 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:23.829 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:23.829 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.829 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:24.096 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:24.096 "name": "raid_bdev1", 00:31:24.096 "uuid": "6552c9cf-0c1a-4ca4-a55c-ef311a67445b", 00:31:24.096 "strip_size_kb": 0, 00:31:24.096 "state": "online", 00:31:24.096 "raid_level": "raid1", 00:31:24.096 "superblock": true, 00:31:24.096 "num_base_bdevs": 2, 00:31:24.096 "num_base_bdevs_discovered": 2, 00:31:24.096 "num_base_bdevs_operational": 2, 00:31:24.096 "base_bdevs_list": [ 00:31:24.096 { 00:31:24.096 "name": "pt1", 00:31:24.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:24.096 "is_configured": true, 00:31:24.096 "data_offset": 256, 00:31:24.096 "data_size": 7936 00:31:24.096 }, 00:31:24.096 { 00:31:24.096 "name": "pt2", 00:31:24.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:24.096 "is_configured": true, 00:31:24.096 "data_offset": 256, 00:31:24.096 "data_size": 7936 00:31:24.096 } 00:31:24.096 ] 00:31:24.096 }' 00:31:24.096 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:24.096 15:25:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:24.355 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:31:24.355 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:24.355 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:24.355 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:24.355 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:24.355 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:31:24.355 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:24.355 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:31:24.613 [2024-07-23 15:25:19.972881] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:24.613 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:24.613 "name": "raid_bdev1", 00:31:24.613 "aliases": [ 00:31:24.613 "6552c9cf-0c1a-4ca4-a55c-ef311a67445b" 00:31:24.613 ], 00:31:24.613 "product_name": "Raid Volume", 00:31:24.613 "block_size": 4096, 00:31:24.613 "num_blocks": 7936, 00:31:24.613 "uuid": "6552c9cf-0c1a-4ca4-a55c-ef311a67445b", 00:31:24.613 "md_size": 32, 00:31:24.613 "md_interleave": false, 00:31:24.613 "dif_type": 0, 00:31:24.613 "assigned_rate_limits": { 00:31:24.613 "rw_ios_per_sec": 0, 00:31:24.613 "rw_mbytes_per_sec": 0, 00:31:24.613 "r_mbytes_per_sec": 0, 00:31:24.613 "w_mbytes_per_sec": 0 00:31:24.613 }, 00:31:24.613 "claimed": false, 00:31:24.613 "zoned": false, 00:31:24.613 "supported_io_types": { 00:31:24.613 "read": true, 00:31:24.613 "write": true, 00:31:24.613 "unmap": false, 00:31:24.613 "flush": false, 00:31:24.613 "reset": true, 00:31:24.613 "nvme_admin": false, 00:31:24.613 "nvme_io": false, 00:31:24.613 "nvme_io_md": false, 00:31:24.613 "write_zeroes": true, 00:31:24.613 "zcopy": false, 00:31:24.613 "get_zone_info": false, 00:31:24.613 "zone_management": false, 00:31:24.613 "zone_append": false, 00:31:24.613 "compare": false, 00:31:24.613 "compare_and_write": false, 00:31:24.613 "abort": false, 00:31:24.613 "seek_hole": false, 00:31:24.613 "seek_data": false, 00:31:24.613 "copy": false, 00:31:24.613 "nvme_iov_md": false 00:31:24.613 }, 00:31:24.613 "memory_domains": [ 00:31:24.613 { 00:31:24.613 "dma_device_id": "system", 00:31:24.613 "dma_device_type": 1 00:31:24.613 }, 00:31:24.613 { 00:31:24.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:24.613 "dma_device_type": 2 00:31:24.613 }, 00:31:24.613 { 00:31:24.613 "dma_device_id": "system", 00:31:24.613 "dma_device_type": 1 00:31:24.613 }, 00:31:24.613 { 00:31:24.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:24.613 "dma_device_type": 2 00:31:24.613 } 00:31:24.613 ], 00:31:24.613 "driver_specific": { 00:31:24.613 "raid": { 00:31:24.613 "uuid": "6552c9cf-0c1a-4ca4-a55c-ef311a67445b", 00:31:24.613 "strip_size_kb": 0, 00:31:24.613 "state": "online", 00:31:24.613 "raid_level": "raid1", 00:31:24.613 "superblock": true, 00:31:24.613 "num_base_bdevs": 2, 00:31:24.613 "num_base_bdevs_discovered": 2, 00:31:24.613 "num_base_bdevs_operational": 2, 00:31:24.614 "base_bdevs_list": [ 00:31:24.614 { 00:31:24.614 "name": "pt1", 00:31:24.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:24.614 "is_configured": true, 00:31:24.614 "data_offset": 256, 00:31:24.614 "data_size": 7936 00:31:24.614 }, 00:31:24.614 { 00:31:24.614 "name": "pt2", 00:31:24.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:24.614 "is_configured": true, 00:31:24.614 "data_offset": 256, 00:31:24.614 "data_size": 7936 00:31:24.614 } 00:31:24.614 ] 00:31:24.614 } 00:31:24.614 } 00:31:24.614 }' 00:31:24.614 15:25:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:24.614 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:24.614 pt2' 00:31:24.614 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:24.614 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:24.614 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:24.873 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:24.873 "name": "pt1", 00:31:24.873 "aliases": [ 00:31:24.873 "00000000-0000-0000-0000-000000000001" 00:31:24.873 ], 00:31:24.873 "product_name": "passthru", 00:31:24.873 "block_size": 4096, 00:31:24.873 "num_blocks": 8192, 00:31:24.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:24.873 "md_size": 32, 00:31:24.873 "md_interleave": false, 00:31:24.873 "dif_type": 0, 00:31:24.873 "assigned_rate_limits": { 00:31:24.873 "rw_ios_per_sec": 0, 00:31:24.873 "rw_mbytes_per_sec": 0, 00:31:24.873 "r_mbytes_per_sec": 0, 00:31:24.873 "w_mbytes_per_sec": 0 00:31:24.873 }, 00:31:24.873 "claimed": true, 00:31:24.873 "claim_type": "exclusive_write", 00:31:24.873 "zoned": false, 00:31:24.873 "supported_io_types": { 00:31:24.873 "read": true, 00:31:24.873 "write": true, 00:31:24.873 "unmap": true, 00:31:24.873 "flush": true, 00:31:24.873 "reset": true, 00:31:24.873 "nvme_admin": false, 00:31:24.873 "nvme_io": false, 00:31:24.873 "nvme_io_md": false, 00:31:24.873 "write_zeroes": true, 00:31:24.873 "zcopy": true, 00:31:24.873 "get_zone_info": false, 00:31:24.873 "zone_management": false, 00:31:24.873 "zone_append": false, 00:31:24.873 "compare": false, 00:31:24.873 "compare_and_write": false, 00:31:24.873 "abort": true, 00:31:24.873 "seek_hole": false, 00:31:24.873 "seek_data": false, 00:31:24.873 "copy": true, 00:31:24.873 "nvme_iov_md": false 00:31:24.873 }, 00:31:24.873 "memory_domains": [ 00:31:24.873 { 00:31:24.873 "dma_device_id": "system", 00:31:24.873 "dma_device_type": 1 00:31:24.873 }, 00:31:24.873 { 00:31:24.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:24.873 "dma_device_type": 2 00:31:24.873 } 00:31:24.873 ], 00:31:24.873 "driver_specific": { 00:31:24.873 "passthru": { 00:31:24.873 "name": "pt1", 00:31:24.873 "base_bdev_name": "malloc1" 00:31:24.873 } 00:31:24.873 } 00:31:24.873 }' 00:31:24.873 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:24.873 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:24.873 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:24.873 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:24.873 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:24.873 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:31:25.132 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:25.132 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:25.132 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:25.132 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:25.132 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:25.132 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:31:25.132 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # 
for name in $base_bdev_names 00:31:25.132 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:25.132 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:25.390 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:25.390 "name": "pt2", 00:31:25.390 "aliases": [ 00:31:25.390 "00000000-0000-0000-0000-000000000002" 00:31:25.390 ], 00:31:25.390 "product_name": "passthru", 00:31:25.390 "block_size": 4096, 00:31:25.390 "num_blocks": 8192, 00:31:25.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:25.390 "md_size": 32, 00:31:25.390 "md_interleave": false, 00:31:25.390 "dif_type": 0, 00:31:25.390 "assigned_rate_limits": { 00:31:25.390 "rw_ios_per_sec": 0, 00:31:25.391 "rw_mbytes_per_sec": 0, 00:31:25.391 "r_mbytes_per_sec": 0, 00:31:25.391 "w_mbytes_per_sec": 0 00:31:25.391 }, 00:31:25.391 "claimed": true, 00:31:25.391 "claim_type": "exclusive_write", 00:31:25.391 "zoned": false, 00:31:25.391 "supported_io_types": { 00:31:25.391 "read": true, 00:31:25.391 "write": true, 00:31:25.391 "unmap": true, 00:31:25.391 "flush": true, 00:31:25.391 "reset": true, 00:31:25.391 "nvme_admin": false, 00:31:25.391 "nvme_io": false, 00:31:25.391 "nvme_io_md": false, 00:31:25.391 "write_zeroes": true, 00:31:25.391 "zcopy": true, 00:31:25.391 "get_zone_info": false, 00:31:25.391 "zone_management": false, 00:31:25.391 "zone_append": false, 00:31:25.391 "compare": false, 00:31:25.391 "compare_and_write": false, 00:31:25.391 "abort": true, 00:31:25.391 "seek_hole": false, 00:31:25.391 "seek_data": false, 00:31:25.391 "copy": true, 00:31:25.391 "nvme_iov_md": false 00:31:25.391 }, 00:31:25.391 "memory_domains": [ 00:31:25.391 { 00:31:25.391 "dma_device_id": "system", 00:31:25.391 "dma_device_type": 1 00:31:25.391 }, 00:31:25.391 { 00:31:25.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:25.391 "dma_device_type": 2 00:31:25.391 } 00:31:25.391 ], 00:31:25.391 "driver_specific": { 00:31:25.391 "passthru": { 00:31:25.391 "name": "pt2", 00:31:25.391 "base_bdev_name": "malloc2" 00:31:25.391 } 00:31:25.391 } 00:31:25.391 }' 00:31:25.391 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:25.391 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:25.391 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:25.391 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:25.391 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:25.391 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:31:25.391 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:25.391 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:25.391 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:25.391 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:25.391 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:25.391 15:25:20 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:31:25.391 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:25.391 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:31:25.650 [2024-07-23 15:25:20.881057] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:25.650 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 6552c9cf-0c1a-4ca4-a55c-ef311a67445b '!=' 6552c9cf-0c1a-4ca4-a55c-ef311a67445b ']' 00:31:25.650 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:31:25.650 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:25.650 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:31:25.650 15:25:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:25.650 [2024-07-23 15:25:21.060922] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:25.650 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:25.650 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:25.650 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:25.650 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:25.650 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:25.650 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:25.650 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:25.650 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:25.650 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:25.651 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:25.910 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.910 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.910 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:25.910 "name": "raid_bdev1", 00:31:25.910 "uuid": "6552c9cf-0c1a-4ca4-a55c-ef311a67445b", 00:31:25.910 "strip_size_kb": 0, 00:31:25.910 "state": "online", 00:31:25.910 "raid_level": "raid1", 00:31:25.910 "superblock": true, 00:31:25.910 "num_base_bdevs": 2, 00:31:25.910 "num_base_bdevs_discovered": 1, 00:31:25.910 "num_base_bdevs_operational": 1, 00:31:25.910 "base_bdevs_list": [ 00:31:25.910 { 00:31:25.910 "name": null, 00:31:25.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.910 "is_configured": false, 00:31:25.910 "data_offset": 256, 00:31:25.910 "data_size": 7936 00:31:25.910 }, 
00:31:25.910 { 00:31:25.910 "name": "pt2", 00:31:25.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:25.910 "is_configured": true, 00:31:25.910 "data_offset": 256, 00:31:25.910 "data_size": 7936 00:31:25.910 } 00:31:25.910 ] 00:31:25.910 }' 00:31:25.910 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:25.910 15:25:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:26.168 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:26.427 [2024-07-23 15:25:21.684982] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:26.427 [2024-07-23 15:25:21.685023] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:26.427 [2024-07-23 15:25:21.685098] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:26.427 [2024-07-23 15:25:21.685152] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:26.427 [2024-07-23 15:25:21.685164] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:31:26.427 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.427 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:31:26.685 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:31:26.685 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:31:26.685 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:31:26.685 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:26.685 15:25:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:26.944 [2024-07-23 15:25:22.289103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:26.944 [2024-07-23 15:25:22.289190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:26.944 [2024-07-23 15:25:22.289218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:31:26.944 [2024-07-23 15:25:22.289230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:31:26.944 [2024-07-23 15:25:22.291431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:26.944 [2024-07-23 15:25:22.291474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:26.944 [2024-07-23 15:25:22.291556] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:26.944 [2024-07-23 15:25:22.291596] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:26.944 [2024-07-23 15:25:22.291665] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:31:26.944 [2024-07-23 15:25:22.291674] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:26.944 [2024-07-23 15:25:22.291739] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:31:26.944 [2024-07-23 15:25:22.291848] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:31:26.944 [2024-07-23 15:25:22.291865] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008a80 00:31:26.944 [2024-07-23 15:25:22.291925] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:26.944 pt2 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:26.944 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:27.203 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:27.203 "name": "raid_bdev1", 00:31:27.203 "uuid": "6552c9cf-0c1a-4ca4-a55c-ef311a67445b", 00:31:27.203 "strip_size_kb": 0, 00:31:27.203 "state": "online", 00:31:27.203 "raid_level": "raid1", 00:31:27.203 "superblock": true, 00:31:27.203 "num_base_bdevs": 2, 00:31:27.203 "num_base_bdevs_discovered": 1, 00:31:27.203 "num_base_bdevs_operational": 1, 00:31:27.203 "base_bdevs_list": [ 00:31:27.203 { 00:31:27.203 "name": null, 00:31:27.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:27.203 "is_configured": false, 00:31:27.203 "data_offset": 256, 00:31:27.203 "data_size": 7936 00:31:27.203 }, 
00:31:27.203 { 00:31:27.203 "name": "pt2", 00:31:27.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:27.203 "is_configured": true, 00:31:27.203 "data_offset": 256, 00:31:27.203 "data_size": 7936 00:31:27.203 } 00:31:27.203 ] 00:31:27.203 }' 00:31:27.203 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:27.203 15:25:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:27.461 15:25:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:27.719 [2024-07-23 15:25:22.997236] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:27.719 [2024-07-23 15:25:22.997477] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:27.719 [2024-07-23 15:25:22.997651] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:27.719 [2024-07-23 15:25:22.997734] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:27.719 [2024-07-23 15:25:22.997877] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state offline 00:31:27.719 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:27.719 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:31:27.977 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:31:27.977 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:31:27.977 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:31:27.977 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:28.234 [2024-07-23 15:25:23.441332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:28.234 [2024-07-23 15:25:23.441413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:28.234 [2024-07-23 15:25:23.441441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:31:28.234 [2024-07-23 15:25:23.441456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:28.234 [2024-07-23 15:25:23.443724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:28.234 [2024-07-23 15:25:23.443774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:28.234 [2024-07-23 15:25:23.443978] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:28.234 [2024-07-23 15:25:23.444028] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:28.234 [2024-07-23 15:25:23.444126] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:28.234 [2024-07-23 15:25:23.444146] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:28.234 [2024-07-23 15:25:23.444182] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state configuring 00:31:28.234 [2024-07-23 15:25:23.444239] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:28.234 [2024-07-23 15:25:23.444305] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:31:28.234 [2024-07-23 15:25:23.444319] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:28.234 [2024-07-23 15:25:23.444383] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:31:28.234 [2024-07-23 15:25:23.444463] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:31:28.234 [2024-07-23 15:25:23.444472] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:31:28.234 [2024-07-23 15:25:23.444542] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:28.234 pt1 00:31:28.234 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:31:28.234 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:28.234 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:28.234 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:28.234 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:28.234 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:28.234 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:28.234 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:28.234 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:28.234 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:28.234 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:28.234 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.234 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.234 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:28.234 "name": "raid_bdev1", 00:31:28.234 "uuid": "6552c9cf-0c1a-4ca4-a55c-ef311a67445b", 00:31:28.234 "strip_size_kb": 0, 00:31:28.234 "state": "online", 00:31:28.234 "raid_level": "raid1", 00:31:28.234 "superblock": true, 00:31:28.234 "num_base_bdevs": 2, 00:31:28.234 "num_base_bdevs_discovered": 1, 00:31:28.234 "num_base_bdevs_operational": 1, 00:31:28.235 "base_bdevs_list": [ 00:31:28.235 { 00:31:28.235 "name": null, 00:31:28.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.235 "is_configured": false, 00:31:28.235 "data_offset": 256, 00:31:28.235 "data_size": 7936 00:31:28.235 }, 00:31:28.235 { 00:31:28.235 "name": "pt2", 00:31:28.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:28.235 "is_configured": true, 00:31:28.235 "data_offset": 256, 
00:31:28.235 "data_size": 7936 00:31:28.235 } 00:31:28.235 ] 00:31:28.235 }' 00:31:28.235 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:28.235 15:25:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:28.492 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:31:28.492 15:25:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:28.749 15:25:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:31:29.007 15:25:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:29.007 15:25:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:31:29.007 [2024-07-23 15:25:24.393724] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:29.007 15:25:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 6552c9cf-0c1a-4ca4-a55c-ef311a67445b '!=' 6552c9cf-0c1a-4ca4-a55c-ef311a67445b ']' 00:31:29.007 15:25:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 122135 00:31:29.007 15:25:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 122135 ']' 00:31:29.007 15:25:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 122135 00:31:29.007 15:25:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:31:29.007 15:25:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:29.007 15:25:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122135 00:31:29.266 killing process with pid 122135 00:31:29.266 15:25:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:29.266 15:25:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:29.266 15:25:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122135' 00:31:29.266 15:25:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 122135 00:31:29.266 [2024-07-23 15:25:24.454754] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:29.266 15:25:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 122135 00:31:29.266 [2024-07-23 15:25:24.454849] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:29.266 [2024-07-23 15:25:24.454911] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:29.266 [2024-07-23 15:25:24.454923] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:31:29.266 [2024-07-23 15:25:24.480827] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:29.524 15:25:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:31:29.524 00:31:29.524 real 0m11.561s 00:31:29.524 user 
0m19.714s 00:31:29.524 sys 0m2.598s 00:31:29.524 15:25:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:29.524 ************************************ 00:31:29.524 END TEST raid_superblock_test_md_separate 00:31:29.524 ************************************ 00:31:29.524 15:25:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:29.524 15:25:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:29.524 15:25:24 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:31:29.524 15:25:24 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:31:29.524 15:25:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:31:29.524 15:25:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:29.524 15:25:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:29.524 ************************************ 00:31:29.524 START TEST raid_rebuild_test_sb_md_separate 00:31:29.524 ************************************ 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@578 -- # local data_offset 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=122589 00:31:29.524 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 122589 /var/tmp/spdk-raid.sock 00:31:29.525 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:29.525 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 122589 ']' 00:31:29.525 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:29.525 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:29.525 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:29.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:29.525 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:29.525 15:25:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:29.525 [2024-07-23 15:25:24.851603] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:31:29.525 [2024-07-23 15:25:24.851930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122589 ] 00:31:29.525 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:29.525 Zero copy mechanism will not be used. 
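The bdevperf invocation recorded above starts the target in wait-for-RPC mode (-z) on /var/tmp/spdk-raid.sock, and everything that follows in this test is driven through scripts/rpc.py against that socket. As a minimal sketch only (paths shown relative to an SPDK checkout rather than the /home/vagrant/spdk_repo paths in this run), the command sequence the log exercises to build and inspect the RAID1 bdev under test is roughly:

    # start bdevperf waiting for RPC configuration, with the same parameters as in the log above
    ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &

    # create the two md-separate base bdevs (malloc + passthru), then the superblock RAID1 on top
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1

    # query the resulting state the same way verify_raid_bdev_state does
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

These are the same bdev_malloc_create, bdev_passthru_create, bdev_raid_create and bdev_raid_get_bdevs calls that appear verbatim in the trace below; the sketch only collects them in one place for readability.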
00:31:29.783 [2024-07-23 15:25:24.993429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.783 [2024-07-23 15:25:25.040379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.783 [2024-07-23 15:25:25.085942] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:30.349 15:25:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:30.349 15:25:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:31:30.349 15:25:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:30.349 15:25:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:31:30.607 BaseBdev1_malloc 00:31:30.607 15:25:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:30.866 [2024-07-23 15:25:26.158375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:30.866 [2024-07-23 15:25:26.158478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:30.866 [2024-07-23 15:25:26.158530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:31:30.866 [2024-07-23 15:25:26.158544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:30.866 [2024-07-23 15:25:26.160884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:30.866 [2024-07-23 15:25:26.160926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:30.866 BaseBdev1 00:31:30.866 15:25:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:30.866 15:25:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:31:31.123 BaseBdev2_malloc 00:31:31.123 15:25:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:31.123 [2024-07-23 15:25:26.527216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:31.123 [2024-07-23 15:25:26.527290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:31.123 [2024-07-23 15:25:26.527336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:31:31.123 [2024-07-23 15:25:26.527348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:31.123 [2024-07-23 15:25:26.529823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:31.123 [2024-07-23 15:25:26.529991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:31.123 BaseBdev2 00:31:31.123 15:25:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:31:31.381 spare_malloc 00:31:31.381 15:25:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:31.639 spare_delay 00:31:31.639 15:25:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:31.639 [2024-07-23 15:25:27.061537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:31.639 [2024-07-23 15:25:27.061623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:31.639 [2024-07-23 15:25:27.061656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:31:31.639 [2024-07-23 15:25:27.061668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:31.639 [2024-07-23 15:25:27.064022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:31.639 [2024-07-23 15:25:27.064175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:31.639 spare 00:31:31.898 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:31:31.898 [2024-07-23 15:25:27.237687] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:31.898 [2024-07-23 15:25:27.240226] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:31.898 [2024-07-23 15:25:27.240583] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:31:31.898 [2024-07-23 15:25:27.240690] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:31.898 [2024-07-23 15:25:27.240892] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:31:31.898 [2024-07-23 15:25:27.241146] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:31:31.898 [2024-07-23 15:25:27.241248] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:31:31.898 [2024-07-23 15:25:27.241436] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:31.898 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:31.898 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:31.898 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:31.898 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:31.898 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:31.898 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:31.898 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:31.898 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:31.898 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:31:31.898 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:31.898 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:31.898 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.156 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:32.157 "name": "raid_bdev1", 00:31:32.157 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:32.157 "strip_size_kb": 0, 00:31:32.157 "state": "online", 00:31:32.157 "raid_level": "raid1", 00:31:32.157 "superblock": true, 00:31:32.157 "num_base_bdevs": 2, 00:31:32.157 "num_base_bdevs_discovered": 2, 00:31:32.157 "num_base_bdevs_operational": 2, 00:31:32.157 "base_bdevs_list": [ 00:31:32.157 { 00:31:32.157 "name": "BaseBdev1", 00:31:32.157 "uuid": "34032744-ac48-58c9-bf59-bd46f3bee5a7", 00:31:32.157 "is_configured": true, 00:31:32.157 "data_offset": 256, 00:31:32.157 "data_size": 7936 00:31:32.157 }, 00:31:32.157 { 00:31:32.157 "name": "BaseBdev2", 00:31:32.157 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:32.157 "is_configured": true, 00:31:32.157 "data_offset": 256, 00:31:32.157 "data_size": 7936 00:31:32.157 } 00:31:32.157 ] 00:31:32.157 }' 00:31:32.157 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:32.157 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:32.414 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:32.414 15:25:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:32.673 [2024-07-23 15:25:28.006051] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:32.673 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:31:32.673 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:32.673 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:32.930 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:31:32.930 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:31:32.930 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:31:32.930 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:31:32.930 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:32.930 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:32.930 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:32.930 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:32.930 15:25:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:32.930 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:32.931 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:31:32.931 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:32.931 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:32.931 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:33.189 [2024-07-23 15:25:28.373919] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:31:33.189 /dev/nbd0 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:33.189 1+0 records in 00:31:33.189 1+0 records out 00:31:33.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028244 s, 14.5 MB/s 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:31:33.189 15:25:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:31:33.755 7936+0 records in 00:31:33.756 7936+0 records out 00:31:33.756 32505856 bytes (33 MB, 31 MiB) copied, 0.681399 s, 47.7 MB/s 00:31:33.756 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:33.756 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:33.756 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:33.756 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:33.756 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:31:33.756 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:33.756 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:34.014 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:34.014 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:34.014 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:34.014 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:34.014 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:34.014 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:34.014 [2024-07-23 15:25:29.299700] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:34.014 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:31:34.014 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:31:34.014 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:34.273 [2024-07-23 15:25:29.459861] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:34.273 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:34.273 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:34.273 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:34.273 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:34.273 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:34.273 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:34.273 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:34.273 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:34.273 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:31:34.273 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:34.273 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:34.273 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.531 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:34.531 "name": "raid_bdev1", 00:31:34.531 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:34.531 "strip_size_kb": 0, 00:31:34.531 "state": "online", 00:31:34.531 "raid_level": "raid1", 00:31:34.531 "superblock": true, 00:31:34.531 "num_base_bdevs": 2, 00:31:34.531 "num_base_bdevs_discovered": 1, 00:31:34.531 "num_base_bdevs_operational": 1, 00:31:34.531 "base_bdevs_list": [ 00:31:34.531 { 00:31:34.531 "name": null, 00:31:34.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:34.531 "is_configured": false, 00:31:34.531 "data_offset": 256, 00:31:34.531 "data_size": 7936 00:31:34.531 }, 00:31:34.531 { 00:31:34.531 "name": "BaseBdev2", 00:31:34.531 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:34.531 "is_configured": true, 00:31:34.531 "data_offset": 256, 00:31:34.531 "data_size": 7936 00:31:34.531 } 00:31:34.531 ] 00:31:34.531 }' 00:31:34.531 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:34.531 15:25:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:34.790 15:25:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:35.048 [2024-07-23 15:25:30.232047] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:35.048 [2024-07-23 15:25:30.233985] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00019c550 00:31:35.048 [2024-07-23 15:25:30.236289] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:35.048 15:25:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:35.984 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:35.984 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:35.984 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:35.984 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:35.984 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:35.984 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.984 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:36.243 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:36.243 "name": "raid_bdev1", 00:31:36.243 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:36.243 "strip_size_kb": 0, 
00:31:36.243 "state": "online", 00:31:36.243 "raid_level": "raid1", 00:31:36.243 "superblock": true, 00:31:36.243 "num_base_bdevs": 2, 00:31:36.243 "num_base_bdevs_discovered": 2, 00:31:36.243 "num_base_bdevs_operational": 2, 00:31:36.243 "process": { 00:31:36.243 "type": "rebuild", 00:31:36.243 "target": "spare", 00:31:36.243 "progress": { 00:31:36.243 "blocks": 3072, 00:31:36.243 "percent": 38 00:31:36.243 } 00:31:36.243 }, 00:31:36.243 "base_bdevs_list": [ 00:31:36.243 { 00:31:36.243 "name": "spare", 00:31:36.243 "uuid": "d2eaf5b4-dbd0-5cb0-a59c-cfed51471d2b", 00:31:36.243 "is_configured": true, 00:31:36.243 "data_offset": 256, 00:31:36.243 "data_size": 7936 00:31:36.243 }, 00:31:36.243 { 00:31:36.243 "name": "BaseBdev2", 00:31:36.243 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:36.243 "is_configured": true, 00:31:36.243 "data_offset": 256, 00:31:36.243 "data_size": 7936 00:31:36.243 } 00:31:36.243 ] 00:31:36.243 }' 00:31:36.243 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:36.243 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:36.243 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:36.243 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:36.243 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:36.502 [2024-07-23 15:25:31.737933] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:36.502 [2024-07-23 15:25:31.746535] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:36.502 [2024-07-23 15:25:31.746599] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:36.502 [2024-07-23 15:25:31.746618] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:36.502 [2024-07-23 15:25:31.746628] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:36.502 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:36.502 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:36.502 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:36.502 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:36.502 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:36.502 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:36.502 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:36.502 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:36.502 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:36.502 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:36.502 15:25:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:36.502 15:25:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.761 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:36.761 "name": "raid_bdev1", 00:31:36.761 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:36.761 "strip_size_kb": 0, 00:31:36.761 "state": "online", 00:31:36.761 "raid_level": "raid1", 00:31:36.761 "superblock": true, 00:31:36.761 "num_base_bdevs": 2, 00:31:36.761 "num_base_bdevs_discovered": 1, 00:31:36.761 "num_base_bdevs_operational": 1, 00:31:36.761 "base_bdevs_list": [ 00:31:36.761 { 00:31:36.761 "name": null, 00:31:36.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:36.761 "is_configured": false, 00:31:36.761 "data_offset": 256, 00:31:36.761 "data_size": 7936 00:31:36.761 }, 00:31:36.761 { 00:31:36.761 "name": "BaseBdev2", 00:31:36.761 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:36.761 "is_configured": true, 00:31:36.761 "data_offset": 256, 00:31:36.761 "data_size": 7936 00:31:36.761 } 00:31:36.761 ] 00:31:36.761 }' 00:31:36.761 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:36.761 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:37.020 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:37.020 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:37.020 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:37.020 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:37.020 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:37.020 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:37.020 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.279 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:37.279 "name": "raid_bdev1", 00:31:37.279 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:37.279 "strip_size_kb": 0, 00:31:37.279 "state": "online", 00:31:37.279 "raid_level": "raid1", 00:31:37.279 "superblock": true, 00:31:37.279 "num_base_bdevs": 2, 00:31:37.279 "num_base_bdevs_discovered": 1, 00:31:37.279 "num_base_bdevs_operational": 1, 00:31:37.279 "base_bdevs_list": [ 00:31:37.279 { 00:31:37.279 "name": null, 00:31:37.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:37.279 "is_configured": false, 00:31:37.279 "data_offset": 256, 00:31:37.279 "data_size": 7936 00:31:37.279 }, 00:31:37.279 { 00:31:37.279 "name": "BaseBdev2", 00:31:37.279 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:37.279 "is_configured": true, 00:31:37.279 "data_offset": 256, 00:31:37.279 "data_size": 7936 00:31:37.279 } 00:31:37.279 ] 00:31:37.279 }' 00:31:37.279 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:31:37.279 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:37.279 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:37.279 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:37.279 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:37.537 [2024-07-23 15:25:32.842455] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:37.537 [2024-07-23 15:25:32.844379] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00019c620 00:31:37.537 [2024-07-23 15:25:32.846504] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:37.537 15:25:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:38.473 15:25:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:38.473 15:25:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:38.473 15:25:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:38.473 15:25:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:38.473 15:25:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:38.473 15:25:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:38.473 15:25:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:38.731 "name": "raid_bdev1", 00:31:38.731 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:38.731 "strip_size_kb": 0, 00:31:38.731 "state": "online", 00:31:38.731 "raid_level": "raid1", 00:31:38.731 "superblock": true, 00:31:38.731 "num_base_bdevs": 2, 00:31:38.731 "num_base_bdevs_discovered": 2, 00:31:38.731 "num_base_bdevs_operational": 2, 00:31:38.731 "process": { 00:31:38.731 "type": "rebuild", 00:31:38.731 "target": "spare", 00:31:38.731 "progress": { 00:31:38.731 "blocks": 3072, 00:31:38.731 "percent": 38 00:31:38.731 } 00:31:38.731 }, 00:31:38.731 "base_bdevs_list": [ 00:31:38.731 { 00:31:38.731 "name": "spare", 00:31:38.731 "uuid": "d2eaf5b4-dbd0-5cb0-a59c-cfed51471d2b", 00:31:38.731 "is_configured": true, 00:31:38.731 "data_offset": 256, 00:31:38.731 "data_size": 7936 00:31:38.731 }, 00:31:38.731 { 00:31:38.731 "name": "BaseBdev2", 00:31:38.731 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:38.731 "is_configured": true, 00:31:38.731 "data_offset": 256, 00:31:38.731 "data_size": 7936 00:31:38.731 } 00:31:38.731 ] 00:31:38.731 }' 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:31:38.731 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1054 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:38.731 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:38.988 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:38.988 "name": "raid_bdev1", 00:31:38.988 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:38.988 "strip_size_kb": 0, 00:31:38.988 "state": "online", 00:31:38.988 "raid_level": "raid1", 00:31:38.988 "superblock": true, 00:31:38.988 "num_base_bdevs": 2, 00:31:38.988 "num_base_bdevs_discovered": 2, 00:31:38.988 "num_base_bdevs_operational": 2, 00:31:38.988 "process": { 00:31:38.988 "type": "rebuild", 00:31:38.988 "target": "spare", 00:31:38.988 "progress": { 00:31:38.988 "blocks": 3840, 00:31:38.989 "percent": 48 00:31:38.989 } 00:31:38.989 }, 00:31:38.989 "base_bdevs_list": [ 00:31:38.989 { 00:31:38.989 "name": "spare", 00:31:38.989 "uuid": "d2eaf5b4-dbd0-5cb0-a59c-cfed51471d2b", 00:31:38.989 "is_configured": true, 00:31:38.989 "data_offset": 256, 00:31:38.989 "data_size": 7936 00:31:38.989 }, 00:31:38.989 { 00:31:38.989 "name": "BaseBdev2", 00:31:38.989 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:38.989 "is_configured": true, 00:31:38.989 "data_offset": 256, 00:31:38.989 "data_size": 7936 00:31:38.989 } 00:31:38.989 ] 00:31:38.989 }' 00:31:38.989 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:39.247 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:39.247 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.target // "none"' 00:31:39.247 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:39.247 15:25:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:40.185 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:40.185 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:40.185 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:40.185 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:40.185 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:40.185 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:40.185 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.185 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.445 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:40.445 "name": "raid_bdev1", 00:31:40.445 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:40.445 "strip_size_kb": 0, 00:31:40.445 "state": "online", 00:31:40.445 "raid_level": "raid1", 00:31:40.445 "superblock": true, 00:31:40.445 "num_base_bdevs": 2, 00:31:40.445 "num_base_bdevs_discovered": 2, 00:31:40.445 "num_base_bdevs_operational": 2, 00:31:40.445 "process": { 00:31:40.445 "type": "rebuild", 00:31:40.445 "target": "spare", 00:31:40.445 "progress": { 00:31:40.445 "blocks": 7168, 00:31:40.445 "percent": 90 00:31:40.445 } 00:31:40.445 }, 00:31:40.445 "base_bdevs_list": [ 00:31:40.445 { 00:31:40.445 "name": "spare", 00:31:40.445 "uuid": "d2eaf5b4-dbd0-5cb0-a59c-cfed51471d2b", 00:31:40.445 "is_configured": true, 00:31:40.445 "data_offset": 256, 00:31:40.445 "data_size": 7936 00:31:40.445 }, 00:31:40.445 { 00:31:40.445 "name": "BaseBdev2", 00:31:40.445 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:40.445 "is_configured": true, 00:31:40.445 "data_offset": 256, 00:31:40.445 "data_size": 7936 00:31:40.445 } 00:31:40.445 ] 00:31:40.445 }' 00:31:40.445 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:40.445 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:40.445 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:40.445 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:40.445 15:25:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:40.704 [2024-07-23 15:25:35.964914] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:40.704 [2024-07-23 15:25:35.965024] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:40.704 [2024-07-23 15:25:35.965134] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:41.642 15:25:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:41.642 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:41.642 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:41.642 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:41.642 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:41.642 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:41.642 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.642 15:25:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.642 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:41.642 "name": "raid_bdev1", 00:31:41.642 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:41.642 "strip_size_kb": 0, 00:31:41.642 "state": "online", 00:31:41.642 "raid_level": "raid1", 00:31:41.642 "superblock": true, 00:31:41.642 "num_base_bdevs": 2, 00:31:41.642 "num_base_bdevs_discovered": 2, 00:31:41.642 "num_base_bdevs_operational": 2, 00:31:41.642 "base_bdevs_list": [ 00:31:41.642 { 00:31:41.642 "name": "spare", 00:31:41.642 "uuid": "d2eaf5b4-dbd0-5cb0-a59c-cfed51471d2b", 00:31:41.642 "is_configured": true, 00:31:41.642 "data_offset": 256, 00:31:41.642 "data_size": 7936 00:31:41.642 }, 00:31:41.642 { 00:31:41.642 "name": "BaseBdev2", 00:31:41.642 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:41.642 "is_configured": true, 00:31:41.642 "data_offset": 256, 00:31:41.642 "data_size": 7936 00:31:41.642 } 00:31:41.642 ] 00:31:41.642 }' 00:31:41.642 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:41.642 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:41.642 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:41.642 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:41.642 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:31:41.642 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:41.642 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:41.642 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:41.642 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:41.642 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:41.642 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.642 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:41.902 "name": "raid_bdev1", 00:31:41.902 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:41.902 "strip_size_kb": 0, 00:31:41.902 "state": "online", 00:31:41.902 "raid_level": "raid1", 00:31:41.902 "superblock": true, 00:31:41.902 "num_base_bdevs": 2, 00:31:41.902 "num_base_bdevs_discovered": 2, 00:31:41.902 "num_base_bdevs_operational": 2, 00:31:41.902 "base_bdevs_list": [ 00:31:41.902 { 00:31:41.902 "name": "spare", 00:31:41.902 "uuid": "d2eaf5b4-dbd0-5cb0-a59c-cfed51471d2b", 00:31:41.902 "is_configured": true, 00:31:41.902 "data_offset": 256, 00:31:41.902 "data_size": 7936 00:31:41.902 }, 00:31:41.902 { 00:31:41.902 "name": "BaseBdev2", 00:31:41.902 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:41.902 "is_configured": true, 00:31:41.902 "data_offset": 256, 00:31:41.902 "data_size": 7936 00:31:41.902 } 00:31:41.902 ] 00:31:41.902 }' 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.902 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.161 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:42.161 "name": "raid_bdev1", 00:31:42.161 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:42.161 "strip_size_kb": 0, 00:31:42.161 "state": "online", 00:31:42.161 "raid_level": "raid1", 00:31:42.161 "superblock": true, 00:31:42.161 "num_base_bdevs": 2, 00:31:42.161 "num_base_bdevs_discovered": 2, 00:31:42.161 "num_base_bdevs_operational": 2, 00:31:42.161 "base_bdevs_list": 
[ 00:31:42.161 { 00:31:42.161 "name": "spare", 00:31:42.161 "uuid": "d2eaf5b4-dbd0-5cb0-a59c-cfed51471d2b", 00:31:42.161 "is_configured": true, 00:31:42.161 "data_offset": 256, 00:31:42.161 "data_size": 7936 00:31:42.161 }, 00:31:42.161 { 00:31:42.161 "name": "BaseBdev2", 00:31:42.161 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:42.161 "is_configured": true, 00:31:42.161 "data_offset": 256, 00:31:42.161 "data_size": 7936 00:31:42.161 } 00:31:42.161 ] 00:31:42.161 }' 00:31:42.161 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:42.161 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:42.420 15:25:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:42.681 [2024-07-23 15:25:38.000231] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:42.681 [2024-07-23 15:25:38.000279] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:42.681 [2024-07-23 15:25:38.000401] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:42.681 [2024-07-23 15:25:38.000480] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:42.681 [2024-07-23 15:25:38.000496] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:31:42.681 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:31:42.681 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.939 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:42.939 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:42.939 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:31:42.939 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:42.939 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:42.939 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:42.939 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:42.939 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:42.939 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:42.939 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:31:42.939 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:42.939 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:42.939 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:43.198 /dev/nbd0 00:31:43.198 15:25:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:43.198 1+0 records in 00:31:43.198 1+0 records out 00:31:43.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206577 s, 19.8 MB/s 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:43.198 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:31:43.457 /dev/nbd1 00:31:43.457 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:31:43.458 15:25:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:43.458 1+0 records in 00:31:43.458 1+0 records out 00:31:43.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285962 s, 14.3 MB/s 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:43.458 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:43.716 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:43.716 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:43.716 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:43.716 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:43.716 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:43.716 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:43.716 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:31:43.716 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:31:43.716 15:25:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:43.716 15:25:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:43.974 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:43.974 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:43.974 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:43.974 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:43.974 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:43.974 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:43.974 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:31:43.974 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:31:43.974 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:31:43.974 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:44.233 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:44.233 [2024-07-23 15:25:39.608580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:44.233 [2024-07-23 15:25:39.608834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:44.233 [2024-07-23 15:25:39.608874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:31:44.233 [2024-07-23 15:25:39.608890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:44.233 [2024-07-23 15:25:39.611304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:44.233 [2024-07-23 15:25:39.611462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:44.233 [2024-07-23 15:25:39.611561] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:44.233 [2024-07-23 15:25:39.611601] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:44.233 [2024-07-23 15:25:39.611723] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:44.233 spare 00:31:44.233 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:44.233 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:44.233 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:44.233 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:44.233 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:44.233 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:44.233 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:31:44.233 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:44.233 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:44.233 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:44.233 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.233 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.491 [2024-07-23 15:25:39.711858] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:31:44.491 [2024-07-23 15:25:39.711910] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:44.491 [2024-07-23 15:25:39.712100] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0001bada0 00:31:44.491 [2024-07-23 15:25:39.712247] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:31:44.491 [2024-07-23 15:25:39.712264] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:31:44.491 [2024-07-23 15:25:39.712358] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:44.491 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:44.491 "name": "raid_bdev1", 00:31:44.491 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:44.491 "strip_size_kb": 0, 00:31:44.491 "state": "online", 00:31:44.491 "raid_level": "raid1", 00:31:44.491 "superblock": true, 00:31:44.491 "num_base_bdevs": 2, 00:31:44.491 "num_base_bdevs_discovered": 2, 00:31:44.491 "num_base_bdevs_operational": 2, 00:31:44.491 "base_bdevs_list": [ 00:31:44.491 { 00:31:44.491 "name": "spare", 00:31:44.491 "uuid": "d2eaf5b4-dbd0-5cb0-a59c-cfed51471d2b", 00:31:44.491 "is_configured": true, 00:31:44.491 "data_offset": 256, 00:31:44.491 "data_size": 7936 00:31:44.491 }, 00:31:44.491 { 00:31:44.491 "name": "BaseBdev2", 00:31:44.491 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:44.491 "is_configured": true, 00:31:44.491 "data_offset": 256, 00:31:44.491 "data_size": 7936 00:31:44.491 } 00:31:44.491 ] 00:31:44.491 }' 00:31:44.491 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:44.491 15:25:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:45.058 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:45.058 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:45.058 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:45.058 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:45.058 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:45.058 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.058 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.058 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:45.058 "name": "raid_bdev1", 00:31:45.058 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:45.058 "strip_size_kb": 0, 00:31:45.058 "state": "online", 00:31:45.058 "raid_level": "raid1", 00:31:45.058 "superblock": true, 00:31:45.058 "num_base_bdevs": 2, 00:31:45.058 "num_base_bdevs_discovered": 2, 00:31:45.058 "num_base_bdevs_operational": 2, 00:31:45.058 "base_bdevs_list": [ 00:31:45.058 { 00:31:45.058 "name": "spare", 00:31:45.058 "uuid": "d2eaf5b4-dbd0-5cb0-a59c-cfed51471d2b", 00:31:45.058 "is_configured": true, 00:31:45.058 "data_offset": 256, 00:31:45.058 "data_size": 7936 00:31:45.058 }, 00:31:45.058 { 00:31:45.058 "name": "BaseBdev2", 00:31:45.058 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:45.058 "is_configured": true, 00:31:45.058 "data_offset": 256, 00:31:45.058 "data_size": 7936 00:31:45.058 } 00:31:45.058 ] 00:31:45.058 }' 00:31:45.058 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:45.058 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:45.058 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:45.058 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:45.317 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:45.317 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.317 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:31:45.317 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:45.575 [2024-07-23 15:25:40.816921] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:45.575 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:45.575 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:45.575 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:45.575 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:45.575 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:45.575 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:45.575 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:45.575 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:45.575 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:45.575 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:45.575 15:25:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.575 15:25:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.833 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:45.833 "name": "raid_bdev1", 00:31:45.833 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:45.833 "strip_size_kb": 0, 00:31:45.833 "state": "online", 00:31:45.833 "raid_level": "raid1", 00:31:45.833 "superblock": true, 00:31:45.833 "num_base_bdevs": 2, 00:31:45.833 "num_base_bdevs_discovered": 1, 00:31:45.833 "num_base_bdevs_operational": 1, 00:31:45.834 "base_bdevs_list": [ 00:31:45.834 { 00:31:45.834 "name": null, 00:31:45.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:45.834 "is_configured": false, 00:31:45.834 "data_offset": 256, 00:31:45.834 "data_size": 7936 00:31:45.834 }, 00:31:45.834 { 00:31:45.834 "name": "BaseBdev2", 00:31:45.834 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:45.834 "is_configured": true, 00:31:45.834 "data_offset": 256, 00:31:45.834 "data_size": 7936 00:31:45.834 } 00:31:45.834 ] 00:31:45.834 }' 00:31:45.834 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:45.834 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:46.091 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:46.091 [2024-07-23 15:25:41.453108] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:46.091 [2024-07-23 15:25:41.453465] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:46.091 [2024-07-23 15:25:41.453490] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:31:46.091 [2024-07-23 15:25:41.453546] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:46.091 [2024-07-23 15:25:41.455266] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0001bae70 00:31:46.091 [2024-07-23 15:25:41.457527] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:46.091 15:25:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:31:47.466 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:47.466 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:47.466 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:47.466 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:47.466 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:47.466 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:47.466 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.466 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:47.466 "name": "raid_bdev1", 00:31:47.466 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:47.466 "strip_size_kb": 0, 00:31:47.466 "state": "online", 00:31:47.466 "raid_level": "raid1", 00:31:47.466 "superblock": true, 00:31:47.466 "num_base_bdevs": 2, 00:31:47.466 "num_base_bdevs_discovered": 2, 00:31:47.466 "num_base_bdevs_operational": 2, 00:31:47.466 "process": { 00:31:47.466 "type": "rebuild", 00:31:47.466 "target": "spare", 00:31:47.466 "progress": { 00:31:47.466 "blocks": 3072, 00:31:47.466 "percent": 38 00:31:47.466 } 00:31:47.466 }, 00:31:47.466 "base_bdevs_list": [ 00:31:47.466 { 00:31:47.466 "name": "spare", 00:31:47.466 "uuid": "d2eaf5b4-dbd0-5cb0-a59c-cfed51471d2b", 00:31:47.466 "is_configured": true, 00:31:47.466 "data_offset": 256, 00:31:47.466 "data_size": 7936 00:31:47.466 }, 00:31:47.466 { 00:31:47.466 "name": "BaseBdev2", 00:31:47.466 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:47.466 "is_configured": true, 00:31:47.466 "data_offset": 256, 00:31:47.466 "data_size": 7936 00:31:47.466 } 00:31:47.466 ] 00:31:47.466 }' 00:31:47.466 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:47.466 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:47.466 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:47.466 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:47.466 15:25:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:47.725 [2024-07-23 15:25:42.966961] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:47.725 [2024-07-23 15:25:43.067055] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:31:47.725 [2024-07-23 15:25:43.067325] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:47.725 [2024-07-23 15:25:43.067433] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:47.725 [2024-07-23 15:25:43.067471] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:47.725 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:47.725 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:47.725 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:47.725 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:47.725 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:47.725 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:47.725 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:47.725 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:47.725 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:47.725 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:47.725 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.725 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:47.983 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:47.983 "name": "raid_bdev1", 00:31:47.983 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:47.983 "strip_size_kb": 0, 00:31:47.983 "state": "online", 00:31:47.983 "raid_level": "raid1", 00:31:47.983 "superblock": true, 00:31:47.983 "num_base_bdevs": 2, 00:31:47.983 "num_base_bdevs_discovered": 1, 00:31:47.983 "num_base_bdevs_operational": 1, 00:31:47.983 "base_bdevs_list": [ 00:31:47.983 { 00:31:47.983 "name": null, 00:31:47.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.983 "is_configured": false, 00:31:47.983 "data_offset": 256, 00:31:47.983 "data_size": 7936 00:31:47.983 }, 00:31:47.983 { 00:31:47.984 "name": "BaseBdev2", 00:31:47.984 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:47.984 "is_configured": true, 00:31:47.984 "data_offset": 256, 00:31:47.984 "data_size": 7936 00:31:47.984 } 00:31:47.984 ] 00:31:47.984 }' 00:31:47.984 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:47.984 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:48.241 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:48.498 [2024-07-23 15:25:43.831217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:48.498 [2024-07-23 15:25:43.831292] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:48.498 [2024-07-23 15:25:43.831324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:31:48.498 [2024-07-23 15:25:43.831336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:48.498 [2024-07-23 15:25:43.831562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:48.498 [2024-07-23 15:25:43.831578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:48.498 [2024-07-23 15:25:43.831658] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:48.498 [2024-07-23 15:25:43.831671] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:48.498 [2024-07-23 15:25:43.831686] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:48.498 [2024-07-23 15:25:43.831706] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:48.498 [2024-07-23 15:25:43.833453] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0001baf40 00:31:48.498 [2024-07-23 15:25:43.835610] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:48.498 spare 00:31:48.498 15:25:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:31:49.458 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:49.458 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:49.458 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:49.458 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:49.458 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:49.458 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.458 15:25:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:49.716 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:49.716 "name": "raid_bdev1", 00:31:49.716 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:49.716 "strip_size_kb": 0, 00:31:49.716 "state": "online", 00:31:49.716 "raid_level": "raid1", 00:31:49.716 "superblock": true, 00:31:49.716 "num_base_bdevs": 2, 00:31:49.716 "num_base_bdevs_discovered": 2, 00:31:49.716 "num_base_bdevs_operational": 2, 00:31:49.716 "process": { 00:31:49.716 "type": "rebuild", 00:31:49.716 "target": "spare", 00:31:49.716 "progress": { 00:31:49.716 "blocks": 3072, 00:31:49.716 "percent": 38 00:31:49.716 } 00:31:49.716 }, 00:31:49.716 "base_bdevs_list": [ 00:31:49.716 { 00:31:49.716 "name": "spare", 00:31:49.716 "uuid": "d2eaf5b4-dbd0-5cb0-a59c-cfed51471d2b", 00:31:49.716 "is_configured": true, 00:31:49.716 "data_offset": 256, 00:31:49.716 "data_size": 7936 00:31:49.716 }, 00:31:49.716 { 00:31:49.716 "name": "BaseBdev2", 00:31:49.716 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:49.716 "is_configured": true, 00:31:49.716 
"data_offset": 256, 00:31:49.716 "data_size": 7936 00:31:49.716 } 00:31:49.716 ] 00:31:49.716 }' 00:31:49.716 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:49.716 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:49.716 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:49.716 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:49.716 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:49.974 [2024-07-23 15:25:45.272789] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:49.974 [2024-07-23 15:25:45.344510] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:49.974 [2024-07-23 15:25:45.344583] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:49.974 [2024-07-23 15:25:45.344600] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:49.974 [2024-07-23 15:25:45.344612] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:49.974 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:49.974 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:49.974 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:49.974 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:49.974 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:49.974 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:49.974 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:49.975 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:49.975 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:49.975 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:49.975 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.975 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.233 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:50.233 "name": "raid_bdev1", 00:31:50.233 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:50.233 "strip_size_kb": 0, 00:31:50.233 "state": "online", 00:31:50.233 "raid_level": "raid1", 00:31:50.233 "superblock": true, 00:31:50.233 "num_base_bdevs": 2, 00:31:50.233 "num_base_bdevs_discovered": 1, 00:31:50.233 "num_base_bdevs_operational": 1, 00:31:50.233 "base_bdevs_list": [ 00:31:50.233 { 00:31:50.233 "name": null, 00:31:50.233 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:50.233 "is_configured": false, 00:31:50.233 "data_offset": 256, 00:31:50.233 "data_size": 7936 00:31:50.233 }, 00:31:50.233 { 00:31:50.233 "name": "BaseBdev2", 00:31:50.233 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:50.233 "is_configured": true, 00:31:50.233 "data_offset": 256, 00:31:50.233 "data_size": 7936 00:31:50.233 } 00:31:50.233 ] 00:31:50.233 }' 00:31:50.233 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:50.233 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:50.799 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:50.799 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:50.799 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:50.799 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:50.799 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:50.799 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.799 15:25:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:50.799 15:25:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:50.799 "name": "raid_bdev1", 00:31:50.799 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:50.799 "strip_size_kb": 0, 00:31:50.799 "state": "online", 00:31:50.799 "raid_level": "raid1", 00:31:50.799 "superblock": true, 00:31:50.799 "num_base_bdevs": 2, 00:31:50.799 "num_base_bdevs_discovered": 1, 00:31:50.799 "num_base_bdevs_operational": 1, 00:31:50.799 "base_bdevs_list": [ 00:31:50.799 { 00:31:50.799 "name": null, 00:31:50.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:50.799 "is_configured": false, 00:31:50.799 "data_offset": 256, 00:31:50.799 "data_size": 7936 00:31:50.799 }, 00:31:50.799 { 00:31:50.799 "name": "BaseBdev2", 00:31:50.799 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:50.799 "is_configured": true, 00:31:50.799 "data_offset": 256, 00:31:50.799 "data_size": 7936 00:31:50.799 } 00:31:50.799 ] 00:31:50.799 }' 00:31:50.799 15:25:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:50.799 15:25:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:50.799 15:25:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:50.799 15:25:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:50.799 15:25:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:31:51.057 15:25:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:51.314 [2024-07-23 15:25:46.612214] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:31:51.314 [2024-07-23 15:25:46.612463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:51.314 [2024-07-23 15:25:46.612504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:31:51.314 [2024-07-23 15:25:46.612519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:51.314 [2024-07-23 15:25:46.612718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:51.314 [2024-07-23 15:25:46.612737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:51.314 [2024-07-23 15:25:46.612831] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:51.314 [2024-07-23 15:25:46.612851] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:51.315 [2024-07-23 15:25:46.612861] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:51.315 BaseBdev1 00:31:51.315 15:25:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:31:52.248 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:52.248 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:52.248 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:52.248 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:52.248 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:52.248 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:52.248 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:52.248 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:52.248 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:52.248 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:52.248 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:52.248 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:52.506 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:52.506 "name": "raid_bdev1", 00:31:52.506 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:52.506 "strip_size_kb": 0, 00:31:52.506 "state": "online", 00:31:52.506 "raid_level": "raid1", 00:31:52.506 "superblock": true, 00:31:52.506 "num_base_bdevs": 2, 00:31:52.506 "num_base_bdevs_discovered": 1, 00:31:52.506 "num_base_bdevs_operational": 1, 00:31:52.506 "base_bdevs_list": [ 00:31:52.506 { 00:31:52.506 "name": null, 00:31:52.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:52.506 "is_configured": false, 00:31:52.506 "data_offset": 256, 00:31:52.506 "data_size": 7936 00:31:52.506 }, 00:31:52.506 { 00:31:52.506 "name": 
"BaseBdev2", 00:31:52.506 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:52.506 "is_configured": true, 00:31:52.506 "data_offset": 256, 00:31:52.506 "data_size": 7936 00:31:52.506 } 00:31:52.506 ] 00:31:52.506 }' 00:31:52.506 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:52.506 15:25:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:52.765 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:52.765 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:52.765 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:52.765 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:52.765 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:52.765 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:52.765 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:53.023 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:53.023 "name": "raid_bdev1", 00:31:53.023 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:53.023 "strip_size_kb": 0, 00:31:53.023 "state": "online", 00:31:53.023 "raid_level": "raid1", 00:31:53.023 "superblock": true, 00:31:53.023 "num_base_bdevs": 2, 00:31:53.023 "num_base_bdevs_discovered": 1, 00:31:53.023 "num_base_bdevs_operational": 1, 00:31:53.023 "base_bdevs_list": [ 00:31:53.023 { 00:31:53.023 "name": null, 00:31:53.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:53.023 "is_configured": false, 00:31:53.023 "data_offset": 256, 00:31:53.023 "data_size": 7936 00:31:53.023 }, 00:31:53.023 { 00:31:53.023 "name": "BaseBdev2", 00:31:53.023 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:53.023 "is_configured": true, 00:31:53.023 "data_offset": 256, 00:31:53.023 "data_size": 7936 00:31:53.023 } 00:31:53.023 ] 00:31:53.023 }' 00:31:53.023 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:53.023 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:53.023 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:53.023 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:53.023 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:53.023 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:31:53.023 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:53.023 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:53.023 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:53.023 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:53.023 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:53.023 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:53.281 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:53.281 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:53.281 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:53.281 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:53.281 [2024-07-23 15:25:48.696673] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:53.281 [2024-07-23 15:25:48.697062] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:53.281 [2024-07-23 15:25:48.697091] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:53.281 request: 00:31:53.281 { 00:31:53.281 "base_bdev": "BaseBdev1", 00:31:53.281 "raid_bdev": "raid_bdev1", 00:31:53.281 "method": "bdev_raid_add_base_bdev", 00:31:53.281 "req_id": 1 00:31:53.281 } 00:31:53.281 Got JSON-RPC error response 00:31:53.281 response: 00:31:53.281 { 00:31:53.281 "code": -22, 00:31:53.281 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:53.281 } 00:31:53.281 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:31:53.539 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:53.539 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:53.539 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:53.539 15:25:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:31:54.472 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:54.472 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:54.472 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:54.472 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:54.472 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:54.472 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:54.472 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:31:54.472 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:54.472 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:54.472 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:54.473 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.473 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.730 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:54.730 "name": "raid_bdev1", 00:31:54.730 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:54.730 "strip_size_kb": 0, 00:31:54.730 "state": "online", 00:31:54.730 "raid_level": "raid1", 00:31:54.730 "superblock": true, 00:31:54.730 "num_base_bdevs": 2, 00:31:54.730 "num_base_bdevs_discovered": 1, 00:31:54.730 "num_base_bdevs_operational": 1, 00:31:54.730 "base_bdevs_list": [ 00:31:54.730 { 00:31:54.730 "name": null, 00:31:54.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.730 "is_configured": false, 00:31:54.730 "data_offset": 256, 00:31:54.730 "data_size": 7936 00:31:54.730 }, 00:31:54.730 { 00:31:54.730 "name": "BaseBdev2", 00:31:54.730 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:54.730 "is_configured": true, 00:31:54.730 "data_offset": 256, 00:31:54.730 "data_size": 7936 00:31:54.730 } 00:31:54.730 ] 00:31:54.730 }' 00:31:54.731 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:54.731 15:25:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:54.988 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:54.988 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:54.988 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:54.988 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:54.988 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:54.988 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.988 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:55.247 "name": "raid_bdev1", 00:31:55.247 "uuid": "b8867050-41ef-419a-9988-0412b79983e8", 00:31:55.247 "strip_size_kb": 0, 00:31:55.247 "state": "online", 00:31:55.247 "raid_level": "raid1", 00:31:55.247 "superblock": true, 00:31:55.247 "num_base_bdevs": 2, 00:31:55.247 "num_base_bdevs_discovered": 1, 00:31:55.247 "num_base_bdevs_operational": 1, 00:31:55.247 "base_bdevs_list": [ 00:31:55.247 { 00:31:55.247 "name": null, 00:31:55.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.247 "is_configured": false, 00:31:55.247 "data_offset": 256, 00:31:55.247 "data_size": 7936 
00:31:55.247 }, 00:31:55.247 { 00:31:55.247 "name": "BaseBdev2", 00:31:55.247 "uuid": "3b01ebc8-3045-5b11-a35a-1f2d123f07ca", 00:31:55.247 "is_configured": true, 00:31:55.247 "data_offset": 256, 00:31:55.247 "data_size": 7936 00:31:55.247 } 00:31:55.247 ] 00:31:55.247 }' 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 122589 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 122589 ']' 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 122589 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122589 00:31:55.247 killing process with pid 122589 00:31:55.247 Received shutdown signal, test time was about 60.000000 seconds 00:31:55.247 00:31:55.247 Latency(us) 00:31:55.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.247 =================================================================================================================== 00:31:55.247 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122589' 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 122589 00:31:55.247 [2024-07-23 15:25:50.616545] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:55.247 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 122589 00:31:55.247 [2024-07-23 15:25:50.616665] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:55.247 [2024-07-23 15:25:50.616715] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:55.247 [2024-07-23 15:25:50.616731] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:31:55.247 [2024-07-23 15:25:50.650233] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:55.505 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:31:55.505 00:31:55.505 real 0m26.085s 00:31:55.505 user 0m37.984s 00:31:55.505 sys 0m4.514s 00:31:55.505 ************************************ 00:31:55.505 END TEST raid_rebuild_test_sb_md_separate 00:31:55.505 ************************************ 
00:31:55.505 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:55.505 15:25:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:55.505 15:25:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:55.505 15:25:50 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:31:55.505 15:25:50 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:31:55.505 15:25:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:31:55.505 15:25:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:55.505 15:25:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:55.763 ************************************ 00:31:55.763 START TEST raid_state_function_test_sb_md_interleaved 00:31:55.763 ************************************ 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:31:55.763 15:25:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=123377 00:31:55.763 Process raid pid: 123377 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 123377' 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 123377 /var/tmp/spdk-raid.sock 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 123377 ']' 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:55.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:55.763 15:25:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:31:55.763 [2024-07-23 15:25:51.016102] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:31:55.763 [2024-07-23 15:25:51.016289] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.763 [2024-07-23 15:25:51.166621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.020 [2024-07-23 15:25:51.213935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.020 [2024-07-23 15:25:51.259214] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:56.584 15:25:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:56.584 15:25:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:31:56.584 15:25:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:56.842 [2024-07-23 15:25:52.105558] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:56.842 [2024-07-23 15:25:52.105639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:56.842 [2024-07-23 15:25:52.105657] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:56.842 [2024-07-23 15:25:52.105671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:56.842 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:56.842 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:56.842 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:56.842 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:56.842 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:56.842 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:56.842 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:56.842 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:56.842 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:56.842 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:56.842 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:56.842 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.118 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:57.118 "name": "Existed_Raid", 00:31:57.118 "uuid": "994eb3b0-7ba0-4c24-8b09-036d939c7da5", 
00:31:57.118 "strip_size_kb": 0, 00:31:57.118 "state": "configuring", 00:31:57.118 "raid_level": "raid1", 00:31:57.118 "superblock": true, 00:31:57.118 "num_base_bdevs": 2, 00:31:57.118 "num_base_bdevs_discovered": 0, 00:31:57.118 "num_base_bdevs_operational": 2, 00:31:57.118 "base_bdevs_list": [ 00:31:57.118 { 00:31:57.118 "name": "BaseBdev1", 00:31:57.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.118 "is_configured": false, 00:31:57.118 "data_offset": 0, 00:31:57.118 "data_size": 0 00:31:57.118 }, 00:31:57.118 { 00:31:57.118 "name": "BaseBdev2", 00:31:57.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.118 "is_configured": false, 00:31:57.118 "data_offset": 0, 00:31:57.118 "data_size": 0 00:31:57.118 } 00:31:57.118 ] 00:31:57.118 }' 00:31:57.118 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:57.118 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:31:57.383 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:57.383 [2024-07-23 15:25:52.797574] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:57.383 [2024-07-23 15:25:52.797629] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005480 name Existed_Raid, state configuring 00:31:57.383 15:25:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:57.641 [2024-07-23 15:25:53.049669] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:57.641 [2024-07-23 15:25:53.049734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:57.641 [2024-07-23 15:25:53.049745] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:57.641 [2024-07-23 15:25:53.049758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:57.641 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:31:57.899 [2024-07-23 15:25:53.315505] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:57.899 BaseBdev1 00:31:57.899 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:31:58.157 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:31:58.157 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:58.157 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:31:58.157 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:58.157 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:58.157 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:58.157 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:58.415 [ 00:31:58.415 { 00:31:58.415 "name": "BaseBdev1", 00:31:58.415 "aliases": [ 00:31:58.415 "dffd5fa9-4d79-44f6-a9ac-5bacb93db5fb" 00:31:58.415 ], 00:31:58.415 "product_name": "Malloc disk", 00:31:58.415 "block_size": 4128, 00:31:58.415 "num_blocks": 8192, 00:31:58.415 "uuid": "dffd5fa9-4d79-44f6-a9ac-5bacb93db5fb", 00:31:58.415 "md_size": 32, 00:31:58.415 "md_interleave": true, 00:31:58.415 "dif_type": 0, 00:31:58.415 "assigned_rate_limits": { 00:31:58.415 "rw_ios_per_sec": 0, 00:31:58.415 "rw_mbytes_per_sec": 0, 00:31:58.415 "r_mbytes_per_sec": 0, 00:31:58.415 "w_mbytes_per_sec": 0 00:31:58.415 }, 00:31:58.415 "claimed": true, 00:31:58.415 "claim_type": "exclusive_write", 00:31:58.415 "zoned": false, 00:31:58.415 "supported_io_types": { 00:31:58.415 "read": true, 00:31:58.415 "write": true, 00:31:58.415 "unmap": true, 00:31:58.415 "flush": true, 00:31:58.415 "reset": true, 00:31:58.415 "nvme_admin": false, 00:31:58.415 "nvme_io": false, 00:31:58.415 "nvme_io_md": false, 00:31:58.415 "write_zeroes": true, 00:31:58.415 "zcopy": true, 00:31:58.415 "get_zone_info": false, 00:31:58.415 "zone_management": false, 00:31:58.415 "zone_append": false, 00:31:58.415 "compare": false, 00:31:58.415 "compare_and_write": false, 00:31:58.416 "abort": true, 00:31:58.416 "seek_hole": false, 00:31:58.416 "seek_data": false, 00:31:58.416 "copy": true, 00:31:58.416 "nvme_iov_md": false 00:31:58.416 }, 00:31:58.416 "memory_domains": [ 00:31:58.416 { 00:31:58.416 "dma_device_id": "system", 00:31:58.416 "dma_device_type": 1 00:31:58.416 }, 00:31:58.416 { 00:31:58.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:58.416 "dma_device_type": 2 00:31:58.416 } 00:31:58.416 ], 00:31:58.416 "driver_specific": {} 00:31:58.416 } 00:31:58.416 ] 00:31:58.416 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:31:58.416 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:58.416 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:58.416 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:58.416 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:58.416 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:58.416 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:58.416 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:58.416 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:58.416 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:58.416 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:58.416 15:25:53 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.416 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:58.674 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:58.674 "name": "Existed_Raid", 00:31:58.674 "uuid": "a5be9f4b-22c1-4cdf-a603-93700f61eed9", 00:31:58.674 "strip_size_kb": 0, 00:31:58.674 "state": "configuring", 00:31:58.674 "raid_level": "raid1", 00:31:58.674 "superblock": true, 00:31:58.674 "num_base_bdevs": 2, 00:31:58.674 "num_base_bdevs_discovered": 1, 00:31:58.674 "num_base_bdevs_operational": 2, 00:31:58.674 "base_bdevs_list": [ 00:31:58.674 { 00:31:58.674 "name": "BaseBdev1", 00:31:58.674 "uuid": "dffd5fa9-4d79-44f6-a9ac-5bacb93db5fb", 00:31:58.674 "is_configured": true, 00:31:58.674 "data_offset": 256, 00:31:58.674 "data_size": 7936 00:31:58.674 }, 00:31:58.674 { 00:31:58.674 "name": "BaseBdev2", 00:31:58.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.674 "is_configured": false, 00:31:58.674 "data_offset": 0, 00:31:58.674 "data_size": 0 00:31:58.674 } 00:31:58.674 ] 00:31:58.674 }' 00:31:58.674 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:58.674 15:25:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:31:58.933 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:59.191 [2024-07-23 15:25:54.415851] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:59.191 [2024-07-23 15:25:54.415920] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000005780 name Existed_Raid, state configuring 00:31:59.191 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:59.191 [2024-07-23 15:25:54.599963] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:59.191 [2024-07-23 15:25:54.602411] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:59.191 [2024-07-23 15:25:54.602466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:59.191 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:31:59.191 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:59.191 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:59.191 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:59.191 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:59.191 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:59.191 15:25:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:59.191 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:59.191 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:59.191 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:59.191 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:59.191 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:59.449 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:59.449 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.449 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:59.449 "name": "Existed_Raid", 00:31:59.449 "uuid": "487f917e-ee48-4656-82ca-65c1ab0e7265", 00:31:59.449 "strip_size_kb": 0, 00:31:59.449 "state": "configuring", 00:31:59.449 "raid_level": "raid1", 00:31:59.449 "superblock": true, 00:31:59.449 "num_base_bdevs": 2, 00:31:59.449 "num_base_bdevs_discovered": 1, 00:31:59.449 "num_base_bdevs_operational": 2, 00:31:59.449 "base_bdevs_list": [ 00:31:59.449 { 00:31:59.449 "name": "BaseBdev1", 00:31:59.449 "uuid": "dffd5fa9-4d79-44f6-a9ac-5bacb93db5fb", 00:31:59.449 "is_configured": true, 00:31:59.449 "data_offset": 256, 00:31:59.449 "data_size": 7936 00:31:59.449 }, 00:31:59.449 { 00:31:59.449 "name": "BaseBdev2", 00:31:59.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.449 "is_configured": false, 00:31:59.449 "data_offset": 0, 00:31:59.449 "data_size": 0 00:31:59.449 } 00:31:59.449 ] 00:31:59.449 }' 00:31:59.449 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:59.449 15:25:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:31:59.707 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:31:59.966 [2024-07-23 15:25:55.319928] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:59.966 [2024-07-23 15:25:55.320111] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006080 00:31:59.966 [2024-07-23 15:25:55.320134] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:31:59.966 [2024-07-23 15:25:55.320265] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:31:59.966 [2024-07-23 15:25:55.320349] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006080 00:31:59.966 [2024-07-23 15:25:55.320378] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006080 00:31:59.966 [2024-07-23 15:25:55.320452] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:59.966 BaseBdev2 00:31:59.966 15:25:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:31:59.966 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:31:59.966 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:59.966 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:31:59.966 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:59.966 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:59.966 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:00.224 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:00.483 [ 00:32:00.483 { 00:32:00.483 "name": "BaseBdev2", 00:32:00.483 "aliases": [ 00:32:00.483 "f1808a61-c8a9-471a-a204-2544d958bd53" 00:32:00.483 ], 00:32:00.483 "product_name": "Malloc disk", 00:32:00.483 "block_size": 4128, 00:32:00.483 "num_blocks": 8192, 00:32:00.483 "uuid": "f1808a61-c8a9-471a-a204-2544d958bd53", 00:32:00.483 "md_size": 32, 00:32:00.483 "md_interleave": true, 00:32:00.483 "dif_type": 0, 00:32:00.483 "assigned_rate_limits": { 00:32:00.483 "rw_ios_per_sec": 0, 00:32:00.483 "rw_mbytes_per_sec": 0, 00:32:00.483 "r_mbytes_per_sec": 0, 00:32:00.483 "w_mbytes_per_sec": 0 00:32:00.483 }, 00:32:00.483 "claimed": true, 00:32:00.483 "claim_type": "exclusive_write", 00:32:00.483 "zoned": false, 00:32:00.483 "supported_io_types": { 00:32:00.483 "read": true, 00:32:00.483 "write": true, 00:32:00.483 "unmap": true, 00:32:00.483 "flush": true, 00:32:00.483 "reset": true, 00:32:00.483 "nvme_admin": false, 00:32:00.483 "nvme_io": false, 00:32:00.483 "nvme_io_md": false, 00:32:00.483 "write_zeroes": true, 00:32:00.483 "zcopy": true, 00:32:00.483 "get_zone_info": false, 00:32:00.483 "zone_management": false, 00:32:00.483 "zone_append": false, 00:32:00.483 "compare": false, 00:32:00.483 "compare_and_write": false, 00:32:00.483 "abort": true, 00:32:00.483 "seek_hole": false, 00:32:00.483 "seek_data": false, 00:32:00.483 "copy": true, 00:32:00.483 "nvme_iov_md": false 00:32:00.483 }, 00:32:00.483 "memory_domains": [ 00:32:00.483 { 00:32:00.483 "dma_device_id": "system", 00:32:00.483 "dma_device_type": 1 00:32:00.483 }, 00:32:00.483 { 00:32:00.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:00.483 "dma_device_type": 2 00:32:00.483 } 00:32:00.483 ], 00:32:00.483 "driver_specific": {} 00:32:00.483 } 00:32:00.483 ] 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.483 15:25:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:00.742 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:00.742 "name": "Existed_Raid", 00:32:00.742 "uuid": "487f917e-ee48-4656-82ca-65c1ab0e7265", 00:32:00.742 "strip_size_kb": 0, 00:32:00.742 "state": "online", 00:32:00.742 "raid_level": "raid1", 00:32:00.742 "superblock": true, 00:32:00.742 "num_base_bdevs": 2, 00:32:00.742 "num_base_bdevs_discovered": 2, 00:32:00.742 "num_base_bdevs_operational": 2, 00:32:00.742 "base_bdevs_list": [ 00:32:00.742 { 00:32:00.742 "name": "BaseBdev1", 00:32:00.742 "uuid": "dffd5fa9-4d79-44f6-a9ac-5bacb93db5fb", 00:32:00.742 "is_configured": true, 00:32:00.742 "data_offset": 256, 00:32:00.742 "data_size": 7936 00:32:00.742 }, 00:32:00.742 { 00:32:00.742 "name": "BaseBdev2", 00:32:00.742 "uuid": "f1808a61-c8a9-471a-a204-2544d958bd53", 00:32:00.742 "is_configured": true, 00:32:00.742 "data_offset": 256, 00:32:00.742 "data_size": 7936 00:32:00.742 } 00:32:00.742 ] 00:32:00.742 }' 00:32:00.742 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:00.742 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:01.000 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:01.000 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:01.000 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:01.000 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:01.000 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:01.000 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:32:01.000 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:01.000 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:01.257 [2024-07-23 15:25:56.464596] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:01.257 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:01.257 "name": "Existed_Raid", 00:32:01.257 "aliases": [ 00:32:01.257 "487f917e-ee48-4656-82ca-65c1ab0e7265" 00:32:01.257 ], 00:32:01.257 "product_name": "Raid Volume", 00:32:01.257 "block_size": 4128, 00:32:01.257 "num_blocks": 7936, 00:32:01.257 "uuid": "487f917e-ee48-4656-82ca-65c1ab0e7265", 00:32:01.257 "md_size": 32, 00:32:01.257 "md_interleave": true, 00:32:01.257 "dif_type": 0, 00:32:01.257 "assigned_rate_limits": { 00:32:01.257 "rw_ios_per_sec": 0, 00:32:01.257 "rw_mbytes_per_sec": 0, 00:32:01.257 "r_mbytes_per_sec": 0, 00:32:01.257 "w_mbytes_per_sec": 0 00:32:01.257 }, 00:32:01.257 "claimed": false, 00:32:01.257 "zoned": false, 00:32:01.257 "supported_io_types": { 00:32:01.257 "read": true, 00:32:01.257 "write": true, 00:32:01.257 "unmap": false, 00:32:01.257 "flush": false, 00:32:01.257 "reset": true, 00:32:01.257 "nvme_admin": false, 00:32:01.257 "nvme_io": false, 00:32:01.257 "nvme_io_md": false, 00:32:01.257 "write_zeroes": true, 00:32:01.257 "zcopy": false, 00:32:01.257 "get_zone_info": false, 00:32:01.257 "zone_management": false, 00:32:01.257 "zone_append": false, 00:32:01.257 "compare": false, 00:32:01.257 "compare_and_write": false, 00:32:01.257 "abort": false, 00:32:01.257 "seek_hole": false, 00:32:01.257 "seek_data": false, 00:32:01.257 "copy": false, 00:32:01.257 "nvme_iov_md": false 00:32:01.257 }, 00:32:01.257 "memory_domains": [ 00:32:01.257 { 00:32:01.257 "dma_device_id": "system", 00:32:01.257 "dma_device_type": 1 00:32:01.257 }, 00:32:01.257 { 00:32:01.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:01.257 "dma_device_type": 2 00:32:01.257 }, 00:32:01.257 { 00:32:01.257 "dma_device_id": "system", 00:32:01.258 "dma_device_type": 1 00:32:01.258 }, 00:32:01.258 { 00:32:01.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:01.258 "dma_device_type": 2 00:32:01.258 } 00:32:01.258 ], 00:32:01.258 "driver_specific": { 00:32:01.258 "raid": { 00:32:01.258 "uuid": "487f917e-ee48-4656-82ca-65c1ab0e7265", 00:32:01.258 "strip_size_kb": 0, 00:32:01.258 "state": "online", 00:32:01.258 "raid_level": "raid1", 00:32:01.258 "superblock": true, 00:32:01.258 "num_base_bdevs": 2, 00:32:01.258 "num_base_bdevs_discovered": 2, 00:32:01.258 "num_base_bdevs_operational": 2, 00:32:01.258 "base_bdevs_list": [ 00:32:01.258 { 00:32:01.258 "name": "BaseBdev1", 00:32:01.258 "uuid": "dffd5fa9-4d79-44f6-a9ac-5bacb93db5fb", 00:32:01.258 "is_configured": true, 00:32:01.258 "data_offset": 256, 00:32:01.258 "data_size": 7936 00:32:01.258 }, 00:32:01.258 { 00:32:01.258 "name": "BaseBdev2", 00:32:01.258 "uuid": "f1808a61-c8a9-471a-a204-2544d958bd53", 00:32:01.258 "is_configured": true, 00:32:01.258 "data_offset": 256, 00:32:01.258 "data_size": 7936 00:32:01.258 } 00:32:01.258 ] 00:32:01.258 } 00:32:01.258 } 00:32:01.258 }' 00:32:01.258 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:01.258 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='BaseBdev1 00:32:01.258 BaseBdev2' 00:32:01.258 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:01.258 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:01.258 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:01.515 "name": "BaseBdev1", 00:32:01.515 "aliases": [ 00:32:01.515 "dffd5fa9-4d79-44f6-a9ac-5bacb93db5fb" 00:32:01.515 ], 00:32:01.515 "product_name": "Malloc disk", 00:32:01.515 "block_size": 4128, 00:32:01.515 "num_blocks": 8192, 00:32:01.515 "uuid": "dffd5fa9-4d79-44f6-a9ac-5bacb93db5fb", 00:32:01.515 "md_size": 32, 00:32:01.515 "md_interleave": true, 00:32:01.515 "dif_type": 0, 00:32:01.515 "assigned_rate_limits": { 00:32:01.515 "rw_ios_per_sec": 0, 00:32:01.515 "rw_mbytes_per_sec": 0, 00:32:01.515 "r_mbytes_per_sec": 0, 00:32:01.515 "w_mbytes_per_sec": 0 00:32:01.515 }, 00:32:01.515 "claimed": true, 00:32:01.515 "claim_type": "exclusive_write", 00:32:01.515 "zoned": false, 00:32:01.515 "supported_io_types": { 00:32:01.515 "read": true, 00:32:01.515 "write": true, 00:32:01.515 "unmap": true, 00:32:01.515 "flush": true, 00:32:01.515 "reset": true, 00:32:01.515 "nvme_admin": false, 00:32:01.515 "nvme_io": false, 00:32:01.515 "nvme_io_md": false, 00:32:01.515 "write_zeroes": true, 00:32:01.515 "zcopy": true, 00:32:01.515 "get_zone_info": false, 00:32:01.515 "zone_management": false, 00:32:01.515 "zone_append": false, 00:32:01.515 "compare": false, 00:32:01.515 "compare_and_write": false, 00:32:01.515 "abort": true, 00:32:01.515 "seek_hole": false, 00:32:01.515 "seek_data": false, 00:32:01.515 "copy": true, 00:32:01.515 "nvme_iov_md": false 00:32:01.515 }, 00:32:01.515 "memory_domains": [ 00:32:01.515 { 00:32:01.515 "dma_device_id": "system", 00:32:01.515 "dma_device_type": 1 00:32:01.515 }, 00:32:01.515 { 00:32:01.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:01.515 "dma_device_type": 2 00:32:01.515 } 00:32:01.515 ], 00:32:01.515 "driver_specific": {} 00:32:01.515 }' 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:01.515 
15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:01.515 15:25:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:01.773 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:01.773 "name": "BaseBdev2", 00:32:01.773 "aliases": [ 00:32:01.773 "f1808a61-c8a9-471a-a204-2544d958bd53" 00:32:01.773 ], 00:32:01.773 "product_name": "Malloc disk", 00:32:01.773 "block_size": 4128, 00:32:01.773 "num_blocks": 8192, 00:32:01.773 "uuid": "f1808a61-c8a9-471a-a204-2544d958bd53", 00:32:01.773 "md_size": 32, 00:32:01.773 "md_interleave": true, 00:32:01.773 "dif_type": 0, 00:32:01.773 "assigned_rate_limits": { 00:32:01.773 "rw_ios_per_sec": 0, 00:32:01.773 "rw_mbytes_per_sec": 0, 00:32:01.773 "r_mbytes_per_sec": 0, 00:32:01.773 "w_mbytes_per_sec": 0 00:32:01.773 }, 00:32:01.773 "claimed": true, 00:32:01.773 "claim_type": "exclusive_write", 00:32:01.773 "zoned": false, 00:32:01.773 "supported_io_types": { 00:32:01.773 "read": true, 00:32:01.773 "write": true, 00:32:01.773 "unmap": true, 00:32:01.773 "flush": true, 00:32:01.773 "reset": true, 00:32:01.773 "nvme_admin": false, 00:32:01.773 "nvme_io": false, 00:32:01.773 "nvme_io_md": false, 00:32:01.773 "write_zeroes": true, 00:32:01.773 "zcopy": true, 00:32:01.773 "get_zone_info": false, 00:32:01.773 "zone_management": false, 00:32:01.773 "zone_append": false, 00:32:01.773 "compare": false, 00:32:01.773 "compare_and_write": false, 00:32:01.773 "abort": true, 00:32:01.773 "seek_hole": false, 00:32:01.773 "seek_data": false, 00:32:01.773 "copy": true, 00:32:01.773 "nvme_iov_md": false 00:32:01.773 }, 00:32:01.773 "memory_domains": [ 00:32:01.773 { 00:32:01.773 "dma_device_id": "system", 00:32:01.773 "dma_device_type": 1 00:32:01.773 }, 00:32:01.773 { 00:32:01.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:01.773 "dma_device_type": 2 00:32:01.773 } 00:32:01.773 ], 00:32:01.773 "driver_specific": {} 00:32:01.773 }' 00:32:01.773 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:01.773 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:01.773 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:32:01.773 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:01.773 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:01.773 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:01.773 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:01.773 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:01.773 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:32:01.773 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:01.773 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:01.773 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:01.773 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:02.031 [2024-07-23 15:25:57.396641] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.031 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:02.289 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:02.289 "name": "Existed_Raid", 00:32:02.289 "uuid": "487f917e-ee48-4656-82ca-65c1ab0e7265", 00:32:02.289 "strip_size_kb": 0, 00:32:02.289 "state": "online", 00:32:02.289 "raid_level": "raid1", 00:32:02.289 "superblock": true, 00:32:02.289 "num_base_bdevs": 2, 00:32:02.289 "num_base_bdevs_discovered": 1, 00:32:02.289 "num_base_bdevs_operational": 1, 00:32:02.289 "base_bdevs_list": [ 00:32:02.289 { 00:32:02.289 "name": null, 
00:32:02.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.289 "is_configured": false, 00:32:02.289 "data_offset": 256, 00:32:02.289 "data_size": 7936 00:32:02.289 }, 00:32:02.289 { 00:32:02.289 "name": "BaseBdev2", 00:32:02.289 "uuid": "f1808a61-c8a9-471a-a204-2544d958bd53", 00:32:02.289 "is_configured": true, 00:32:02.289 "data_offset": 256, 00:32:02.289 "data_size": 7936 00:32:02.289 } 00:32:02.289 ] 00:32:02.289 }' 00:32:02.289 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:02.289 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:02.853 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:02.853 15:25:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:02.853 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.853 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:02.853 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:02.853 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:02.853 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:03.111 [2024-07-23 15:25:58.422008] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:03.111 [2024-07-23 15:25:58.422143] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:03.111 [2024-07-23 15:25:58.435431] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:03.111 [2024-07-23 15:25:58.435725] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:03.111 [2024-07-23 15:25:58.435761] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006080 name Existed_Raid, state offline 00:32:03.111 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:03.111 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:03.111 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.111 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:03.369 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:03.369 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:03.369 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:32:03.369 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 123377 00:32:03.369 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@948 -- # '[' -z 123377 ']' 00:32:03.369 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 123377 00:32:03.369 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:32:03.369 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:03.369 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123377 00:32:03.369 killing process with pid 123377 00:32:03.369 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:03.369 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:03.369 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123377' 00:32:03.369 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 123377 00:32:03.369 [2024-07-23 15:25:58.674938] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:03.369 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 123377 00:32:03.369 [2024-07-23 15:25:58.675024] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:03.626 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:32:03.626 00:32:03.626 real 0m7.978s 00:32:03.626 user 0m13.366s 00:32:03.626 sys 0m1.740s 00:32:03.626 ************************************ 00:32:03.626 END TEST raid_state_function_test_sb_md_interleaved 00:32:03.626 ************************************ 00:32:03.626 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:03.626 15:25:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:03.626 15:25:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:03.626 15:25:58 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:32:03.626 15:25:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:32:03.626 15:25:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:03.626 15:25:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:03.626 ************************************ 00:32:03.626 START TEST raid_superblock_test_md_interleaved 00:32:03.626 ************************************ 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:32:03.626 15:25:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:32:03.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=123690 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 123690 /var/tmp/spdk-raid.sock 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 123690 ']' 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:32:03.626 15:25:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:03.626 [2024-07-23 15:25:59.053847] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:32:03.626 [2024-07-23 15:25:59.054069] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123690 ] 00:32:03.884 [2024-07-23 15:25:59.206925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.884 [2024-07-23 15:25:59.252418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.884 [2024-07-23 15:25:59.297829] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:04.818 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:04.818 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:32:04.818 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:32:04.818 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:04.818 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:32:04.818 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:32:04.818 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:04.818 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:04.818 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:32:04.818 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:04.818 15:25:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:32:04.818 malloc1 00:32:04.818 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:04.818 [2024-07-23 15:26:00.229881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:04.818 [2024-07-23 15:26:00.229974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:04.818 [2024-07-23 15:26:00.230007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:32:04.818 [2024-07-23 15:26:00.230027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:04.818 [2024-07-23 15:26:00.232679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:04.818 [2024-07-23 15:26:00.232722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:04.818 pt1 00:32:04.818 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:32:04.818 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:04.818 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:32:04.818 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:32:04.818 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:04.818 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:04.818 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:32:04.818 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:04.818 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:32:05.076 malloc2 00:32:05.076 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:05.352 [2024-07-23 15:26:00.657009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:05.352 [2024-07-23 15:26:00.657092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:05.352 [2024-07-23 15:26:00.657117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:32:05.352 [2024-07-23 15:26:00.657131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:05.352 [2024-07-23 15:26:00.659352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:05.352 [2024-07-23 15:26:00.659392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:05.352 pt2 00:32:05.352 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:32:05.352 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:05.352 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:32:05.617 [2024-07-23 15:26:00.849149] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:05.617 [2024-07-23 15:26:00.851326] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:05.617 [2024-07-23 15:26:00.851538] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006c80 00:32:05.617 [2024-07-23 15:26:00.851556] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:05.617 [2024-07-23 15:26:00.851681] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000001f80 00:32:05.617 [2024-07-23 15:26:00.851762] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006c80 00:32:05.617 [2024-07-23 15:26:00.851774] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000006c80 00:32:05.617 [2024-07-23 15:26:00.851847] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:05.617 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:05.617 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:32:05.617 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:05.617 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:05.618 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:05.618 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:05.618 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:05.618 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:05.618 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:05.618 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:05.618 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.618 15:26:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:05.877 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:05.877 "name": "raid_bdev1", 00:32:05.877 "uuid": "8be17162-40c1-467a-a282-25001d970ca8", 00:32:05.877 "strip_size_kb": 0, 00:32:05.877 "state": "online", 00:32:05.877 "raid_level": "raid1", 00:32:05.877 "superblock": true, 00:32:05.877 "num_base_bdevs": 2, 00:32:05.877 "num_base_bdevs_discovered": 2, 00:32:05.877 "num_base_bdevs_operational": 2, 00:32:05.877 "base_bdevs_list": [ 00:32:05.877 { 00:32:05.877 "name": "pt1", 00:32:05.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:05.877 "is_configured": true, 00:32:05.877 "data_offset": 256, 00:32:05.877 "data_size": 7936 00:32:05.877 }, 00:32:05.877 { 00:32:05.877 "name": "pt2", 00:32:05.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:05.877 "is_configured": true, 00:32:05.877 "data_offset": 256, 00:32:05.877 "data_size": 7936 00:32:05.877 } 00:32:05.877 ] 00:32:05.877 }' 00:32:05.877 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:05.877 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:06.135 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:32:06.135 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:06.135 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:06.135 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:06.135 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:06.135 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:32:06.135 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:06.135 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b raid_bdev1 00:32:06.135 [2024-07-23 15:26:01.565527] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:06.393 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:06.393 "name": "raid_bdev1", 00:32:06.393 "aliases": [ 00:32:06.393 "8be17162-40c1-467a-a282-25001d970ca8" 00:32:06.393 ], 00:32:06.393 "product_name": "Raid Volume", 00:32:06.393 "block_size": 4128, 00:32:06.393 "num_blocks": 7936, 00:32:06.393 "uuid": "8be17162-40c1-467a-a282-25001d970ca8", 00:32:06.393 "md_size": 32, 00:32:06.393 "md_interleave": true, 00:32:06.393 "dif_type": 0, 00:32:06.393 "assigned_rate_limits": { 00:32:06.393 "rw_ios_per_sec": 0, 00:32:06.393 "rw_mbytes_per_sec": 0, 00:32:06.393 "r_mbytes_per_sec": 0, 00:32:06.393 "w_mbytes_per_sec": 0 00:32:06.393 }, 00:32:06.393 "claimed": false, 00:32:06.393 "zoned": false, 00:32:06.393 "supported_io_types": { 00:32:06.393 "read": true, 00:32:06.393 "write": true, 00:32:06.393 "unmap": false, 00:32:06.393 "flush": false, 00:32:06.393 "reset": true, 00:32:06.393 "nvme_admin": false, 00:32:06.393 "nvme_io": false, 00:32:06.393 "nvme_io_md": false, 00:32:06.393 "write_zeroes": true, 00:32:06.393 "zcopy": false, 00:32:06.393 "get_zone_info": false, 00:32:06.393 "zone_management": false, 00:32:06.393 "zone_append": false, 00:32:06.393 "compare": false, 00:32:06.393 "compare_and_write": false, 00:32:06.393 "abort": false, 00:32:06.393 "seek_hole": false, 00:32:06.393 "seek_data": false, 00:32:06.393 "copy": false, 00:32:06.393 "nvme_iov_md": false 00:32:06.393 }, 00:32:06.393 "memory_domains": [ 00:32:06.393 { 00:32:06.393 "dma_device_id": "system", 00:32:06.393 "dma_device_type": 1 00:32:06.393 }, 00:32:06.393 { 00:32:06.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:06.393 "dma_device_type": 2 00:32:06.393 }, 00:32:06.393 { 00:32:06.393 "dma_device_id": "system", 00:32:06.393 "dma_device_type": 1 00:32:06.393 }, 00:32:06.393 { 00:32:06.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:06.393 "dma_device_type": 2 00:32:06.393 } 00:32:06.393 ], 00:32:06.393 "driver_specific": { 00:32:06.393 "raid": { 00:32:06.393 "uuid": "8be17162-40c1-467a-a282-25001d970ca8", 00:32:06.393 "strip_size_kb": 0, 00:32:06.393 "state": "online", 00:32:06.393 "raid_level": "raid1", 00:32:06.393 "superblock": true, 00:32:06.393 "num_base_bdevs": 2, 00:32:06.393 "num_base_bdevs_discovered": 2, 00:32:06.393 "num_base_bdevs_operational": 2, 00:32:06.393 "base_bdevs_list": [ 00:32:06.393 { 00:32:06.393 "name": "pt1", 00:32:06.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:06.393 "is_configured": true, 00:32:06.393 "data_offset": 256, 00:32:06.393 "data_size": 7936 00:32:06.393 }, 00:32:06.393 { 00:32:06.393 "name": "pt2", 00:32:06.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:06.393 "is_configured": true, 00:32:06.393 "data_offset": 256, 00:32:06.393 "data_size": 7936 00:32:06.393 } 00:32:06.393 ] 00:32:06.393 } 00:32:06.393 } 00:32:06.393 }' 00:32:06.393 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:06.393 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:06.393 pt2' 00:32:06.393 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:06.393 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:06.393 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:06.393 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:06.393 "name": "pt1", 00:32:06.393 "aliases": [ 00:32:06.393 "00000000-0000-0000-0000-000000000001" 00:32:06.393 ], 00:32:06.393 "product_name": "passthru", 00:32:06.393 "block_size": 4128, 00:32:06.393 "num_blocks": 8192, 00:32:06.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:06.393 "md_size": 32, 00:32:06.393 "md_interleave": true, 00:32:06.393 "dif_type": 0, 00:32:06.393 "assigned_rate_limits": { 00:32:06.393 "rw_ios_per_sec": 0, 00:32:06.393 "rw_mbytes_per_sec": 0, 00:32:06.393 "r_mbytes_per_sec": 0, 00:32:06.393 "w_mbytes_per_sec": 0 00:32:06.393 }, 00:32:06.393 "claimed": true, 00:32:06.393 "claim_type": "exclusive_write", 00:32:06.393 "zoned": false, 00:32:06.393 "supported_io_types": { 00:32:06.393 "read": true, 00:32:06.393 "write": true, 00:32:06.393 "unmap": true, 00:32:06.393 "flush": true, 00:32:06.393 "reset": true, 00:32:06.393 "nvme_admin": false, 00:32:06.393 "nvme_io": false, 00:32:06.393 "nvme_io_md": false, 00:32:06.393 "write_zeroes": true, 00:32:06.393 "zcopy": true, 00:32:06.393 "get_zone_info": false, 00:32:06.393 "zone_management": false, 00:32:06.393 "zone_append": false, 00:32:06.393 "compare": false, 00:32:06.393 "compare_and_write": false, 00:32:06.393 "abort": true, 00:32:06.393 "seek_hole": false, 00:32:06.393 "seek_data": false, 00:32:06.393 "copy": true, 00:32:06.393 "nvme_iov_md": false 00:32:06.393 }, 00:32:06.393 "memory_domains": [ 00:32:06.393 { 00:32:06.393 "dma_device_id": "system", 00:32:06.393 "dma_device_type": 1 00:32:06.393 }, 00:32:06.393 { 00:32:06.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:06.393 "dma_device_type": 2 00:32:06.393 } 00:32:06.393 ], 00:32:06.393 "driver_specific": { 00:32:06.393 "passthru": { 00:32:06.393 "name": "pt1", 00:32:06.393 "base_bdev_name": "malloc1" 00:32:06.393 } 00:32:06.393 } 00:32:06.393 }' 00:32:06.393 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:06.393 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:06.393 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:32:06.393 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:06.393 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:06.651 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:06.651 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:06.651 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:06.651 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:32:06.651 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:06.651 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:06.651 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:06.651 15:26:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:06.651 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:06.651 15:26:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:06.908 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:06.908 "name": "pt2", 00:32:06.908 "aliases": [ 00:32:06.908 "00000000-0000-0000-0000-000000000002" 00:32:06.908 ], 00:32:06.908 "product_name": "passthru", 00:32:06.908 "block_size": 4128, 00:32:06.908 "num_blocks": 8192, 00:32:06.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:06.908 "md_size": 32, 00:32:06.908 "md_interleave": true, 00:32:06.908 "dif_type": 0, 00:32:06.908 "assigned_rate_limits": { 00:32:06.908 "rw_ios_per_sec": 0, 00:32:06.908 "rw_mbytes_per_sec": 0, 00:32:06.908 "r_mbytes_per_sec": 0, 00:32:06.908 "w_mbytes_per_sec": 0 00:32:06.908 }, 00:32:06.908 "claimed": true, 00:32:06.908 "claim_type": "exclusive_write", 00:32:06.908 "zoned": false, 00:32:06.908 "supported_io_types": { 00:32:06.908 "read": true, 00:32:06.908 "write": true, 00:32:06.908 "unmap": true, 00:32:06.908 "flush": true, 00:32:06.908 "reset": true, 00:32:06.908 "nvme_admin": false, 00:32:06.908 "nvme_io": false, 00:32:06.908 "nvme_io_md": false, 00:32:06.908 "write_zeroes": true, 00:32:06.908 "zcopy": true, 00:32:06.908 "get_zone_info": false, 00:32:06.908 "zone_management": false, 00:32:06.908 "zone_append": false, 00:32:06.908 "compare": false, 00:32:06.908 "compare_and_write": false, 00:32:06.908 "abort": true, 00:32:06.908 "seek_hole": false, 00:32:06.908 "seek_data": false, 00:32:06.908 "copy": true, 00:32:06.908 "nvme_iov_md": false 00:32:06.908 }, 00:32:06.908 "memory_domains": [ 00:32:06.908 { 00:32:06.908 "dma_device_id": "system", 00:32:06.908 "dma_device_type": 1 00:32:06.908 }, 00:32:06.908 { 00:32:06.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:06.908 "dma_device_type": 2 00:32:06.908 } 00:32:06.908 ], 00:32:06.908 "driver_specific": { 00:32:06.908 "passthru": { 00:32:06.908 "name": "pt2", 00:32:06.909 "base_bdev_name": "malloc2" 00:32:06.909 } 00:32:06.909 } 00:32:06.909 }' 00:32:06.909 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:06.909 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:06.909 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:32:06.909 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:06.909 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:06.909 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:06.909 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:06.909 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:06.909 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:32:06.909 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:06.909 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:06.909 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:06.909 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:32:06.909 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:07.166 [2024-07-23 15:26:02.473687] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:07.166 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8be17162-40c1-467a-a282-25001d970ca8 00:32:07.166 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 8be17162-40c1-467a-a282-25001d970ca8 ']' 00:32:07.166 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:07.425 [2024-07-23 15:26:02.721434] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:07.425 [2024-07-23 15:26:02.721483] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:07.425 [2024-07-23 15:26:02.721583] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:07.425 [2024-07-23 15:26:02.721651] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:07.425 [2024-07-23 15:26:02.721670] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006c80 name raid_bdev1, state offline 00:32:07.425 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.425 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:32:07.683 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:32:07.683 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:32:07.683 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:32:07.683 15:26:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:07.940 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:32:07.940 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:07.940 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:07.940 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:32:08.197 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:32:08.197 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:08.197 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:32:08.197 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:08.197 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:08.197 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:08.197 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:08.197 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:08.197 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:08.197 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:08.197 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:08.197 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:08.197 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:08.454 [2024-07-23 15:26:03.761680] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:08.454 [2024-07-23 15:26:03.763925] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:08.454 [2024-07-23 15:26:03.764001] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:08.454 [2024-07-23 15:26:03.764094] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:08.454 [2024-07-23 15:26:03.764117] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:08.454 [2024-07-23 15:26:03.764128] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007280 name raid_bdev1, state configuring 00:32:08.454 request: 00:32:08.454 { 00:32:08.454 "name": "raid_bdev1", 00:32:08.454 "raid_level": "raid1", 00:32:08.454 "base_bdevs": [ 00:32:08.454 "malloc1", 00:32:08.454 "malloc2" 00:32:08.454 ], 00:32:08.454 "superblock": false, 00:32:08.454 "method": "bdev_raid_create", 00:32:08.454 "req_id": 1 00:32:08.454 } 00:32:08.454 Got JSON-RPC error response 00:32:08.454 response: 00:32:08.454 { 00:32:08.454 "code": -17, 00:32:08.454 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:08.454 } 00:32:08.454 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:32:08.454 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:08.454 15:26:03 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:08.454 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:08.454 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:08.454 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:32:08.712 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:32:08.712 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:32:08.712 15:26:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:08.969 [2024-07-23 15:26:04.181712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:08.969 [2024-07-23 15:26:04.181805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:08.969 [2024-07-23 15:26:04.181831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:32:08.969 [2024-07-23 15:26:04.181843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:08.969 [2024-07-23 15:26:04.184037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:08.969 [2024-07-23 15:26:04.184079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:08.969 [2024-07-23 15:26:04.184147] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:08.969 [2024-07-23 15:26:04.184202] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:08.969 pt1 00:32:08.969 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:32:08.969 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:08.969 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:08.969 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:08.969 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:08.969 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:08.969 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:08.969 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:08.969 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:08.969 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:08.969 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:08.969 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:08.969 15:26:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:08.969 "name": "raid_bdev1", 00:32:08.969 "uuid": "8be17162-40c1-467a-a282-25001d970ca8", 00:32:08.969 "strip_size_kb": 0, 00:32:08.969 "state": "configuring", 00:32:08.969 "raid_level": "raid1", 00:32:08.969 "superblock": true, 00:32:08.969 "num_base_bdevs": 2, 00:32:08.969 "num_base_bdevs_discovered": 1, 00:32:08.969 "num_base_bdevs_operational": 2, 00:32:08.969 "base_bdevs_list": [ 00:32:08.969 { 00:32:08.969 "name": "pt1", 00:32:08.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:08.969 "is_configured": true, 00:32:08.969 "data_offset": 256, 00:32:08.969 "data_size": 7936 00:32:08.969 }, 00:32:08.969 { 00:32:08.969 "name": null, 00:32:08.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:08.969 "is_configured": false, 00:32:08.969 "data_offset": 256, 00:32:08.969 "data_size": 7936 00:32:08.969 } 00:32:08.969 ] 00:32:08.969 }' 00:32:08.969 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:08.969 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:09.535 [2024-07-23 15:26:04.877904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:09.535 [2024-07-23 15:26:04.877982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:09.535 [2024-07-23 15:26:04.878009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:32:09.535 [2024-07-23 15:26:04.878021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:09.535 [2024-07-23 15:26:04.878191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:09.535 [2024-07-23 15:26:04.878206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:09.535 [2024-07-23 15:26:04.878264] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:09.535 [2024-07-23 15:26:04.878293] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:09.535 [2024-07-23 15:26:04.878393] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:32:09.535 [2024-07-23 15:26:04.878403] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:09.535 [2024-07-23 15:26:04.878477] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:32:09.535 [2024-07-23 15:26:04.878532] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:32:09.535 [2024-07-23 15:26:04.878544] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:32:09.535 [2024-07-23 15:26:04.878593] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:09.535 pt2 00:32:09.535 
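The trace above is the superblock re-assembly path: re-creating pt1 lets the examine callback find the raid superblock and claim it, leaving raid_bdev1 in the configuring state with 1 of 2 base bdevs, and re-creating pt2 brings the volume back online. Reproduced standalone, the step is just the two passthru creates again (a sketch only, not part of the run above; it assumes an SPDK target is already serving /var/tmp/spdk-raid.sock and that malloc1/malloc2 still carry the superblocks written when raid_bdev1 was first created with bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Recreate both passthru bdevs with the same fixed UUIDs; examine finds the
  # raid superblock on each one and re-claims it for raid_bdev1.
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # Once both legs are back, the volume should report state "online" again.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'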
15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:09.535 15:26:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:09.793 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:09.793 "name": "raid_bdev1", 00:32:09.793 "uuid": "8be17162-40c1-467a-a282-25001d970ca8", 00:32:09.793 "strip_size_kb": 0, 00:32:09.793 "state": "online", 00:32:09.793 "raid_level": "raid1", 00:32:09.793 "superblock": true, 00:32:09.793 "num_base_bdevs": 2, 00:32:09.793 "num_base_bdevs_discovered": 2, 00:32:09.793 "num_base_bdevs_operational": 2, 00:32:09.793 "base_bdevs_list": [ 00:32:09.793 { 00:32:09.793 "name": "pt1", 00:32:09.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:09.793 "is_configured": true, 00:32:09.793 "data_offset": 256, 00:32:09.793 "data_size": 7936 00:32:09.793 }, 00:32:09.793 { 00:32:09.793 "name": "pt2", 00:32:09.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:09.793 "is_configured": true, 00:32:09.793 "data_offset": 256, 00:32:09.793 "data_size": 7936 00:32:09.793 } 00:32:09.793 ] 00:32:09.793 }' 00:32:09.793 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:09.793 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:10.359 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:32:10.359 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:10.359 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:10.359 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:10.359 15:26:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:10.359 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:32:10.359 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:10.359 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:10.359 [2024-07-23 15:26:05.746400] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:10.359 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:10.359 "name": "raid_bdev1", 00:32:10.359 "aliases": [ 00:32:10.359 "8be17162-40c1-467a-a282-25001d970ca8" 00:32:10.359 ], 00:32:10.359 "product_name": "Raid Volume", 00:32:10.359 "block_size": 4128, 00:32:10.359 "num_blocks": 7936, 00:32:10.359 "uuid": "8be17162-40c1-467a-a282-25001d970ca8", 00:32:10.359 "md_size": 32, 00:32:10.359 "md_interleave": true, 00:32:10.359 "dif_type": 0, 00:32:10.359 "assigned_rate_limits": { 00:32:10.359 "rw_ios_per_sec": 0, 00:32:10.359 "rw_mbytes_per_sec": 0, 00:32:10.359 "r_mbytes_per_sec": 0, 00:32:10.359 "w_mbytes_per_sec": 0 00:32:10.359 }, 00:32:10.359 "claimed": false, 00:32:10.359 "zoned": false, 00:32:10.359 "supported_io_types": { 00:32:10.359 "read": true, 00:32:10.359 "write": true, 00:32:10.359 "unmap": false, 00:32:10.359 "flush": false, 00:32:10.359 "reset": true, 00:32:10.359 "nvme_admin": false, 00:32:10.359 "nvme_io": false, 00:32:10.359 "nvme_io_md": false, 00:32:10.359 "write_zeroes": true, 00:32:10.359 "zcopy": false, 00:32:10.359 "get_zone_info": false, 00:32:10.359 "zone_management": false, 00:32:10.359 "zone_append": false, 00:32:10.359 "compare": false, 00:32:10.359 "compare_and_write": false, 00:32:10.359 "abort": false, 00:32:10.359 "seek_hole": false, 00:32:10.359 "seek_data": false, 00:32:10.359 "copy": false, 00:32:10.359 "nvme_iov_md": false 00:32:10.359 }, 00:32:10.359 "memory_domains": [ 00:32:10.359 { 00:32:10.359 "dma_device_id": "system", 00:32:10.359 "dma_device_type": 1 00:32:10.359 }, 00:32:10.359 { 00:32:10.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:10.359 "dma_device_type": 2 00:32:10.359 }, 00:32:10.359 { 00:32:10.359 "dma_device_id": "system", 00:32:10.359 "dma_device_type": 1 00:32:10.359 }, 00:32:10.359 { 00:32:10.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:10.359 "dma_device_type": 2 00:32:10.359 } 00:32:10.359 ], 00:32:10.359 "driver_specific": { 00:32:10.359 "raid": { 00:32:10.359 "uuid": "8be17162-40c1-467a-a282-25001d970ca8", 00:32:10.359 "strip_size_kb": 0, 00:32:10.359 "state": "online", 00:32:10.359 "raid_level": "raid1", 00:32:10.359 "superblock": true, 00:32:10.359 "num_base_bdevs": 2, 00:32:10.359 "num_base_bdevs_discovered": 2, 00:32:10.359 "num_base_bdevs_operational": 2, 00:32:10.359 "base_bdevs_list": [ 00:32:10.359 { 00:32:10.359 "name": "pt1", 00:32:10.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:10.359 "is_configured": true, 00:32:10.359 "data_offset": 256, 00:32:10.359 "data_size": 7936 00:32:10.359 }, 00:32:10.359 { 00:32:10.359 "name": "pt2", 00:32:10.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:10.359 "is_configured": true, 00:32:10.359 "data_offset": 256, 00:32:10.359 "data_size": 7936 00:32:10.359 } 00:32:10.359 ] 00:32:10.359 } 00:32:10.359 } 00:32:10.359 }' 00:32:10.359 15:26:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:10.359 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:10.359 pt2' 00:32:10.359 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:10.359 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:10.359 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:10.633 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:10.633 "name": "pt1", 00:32:10.633 "aliases": [ 00:32:10.633 "00000000-0000-0000-0000-000000000001" 00:32:10.633 ], 00:32:10.633 "product_name": "passthru", 00:32:10.633 "block_size": 4128, 00:32:10.633 "num_blocks": 8192, 00:32:10.633 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:10.633 "md_size": 32, 00:32:10.633 "md_interleave": true, 00:32:10.633 "dif_type": 0, 00:32:10.633 "assigned_rate_limits": { 00:32:10.633 "rw_ios_per_sec": 0, 00:32:10.633 "rw_mbytes_per_sec": 0, 00:32:10.633 "r_mbytes_per_sec": 0, 00:32:10.633 "w_mbytes_per_sec": 0 00:32:10.633 }, 00:32:10.633 "claimed": true, 00:32:10.633 "claim_type": "exclusive_write", 00:32:10.633 "zoned": false, 00:32:10.633 "supported_io_types": { 00:32:10.633 "read": true, 00:32:10.633 "write": true, 00:32:10.633 "unmap": true, 00:32:10.633 "flush": true, 00:32:10.633 "reset": true, 00:32:10.633 "nvme_admin": false, 00:32:10.633 "nvme_io": false, 00:32:10.633 "nvme_io_md": false, 00:32:10.633 "write_zeroes": true, 00:32:10.633 "zcopy": true, 00:32:10.633 "get_zone_info": false, 00:32:10.633 "zone_management": false, 00:32:10.633 "zone_append": false, 00:32:10.633 "compare": false, 00:32:10.633 "compare_and_write": false, 00:32:10.633 "abort": true, 00:32:10.633 "seek_hole": false, 00:32:10.633 "seek_data": false, 00:32:10.633 "copy": true, 00:32:10.633 "nvme_iov_md": false 00:32:10.633 }, 00:32:10.633 "memory_domains": [ 00:32:10.633 { 00:32:10.633 "dma_device_id": "system", 00:32:10.633 "dma_device_type": 1 00:32:10.633 }, 00:32:10.633 { 00:32:10.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:10.633 "dma_device_type": 2 00:32:10.633 } 00:32:10.633 ], 00:32:10.633 "driver_specific": { 00:32:10.633 "passthru": { 00:32:10.633 "name": "pt1", 00:32:10.633 "base_bdev_name": "malloc1" 00:32:10.633 } 00:32:10.633 } 00:32:10.633 }' 00:32:10.633 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:10.633 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:10.633 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:32:10.633 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:10.633 15:26:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:10.633 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:10.633 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:10.633 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:32:10.633 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:32:10.633 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:10.633 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:10.633 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:10.633 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:10.633 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:10.633 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:10.891 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:10.891 "name": "pt2", 00:32:10.891 "aliases": [ 00:32:10.891 "00000000-0000-0000-0000-000000000002" 00:32:10.891 ], 00:32:10.891 "product_name": "passthru", 00:32:10.891 "block_size": 4128, 00:32:10.891 "num_blocks": 8192, 00:32:10.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:10.891 "md_size": 32, 00:32:10.891 "md_interleave": true, 00:32:10.891 "dif_type": 0, 00:32:10.891 "assigned_rate_limits": { 00:32:10.891 "rw_ios_per_sec": 0, 00:32:10.891 "rw_mbytes_per_sec": 0, 00:32:10.891 "r_mbytes_per_sec": 0, 00:32:10.891 "w_mbytes_per_sec": 0 00:32:10.891 }, 00:32:10.891 "claimed": true, 00:32:10.891 "claim_type": "exclusive_write", 00:32:10.891 "zoned": false, 00:32:10.891 "supported_io_types": { 00:32:10.891 "read": true, 00:32:10.891 "write": true, 00:32:10.891 "unmap": true, 00:32:10.891 "flush": true, 00:32:10.891 "reset": true, 00:32:10.891 "nvme_admin": false, 00:32:10.891 "nvme_io": false, 00:32:10.891 "nvme_io_md": false, 00:32:10.891 "write_zeroes": true, 00:32:10.891 "zcopy": true, 00:32:10.891 "get_zone_info": false, 00:32:10.891 "zone_management": false, 00:32:10.891 "zone_append": false, 00:32:10.891 "compare": false, 00:32:10.891 "compare_and_write": false, 00:32:10.891 "abort": true, 00:32:10.891 "seek_hole": false, 00:32:10.891 "seek_data": false, 00:32:10.891 "copy": true, 00:32:10.891 "nvme_iov_md": false 00:32:10.891 }, 00:32:10.891 "memory_domains": [ 00:32:10.891 { 00:32:10.891 "dma_device_id": "system", 00:32:10.891 "dma_device_type": 1 00:32:10.891 }, 00:32:10.891 { 00:32:10.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:10.891 "dma_device_type": 2 00:32:10.891 } 00:32:10.891 ], 00:32:10.891 "driver_specific": { 00:32:10.891 "passthru": { 00:32:10.891 "name": "pt2", 00:32:10.891 "base_bdev_name": "malloc2" 00:32:10.891 } 00:32:10.891 } 00:32:10.891 }' 00:32:10.892 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:10.892 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:10.892 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:32:10.892 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:10.892 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:10.892 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:10.892 15:26:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:10.892 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:10.892 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:32:10.892 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:10.892 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:11.151 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:11.151 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:32:11.151 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:11.151 [2024-07-23 15:26:06.574589] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 8be17162-40c1-467a-a282-25001d970ca8 '!=' 8be17162-40c1-467a-a282-25001d970ca8 ']' 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:11.408 [2024-07-23 15:26:06.770439] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:11.408 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.665 15:26:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:11.665 "name": "raid_bdev1", 00:32:11.665 "uuid": "8be17162-40c1-467a-a282-25001d970ca8", 00:32:11.665 "strip_size_kb": 0, 00:32:11.665 "state": "online", 00:32:11.665 "raid_level": "raid1", 00:32:11.665 "superblock": true, 00:32:11.665 "num_base_bdevs": 2, 00:32:11.665 "num_base_bdevs_discovered": 1, 00:32:11.665 "num_base_bdevs_operational": 1, 00:32:11.665 "base_bdevs_list": [ 00:32:11.665 { 00:32:11.665 "name": null, 00:32:11.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:11.665 "is_configured": false, 00:32:11.665 "data_offset": 256, 00:32:11.665 "data_size": 7936 00:32:11.665 }, 00:32:11.665 { 00:32:11.665 "name": "pt2", 00:32:11.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:11.665 "is_configured": true, 00:32:11.665 "data_offset": 256, 00:32:11.665 "data_size": 7936 00:32:11.665 } 00:32:11.665 ] 00:32:11.665 }' 00:32:11.665 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:11.665 15:26:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:11.922 15:26:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:12.179 [2024-07-23 15:26:07.402509] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:12.179 [2024-07-23 15:26:07.402728] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:12.179 [2024-07-23 15:26:07.402839] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:12.179 [2024-07-23 15:26:07.402897] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:12.179 [2024-07-23 15:26:07.402910] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:32:12.179 15:26:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:32:12.179 15:26:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.437 15:26:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:32:12.437 15:26:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:32:12.437 15:26:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:32:12.437 15:26:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:32:12.437 15:26:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:12.438 15:26:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:32:12.438 15:26:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:32:12.438 15:26:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:32:12.438 15:26:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:32:12.438 15:26:07 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@518 -- # i=1 00:32:12.438 15:26:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:12.695 [2024-07-23 15:26:08.022620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:12.695 [2024-07-23 15:26:08.023023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:12.695 [2024-07-23 15:26:08.023065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008480 00:32:12.695 [2024-07-23 15:26:08.023077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:12.695 [2024-07-23 15:26:08.025261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:12.695 [2024-07-23 15:26:08.025302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:12.695 [2024-07-23 15:26:08.025367] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:12.695 [2024-07-23 15:26:08.025400] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:12.695 [2024-07-23 15:26:08.025470] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008a80 00:32:12.695 [2024-07-23 15:26:08.025480] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:12.695 [2024-07-23 15:26:08.025579] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:32:12.695 [2024-07-23 15:26:08.025650] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008a80 00:32:12.695 [2024-07-23 15:26:08.025664] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008a80 00:32:12.695 [2024-07-23 15:26:08.025713] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:12.695 pt2 00:32:12.696 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:12.696 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:12.696 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:12.696 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:12.696 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:12.696 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:12.696 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:12.696 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:12.696 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:12.696 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:12.696 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.696 15:26:08 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:12.955 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:12.955 "name": "raid_bdev1", 00:32:12.955 "uuid": "8be17162-40c1-467a-a282-25001d970ca8", 00:32:12.955 "strip_size_kb": 0, 00:32:12.955 "state": "online", 00:32:12.955 "raid_level": "raid1", 00:32:12.955 "superblock": true, 00:32:12.955 "num_base_bdevs": 2, 00:32:12.955 "num_base_bdevs_discovered": 1, 00:32:12.955 "num_base_bdevs_operational": 1, 00:32:12.955 "base_bdevs_list": [ 00:32:12.955 { 00:32:12.955 "name": null, 00:32:12.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:12.955 "is_configured": false, 00:32:12.955 "data_offset": 256, 00:32:12.955 "data_size": 7936 00:32:12.955 }, 00:32:12.955 { 00:32:12.955 "name": "pt2", 00:32:12.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:12.955 "is_configured": true, 00:32:12.955 "data_offset": 256, 00:32:12.955 "data_size": 7936 00:32:12.955 } 00:32:12.955 ] 00:32:12.955 }' 00:32:12.955 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:12.955 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:13.273 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:13.273 [2024-07-23 15:26:08.690756] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:13.273 [2024-07-23 15:26:08.690815] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:13.273 [2024-07-23 15:26:08.690895] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:13.273 [2024-07-23 15:26:08.690948] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:13.273 [2024-07-23 15:26:08.690963] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008a80 name raid_bdev1, state offline 00:32:13.531 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:13.531 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:32:13.789 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:32:13.789 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:32:13.789 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:32:13.789 15:26:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:13.789 [2024-07-23 15:26:09.130856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:13.789 [2024-07-23 15:26:09.130955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:13.789 [2024-07-23 15:26:09.130982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:32:13.789 [2024-07-23 15:26:09.130996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:13.789 [2024-07-23 15:26:09.133406] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:13.789 [2024-07-23 15:26:09.133567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:13.789 [2024-07-23 15:26:09.133707] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:13.789 [2024-07-23 15:26:09.133774] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:13.789 [2024-07-23 15:26:09.134040] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:13.789 [2024-07-23 15:26:09.134106] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:13.789 [2024-07-23 15:26:09.134214] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state configuring 00:32:13.789 [2024-07-23 15:26:09.134304] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:13.789 [2024-07-23 15:26:09.134401] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009680 00:32:13.789 [2024-07-23 15:26:09.134458] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:13.789 [2024-07-23 15:26:09.134568] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:32:13.789 [2024-07-23 15:26:09.134728] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009680 00:32:13.789 [2024-07-23 15:26:09.134767] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009680 00:32:13.789 pt1 00:32:13.789 [2024-07-23 15:26:09.135005] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:13.789 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:32:13.789 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:13.789 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:13.789 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:13.789 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:13.789 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:13.789 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:13.789 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:13.789 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:13.790 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:13.790 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:13.790 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:13.790 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:14.048 15:26:09 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:14.048 "name": "raid_bdev1", 00:32:14.048 "uuid": "8be17162-40c1-467a-a282-25001d970ca8", 00:32:14.048 "strip_size_kb": 0, 00:32:14.048 "state": "online", 00:32:14.048 "raid_level": "raid1", 00:32:14.048 "superblock": true, 00:32:14.048 "num_base_bdevs": 2, 00:32:14.048 "num_base_bdevs_discovered": 1, 00:32:14.048 "num_base_bdevs_operational": 1, 00:32:14.048 "base_bdevs_list": [ 00:32:14.048 { 00:32:14.048 "name": null, 00:32:14.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.048 "is_configured": false, 00:32:14.048 "data_offset": 256, 00:32:14.048 "data_size": 7936 00:32:14.048 }, 00:32:14.048 { 00:32:14.048 "name": "pt2", 00:32:14.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:14.048 "is_configured": true, 00:32:14.048 "data_offset": 256, 00:32:14.048 "data_size": 7936 00:32:14.048 } 00:32:14.048 ] 00:32:14.048 }' 00:32:14.048 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:14.048 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:14.306 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:14.307 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:32:14.565 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:32:14.565 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:14.565 15:26:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:32:14.824 [2024-07-23 15:26:10.107503] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:14.824 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 8be17162-40c1-467a-a282-25001d970ca8 '!=' 8be17162-40c1-467a-a282-25001d970ca8 ']' 00:32:14.824 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 123690 00:32:14.824 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 123690 ']' 00:32:14.824 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 123690 00:32:14.824 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:32:14.824 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:14.824 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123690 00:32:14.824 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:14.824 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:14.824 killing process with pid 123690 00:32:14.824 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123690' 00:32:14.824 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 
-- # kill 123690 00:32:14.824 [2024-07-23 15:26:10.169968] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:14.824 [2024-07-23 15:26:10.170052] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:14.824 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 123690 00:32:14.824 [2024-07-23 15:26:10.170108] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:14.824 [2024-07-23 15:26:10.170119] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009680 name raid_bdev1, state offline 00:32:14.824 [2024-07-23 15:26:10.195434] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:15.083 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:32:15.083 00:32:15.083 real 0m11.451s 00:32:15.083 user 0m19.501s 00:32:15.083 sys 0m2.567s 00:32:15.083 ************************************ 00:32:15.083 END TEST raid_superblock_test_md_interleaved 00:32:15.083 ************************************ 00:32:15.083 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:15.083 15:26:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:15.083 15:26:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:15.083 15:26:10 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:32:15.083 15:26:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:32:15.083 15:26:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:15.083 15:26:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:15.083 ************************************ 00:32:15.083 START TEST raid_rebuild_test_sb_md_interleaved 00:32:15.083 ************************************ 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 
00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=124147 00:32:15.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 124147 /var/tmp/spdk-raid.sock 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 124147 ']' 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:15.083 15:26:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:15.342 [2024-07-23 15:26:10.575734] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:32:15.342 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:15.342 Zero copy mechanism will not be used. 
00:32:15.342 [2024-07-23 15:26:10.576205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124147 ] 00:32:15.342 [2024-07-23 15:26:10.727820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.600 [2024-07-23 15:26:10.777808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.600 [2024-07-23 15:26:10.823087] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:16.168 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:16.168 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:32:16.168 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:16.168 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:32:16.427 BaseBdev1_malloc 00:32:16.427 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:16.427 [2024-07-23 15:26:11.834864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:16.427 [2024-07-23 15:26:11.834944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.427 [2024-07-23 15:26:11.834986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000005a80 00:32:16.427 [2024-07-23 15:26:11.835005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.427 [2024-07-23 15:26:11.837283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.427 [2024-07-23 15:26:11.837328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:16.427 BaseBdev1 00:32:16.427 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:16.427 15:26:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:32:16.686 BaseBdev2_malloc 00:32:16.686 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:16.945 [2024-07-23 15:26:12.263552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:16.945 [2024-07-23 15:26:12.263786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.945 [2024-07-23 15:26:12.263843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006680 00:32:16.945 [2024-07-23 15:26:12.263855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.945 [2024-07-23 15:26:12.266093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.945 [2024-07-23 15:26:12.266134] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev2 00:32:16.945 BaseBdev2 00:32:16.945 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:32:17.204 spare_malloc 00:32:17.204 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:17.204 spare_delay 00:32:17.462 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:17.462 [2024-07-23 15:26:12.789458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:17.462 [2024-07-23 15:26:12.789554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:17.462 [2024-07-23 15:26:12.789591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007880 00:32:17.462 [2024-07-23 15:26:12.789603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:17.462 [2024-07-23 15:26:12.791905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:17.462 [2024-07-23 15:26:12.791946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:17.462 spare 00:32:17.462 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:32:17.721 [2024-07-23 15:26:12.965582] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:17.721 [2024-07-23 15:26:12.968034] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:17.721 [2024-07-23 15:26:12.968250] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007e80 00:32:17.721 [2024-07-23 15:26:12.968265] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:17.721 [2024-07-23 15:26:12.968382] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002050 00:32:17.721 [2024-07-23 15:26:12.968470] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007e80 00:32:17.721 [2024-07-23 15:26:12.968489] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007e80 00:32:17.721 [2024-07-23 15:26:12.968559] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:17.721 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:17.721 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:17.721 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:17.721 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:17.721 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:17.721 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=2 00:32:17.721 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:17.721 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:17.721 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:17.721 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:17.721 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.721 15:26:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:17.980 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:17.980 "name": "raid_bdev1", 00:32:17.980 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:17.980 "strip_size_kb": 0, 00:32:17.981 "state": "online", 00:32:17.981 "raid_level": "raid1", 00:32:17.981 "superblock": true, 00:32:17.981 "num_base_bdevs": 2, 00:32:17.981 "num_base_bdevs_discovered": 2, 00:32:17.981 "num_base_bdevs_operational": 2, 00:32:17.981 "base_bdevs_list": [ 00:32:17.981 { 00:32:17.981 "name": "BaseBdev1", 00:32:17.981 "uuid": "a9439e71-a755-5a63-b80a-00382aace265", 00:32:17.981 "is_configured": true, 00:32:17.981 "data_offset": 256, 00:32:17.981 "data_size": 7936 00:32:17.981 }, 00:32:17.981 { 00:32:17.981 "name": "BaseBdev2", 00:32:17.981 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:17.981 "is_configured": true, 00:32:17.981 "data_offset": 256, 00:32:17.981 "data_size": 7936 00:32:17.981 } 00:32:17.981 ] 00:32:17.981 }' 00:32:17.981 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:17.981 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:18.239 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:18.239 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:32:18.497 [2024-07-23 15:26:13.673950] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:18.497 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:32:18.497 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:18.497 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.755 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:32:18.756 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:32:18.756 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:32:18.756 15:26:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:19.014 [2024-07-23 15:26:14.193808] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:19.014 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:19.014 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:19.014 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:19.014 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:19.014 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:19.014 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:19.014 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:19.014 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:19.014 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:19.014 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:19.014 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.014 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:19.273 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:19.273 "name": "raid_bdev1", 00:32:19.273 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:19.273 "strip_size_kb": 0, 00:32:19.273 "state": "online", 00:32:19.273 "raid_level": "raid1", 00:32:19.273 "superblock": true, 00:32:19.273 "num_base_bdevs": 2, 00:32:19.273 "num_base_bdevs_discovered": 1, 00:32:19.273 "num_base_bdevs_operational": 1, 00:32:19.273 "base_bdevs_list": [ 00:32:19.273 { 00:32:19.273 "name": null, 00:32:19.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.273 "is_configured": false, 00:32:19.273 "data_offset": 256, 00:32:19.273 "data_size": 7936 00:32:19.273 }, 00:32:19.273 { 00:32:19.273 "name": "BaseBdev2", 00:32:19.273 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:19.273 "is_configured": true, 00:32:19.273 "data_offset": 256, 00:32:19.273 "data_size": 7936 00:32:19.273 } 00:32:19.273 ] 00:32:19.273 }' 00:32:19.273 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:19.273 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:19.532 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:19.532 [2024-07-23 15:26:14.965992] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:19.791 [2024-07-23 15:26:14.969309] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002120 00:32:19.791 [2024-07-23 15:26:14.971586] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:19.791 15:26:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # 
sleep 1 00:32:20.725 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:20.725 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:20.725 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:20.725 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:20.725 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:20.725 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.725 15:26:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.983 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:20.983 "name": "raid_bdev1", 00:32:20.983 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:20.983 "strip_size_kb": 0, 00:32:20.983 "state": "online", 00:32:20.983 "raid_level": "raid1", 00:32:20.983 "superblock": true, 00:32:20.983 "num_base_bdevs": 2, 00:32:20.983 "num_base_bdevs_discovered": 2, 00:32:20.983 "num_base_bdevs_operational": 2, 00:32:20.983 "process": { 00:32:20.983 "type": "rebuild", 00:32:20.983 "target": "spare", 00:32:20.983 "progress": { 00:32:20.983 "blocks": 3072, 00:32:20.983 "percent": 38 00:32:20.983 } 00:32:20.983 }, 00:32:20.983 "base_bdevs_list": [ 00:32:20.983 { 00:32:20.983 "name": "spare", 00:32:20.983 "uuid": "2db3306d-d2d0-5692-bbde-c2c4122560fe", 00:32:20.983 "is_configured": true, 00:32:20.983 "data_offset": 256, 00:32:20.983 "data_size": 7936 00:32:20.983 }, 00:32:20.983 { 00:32:20.983 "name": "BaseBdev2", 00:32:20.983 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:20.983 "is_configured": true, 00:32:20.983 "data_offset": 256, 00:32:20.983 "data_size": 7936 00:32:20.983 } 00:32:20.983 ] 00:32:20.983 }' 00:32:20.983 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:20.983 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:20.983 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:20.983 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:20.983 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:21.264 [2024-07-23 15:26:16.509759] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:21.264 [2024-07-23 15:26:16.583105] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:21.264 [2024-07-23 15:26:16.583179] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:21.264 [2024-07-23 15:26:16.583201] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:21.264 [2024-07-23 15:26:16.583210] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:21.264 15:26:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:21.264 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:21.264 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:21.264 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:21.264 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:21.264 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:21.264 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:21.264 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:21.264 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:21.264 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:21.264 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:21.264 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:21.522 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:21.522 "name": "raid_bdev1", 00:32:21.522 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:21.522 "strip_size_kb": 0, 00:32:21.522 "state": "online", 00:32:21.522 "raid_level": "raid1", 00:32:21.522 "superblock": true, 00:32:21.522 "num_base_bdevs": 2, 00:32:21.522 "num_base_bdevs_discovered": 1, 00:32:21.522 "num_base_bdevs_operational": 1, 00:32:21.522 "base_bdevs_list": [ 00:32:21.522 { 00:32:21.522 "name": null, 00:32:21.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.522 "is_configured": false, 00:32:21.522 "data_offset": 256, 00:32:21.522 "data_size": 7936 00:32:21.522 }, 00:32:21.522 { 00:32:21.522 "name": "BaseBdev2", 00:32:21.522 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:21.522 "is_configured": true, 00:32:21.522 "data_offset": 256, 00:32:21.523 "data_size": 7936 00:32:21.523 } 00:32:21.523 ] 00:32:21.523 }' 00:32:21.523 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:21.523 15:26:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:21.781 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:21.781 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:21.781 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:21.781 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:21.781 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:21.781 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:21.781 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:22.039 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:22.039 "name": "raid_bdev1", 00:32:22.039 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:22.039 "strip_size_kb": 0, 00:32:22.039 "state": "online", 00:32:22.039 "raid_level": "raid1", 00:32:22.039 "superblock": true, 00:32:22.039 "num_base_bdevs": 2, 00:32:22.039 "num_base_bdevs_discovered": 1, 00:32:22.039 "num_base_bdevs_operational": 1, 00:32:22.039 "base_bdevs_list": [ 00:32:22.039 { 00:32:22.039 "name": null, 00:32:22.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.039 "is_configured": false, 00:32:22.039 "data_offset": 256, 00:32:22.039 "data_size": 7936 00:32:22.039 }, 00:32:22.039 { 00:32:22.039 "name": "BaseBdev2", 00:32:22.039 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:22.039 "is_configured": true, 00:32:22.039 "data_offset": 256, 00:32:22.039 "data_size": 7936 00:32:22.039 } 00:32:22.039 ] 00:32:22.039 }' 00:32:22.039 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:22.039 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:22.039 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:22.039 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:22.039 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:22.297 [2024-07-23 15:26:17.547492] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:22.297 [2024-07-23 15:26:17.550869] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000021f0 00:32:22.297 [2024-07-23 15:26:17.553238] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:22.297 15:26:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:23.230 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:23.230 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:23.230 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:23.230 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:23.230 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:23.230 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.230 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:23.488 "name": "raid_bdev1", 00:32:23.488 "uuid": 
"dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:23.488 "strip_size_kb": 0, 00:32:23.488 "state": "online", 00:32:23.488 "raid_level": "raid1", 00:32:23.488 "superblock": true, 00:32:23.488 "num_base_bdevs": 2, 00:32:23.488 "num_base_bdevs_discovered": 2, 00:32:23.488 "num_base_bdevs_operational": 2, 00:32:23.488 "process": { 00:32:23.488 "type": "rebuild", 00:32:23.488 "target": "spare", 00:32:23.488 "progress": { 00:32:23.488 "blocks": 3072, 00:32:23.488 "percent": 38 00:32:23.488 } 00:32:23.488 }, 00:32:23.488 "base_bdevs_list": [ 00:32:23.488 { 00:32:23.488 "name": "spare", 00:32:23.488 "uuid": "2db3306d-d2d0-5692-bbde-c2c4122560fe", 00:32:23.488 "is_configured": true, 00:32:23.488 "data_offset": 256, 00:32:23.488 "data_size": 7936 00:32:23.488 }, 00:32:23.488 { 00:32:23.488 "name": "BaseBdev2", 00:32:23.488 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:23.488 "is_configured": true, 00:32:23.488 "data_offset": 256, 00:32:23.488 "data_size": 7936 00:32:23.488 } 00:32:23.488 ] 00:32:23.488 }' 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:32:23.488 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1098 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.488 15:26:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:23.746 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:32:23.746 "name": "raid_bdev1", 00:32:23.746 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:23.746 "strip_size_kb": 0, 00:32:23.746 "state": "online", 00:32:23.746 "raid_level": "raid1", 00:32:23.746 "superblock": true, 00:32:23.746 "num_base_bdevs": 2, 00:32:23.746 "num_base_bdevs_discovered": 2, 00:32:23.746 "num_base_bdevs_operational": 2, 00:32:23.746 "process": { 00:32:23.746 "type": "rebuild", 00:32:23.746 "target": "spare", 00:32:23.746 "progress": { 00:32:23.746 "blocks": 3584, 00:32:23.746 "percent": 45 00:32:23.746 } 00:32:23.746 }, 00:32:23.746 "base_bdevs_list": [ 00:32:23.746 { 00:32:23.746 "name": "spare", 00:32:23.746 "uuid": "2db3306d-d2d0-5692-bbde-c2c4122560fe", 00:32:23.746 "is_configured": true, 00:32:23.746 "data_offset": 256, 00:32:23.746 "data_size": 7936 00:32:23.746 }, 00:32:23.746 { 00:32:23.746 "name": "BaseBdev2", 00:32:23.746 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:23.746 "is_configured": true, 00:32:23.746 "data_offset": 256, 00:32:23.746 "data_size": 7936 00:32:23.746 } 00:32:23.746 ] 00:32:23.746 }' 00:32:23.746 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:23.746 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:23.746 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:23.746 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:23.746 15:26:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:24.678 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:24.678 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:24.678 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:24.678 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:24.678 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:24.678 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:24.678 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:24.678 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.936 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:24.936 "name": "raid_bdev1", 00:32:24.936 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:24.936 "strip_size_kb": 0, 00:32:24.936 "state": "online", 00:32:24.936 "raid_level": "raid1", 00:32:24.936 "superblock": true, 00:32:24.936 "num_base_bdevs": 2, 00:32:24.936 "num_base_bdevs_discovered": 2, 00:32:24.936 "num_base_bdevs_operational": 2, 00:32:24.936 "process": { 00:32:24.936 "type": "rebuild", 00:32:24.936 "target": "spare", 00:32:24.936 "progress": { 00:32:24.936 "blocks": 6656, 00:32:24.936 "percent": 83 00:32:24.936 } 00:32:24.936 }, 00:32:24.936 "base_bdevs_list": [ 00:32:24.936 { 00:32:24.936 "name": 
"spare", 00:32:24.936 "uuid": "2db3306d-d2d0-5692-bbde-c2c4122560fe", 00:32:24.936 "is_configured": true, 00:32:24.936 "data_offset": 256, 00:32:24.936 "data_size": 7936 00:32:24.937 }, 00:32:24.937 { 00:32:24.937 "name": "BaseBdev2", 00:32:24.937 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:24.937 "is_configured": true, 00:32:24.937 "data_offset": 256, 00:32:24.937 "data_size": 7936 00:32:24.937 } 00:32:24.937 ] 00:32:24.937 }' 00:32:24.937 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:24.937 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:24.937 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:24.937 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:24.937 15:26:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:25.503 [2024-07-23 15:26:20.672428] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:25.503 [2024-07-23 15:26:20.672518] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:25.503 [2024-07-23 15:26:20.672670] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:26.070 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:26.070 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:26.070 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:26.070 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:26.070 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:26.070 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:26.070 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.070 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:26.328 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:26.328 "name": "raid_bdev1", 00:32:26.328 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:26.328 "strip_size_kb": 0, 00:32:26.328 "state": "online", 00:32:26.328 "raid_level": "raid1", 00:32:26.328 "superblock": true, 00:32:26.328 "num_base_bdevs": 2, 00:32:26.328 "num_base_bdevs_discovered": 2, 00:32:26.328 "num_base_bdevs_operational": 2, 00:32:26.328 "base_bdevs_list": [ 00:32:26.328 { 00:32:26.328 "name": "spare", 00:32:26.328 "uuid": "2db3306d-d2d0-5692-bbde-c2c4122560fe", 00:32:26.328 "is_configured": true, 00:32:26.328 "data_offset": 256, 00:32:26.328 "data_size": 7936 00:32:26.328 }, 00:32:26.328 { 00:32:26.328 "name": "BaseBdev2", 00:32:26.328 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:26.328 "is_configured": true, 00:32:26.328 "data_offset": 256, 00:32:26.328 "data_size": 7936 00:32:26.328 } 00:32:26.328 ] 00:32:26.328 }' 00:32:26.328 15:26:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:26.328 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:26.328 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:26.328 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:26.328 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:32:26.328 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:26.329 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:26.329 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:26.329 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:26.329 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:26.329 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:26.329 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.587 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:26.587 "name": "raid_bdev1", 00:32:26.587 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:26.587 "strip_size_kb": 0, 00:32:26.587 "state": "online", 00:32:26.587 "raid_level": "raid1", 00:32:26.587 "superblock": true, 00:32:26.587 "num_base_bdevs": 2, 00:32:26.587 "num_base_bdevs_discovered": 2, 00:32:26.587 "num_base_bdevs_operational": 2, 00:32:26.587 "base_bdevs_list": [ 00:32:26.587 { 00:32:26.587 "name": "spare", 00:32:26.587 "uuid": "2db3306d-d2d0-5692-bbde-c2c4122560fe", 00:32:26.587 "is_configured": true, 00:32:26.587 "data_offset": 256, 00:32:26.587 "data_size": 7936 00:32:26.587 }, 00:32:26.587 { 00:32:26.587 "name": "BaseBdev2", 00:32:26.587 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:26.587 "is_configured": true, 00:32:26.587 "data_offset": 256, 00:32:26.587 "data_size": 7936 00:32:26.587 } 00:32:26.587 ] 00:32:26.587 }' 00:32:26.587 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:26.587 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:26.587 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:26.588 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:26.588 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:26.588 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:26.588 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:26.588 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:32:26.588 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:26.588 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:26.588 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:26.588 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:26.588 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:26.588 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:26.588 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.588 15:26:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:26.846 15:26:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:26.846 "name": "raid_bdev1", 00:32:26.846 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:26.846 "strip_size_kb": 0, 00:32:26.846 "state": "online", 00:32:26.846 "raid_level": "raid1", 00:32:26.846 "superblock": true, 00:32:26.846 "num_base_bdevs": 2, 00:32:26.846 "num_base_bdevs_discovered": 2, 00:32:26.846 "num_base_bdevs_operational": 2, 00:32:26.846 "base_bdevs_list": [ 00:32:26.846 { 00:32:26.846 "name": "spare", 00:32:26.846 "uuid": "2db3306d-d2d0-5692-bbde-c2c4122560fe", 00:32:26.846 "is_configured": true, 00:32:26.846 "data_offset": 256, 00:32:26.846 "data_size": 7936 00:32:26.846 }, 00:32:26.846 { 00:32:26.846 "name": "BaseBdev2", 00:32:26.846 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:26.846 "is_configured": true, 00:32:26.846 "data_offset": 256, 00:32:26.846 "data_size": 7936 00:32:26.846 } 00:32:26.846 ] 00:32:26.846 }' 00:32:26.846 15:26:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:26.847 15:26:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:27.105 15:26:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:27.364 [2024-07-23 15:26:22.569276] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:27.364 [2024-07-23 15:26:22.569327] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:27.364 [2024-07-23 15:26:22.569418] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:27.364 [2024-07-23 15:26:22.569497] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:27.364 [2024-07-23 15:26:22.569513] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007e80 name raid_bdev1, state offline 00:32:27.364 15:26:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.364 15:26:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:32:27.364 15:26:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:32:27.364 15:26:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:32:27.364 15:26:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:32:27.364 15:26:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:27.622 15:26:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:27.882 [2024-07-23 15:26:23.161373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:27.882 [2024-07-23 15:26:23.161470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:27.882 [2024-07-23 15:26:23.161502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80 00:32:27.882 [2024-07-23 15:26:23.161517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:27.882 [2024-07-23 15:26:23.163724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:27.882 [2024-07-23 15:26:23.163774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:27.882 [2024-07-23 15:26:23.163857] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:27.882 [2024-07-23 15:26:23.163944] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:27.882 [2024-07-23 15:26:23.164052] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:27.882 spare 00:32:27.882 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:27.882 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:27.882 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:27.882 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:27.882 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:27.882 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:27.882 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:27.882 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:27.882 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:27.882 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:27.882 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.882 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:27.882 [2024-07-23 15:26:23.264155] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009080 00:32:27.882 [2024-07-23 
15:26:23.264375] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:27.882 [2024-07-23 15:26:23.264564] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000022c0 00:32:27.882 [2024-07-23 15:26:23.264911] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009080 00:32:27.882 [2024-07-23 15:26:23.264946] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009080 00:32:27.882 [2024-07-23 15:26:23.265015] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:28.139 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:28.139 "name": "raid_bdev1", 00:32:28.139 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:28.139 "strip_size_kb": 0, 00:32:28.139 "state": "online", 00:32:28.139 "raid_level": "raid1", 00:32:28.139 "superblock": true, 00:32:28.139 "num_base_bdevs": 2, 00:32:28.139 "num_base_bdevs_discovered": 2, 00:32:28.139 "num_base_bdevs_operational": 2, 00:32:28.139 "base_bdevs_list": [ 00:32:28.139 { 00:32:28.139 "name": "spare", 00:32:28.139 "uuid": "2db3306d-d2d0-5692-bbde-c2c4122560fe", 00:32:28.139 "is_configured": true, 00:32:28.139 "data_offset": 256, 00:32:28.139 "data_size": 7936 00:32:28.139 }, 00:32:28.139 { 00:32:28.139 "name": "BaseBdev2", 00:32:28.139 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:28.139 "is_configured": true, 00:32:28.139 "data_offset": 256, 00:32:28.139 "data_size": 7936 00:32:28.139 } 00:32:28.139 ] 00:32:28.139 }' 00:32:28.139 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:28.139 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:28.397 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:28.397 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:28.397 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:28.397 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:28.397 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:28.397 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:28.397 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.655 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:28.655 "name": "raid_bdev1", 00:32:28.655 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:28.655 "strip_size_kb": 0, 00:32:28.655 "state": "online", 00:32:28.655 "raid_level": "raid1", 00:32:28.655 "superblock": true, 00:32:28.655 "num_base_bdevs": 2, 00:32:28.655 "num_base_bdevs_discovered": 2, 00:32:28.655 "num_base_bdevs_operational": 2, 00:32:28.655 "base_bdevs_list": [ 00:32:28.655 { 00:32:28.655 "name": "spare", 00:32:28.655 "uuid": "2db3306d-d2d0-5692-bbde-c2c4122560fe", 00:32:28.655 "is_configured": true, 00:32:28.655 "data_offset": 256, 00:32:28.655 "data_size": 7936 00:32:28.655 }, 00:32:28.655 { 00:32:28.655 
"name": "BaseBdev2", 00:32:28.655 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:28.655 "is_configured": true, 00:32:28.655 "data_offset": 256, 00:32:28.655 "data_size": 7936 00:32:28.655 } 00:32:28.655 ] 00:32:28.655 }' 00:32:28.655 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:28.655 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:28.655 15:26:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:28.655 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:28.655 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.655 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:28.913 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:32:28.913 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:29.171 [2024-07-23 15:26:24.485758] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:29.171 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:29.171 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:29.171 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:29.171 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:29.171 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:29.171 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:29.171 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:29.171 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:29.171 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:29.171 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:29.171 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:29.171 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:29.430 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:29.430 "name": "raid_bdev1", 00:32:29.430 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:29.430 "strip_size_kb": 0, 00:32:29.430 "state": "online", 00:32:29.430 "raid_level": "raid1", 00:32:29.430 "superblock": true, 00:32:29.430 "num_base_bdevs": 2, 00:32:29.430 "num_base_bdevs_discovered": 1, 00:32:29.430 "num_base_bdevs_operational": 1, 
00:32:29.430 "base_bdevs_list": [ 00:32:29.430 { 00:32:29.430 "name": null, 00:32:29.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:29.430 "is_configured": false, 00:32:29.430 "data_offset": 256, 00:32:29.430 "data_size": 7936 00:32:29.430 }, 00:32:29.430 { 00:32:29.430 "name": "BaseBdev2", 00:32:29.430 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:29.430 "is_configured": true, 00:32:29.430 "data_offset": 256, 00:32:29.430 "data_size": 7936 00:32:29.430 } 00:32:29.430 ] 00:32:29.430 }' 00:32:29.430 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:29.430 15:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:29.699 15:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:29.957 [2024-07-23 15:26:25.161933] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:29.957 [2024-07-23 15:26:25.162341] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:29.957 [2024-07-23 15:26:25.162464] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:29.957 [2024-07-23 15:26:25.162522] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:29.957 [2024-07-23 15:26:25.165501] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002390 00:32:29.957 [2024-07-23 15:26:25.167620] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:29.957 15:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:32:30.891 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:30.891 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:30.891 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:30.891 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:30.891 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:30.891 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:30.891 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:31.149 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:31.149 "name": "raid_bdev1", 00:32:31.149 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:31.149 "strip_size_kb": 0, 00:32:31.149 "state": "online", 00:32:31.149 "raid_level": "raid1", 00:32:31.149 "superblock": true, 00:32:31.149 "num_base_bdevs": 2, 00:32:31.149 "num_base_bdevs_discovered": 2, 00:32:31.149 "num_base_bdevs_operational": 2, 00:32:31.149 "process": { 00:32:31.149 "type": "rebuild", 00:32:31.149 "target": "spare", 00:32:31.149 "progress": { 00:32:31.149 "blocks": 2816, 00:32:31.149 "percent": 35 00:32:31.149 } 00:32:31.149 }, 00:32:31.149 
"base_bdevs_list": [ 00:32:31.149 { 00:32:31.149 "name": "spare", 00:32:31.149 "uuid": "2db3306d-d2d0-5692-bbde-c2c4122560fe", 00:32:31.149 "is_configured": true, 00:32:31.149 "data_offset": 256, 00:32:31.149 "data_size": 7936 00:32:31.149 }, 00:32:31.149 { 00:32:31.149 "name": "BaseBdev2", 00:32:31.149 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:31.149 "is_configured": true, 00:32:31.149 "data_offset": 256, 00:32:31.149 "data_size": 7936 00:32:31.149 } 00:32:31.149 ] 00:32:31.149 }' 00:32:31.149 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:31.149 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:31.149 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:31.149 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:31.150 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:31.408 [2024-07-23 15:26:26.608632] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:31.408 [2024-07-23 15:26:26.676448] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:31.408 [2024-07-23 15:26:26.676553] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:31.408 [2024-07-23 15:26:26.676576] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:31.408 [2024-07-23 15:26:26.676585] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:31.408 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:31.408 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:31.408 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:31.408 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:31.408 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:31.408 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:31.408 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:31.408 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:31.408 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:31.408 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:31.408 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:31.408 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:31.667 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:32:31.667 "name": "raid_bdev1", 00:32:31.667 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:31.667 "strip_size_kb": 0, 00:32:31.667 "state": "online", 00:32:31.667 "raid_level": "raid1", 00:32:31.667 "superblock": true, 00:32:31.667 "num_base_bdevs": 2, 00:32:31.667 "num_base_bdevs_discovered": 1, 00:32:31.667 "num_base_bdevs_operational": 1, 00:32:31.667 "base_bdevs_list": [ 00:32:31.667 { 00:32:31.667 "name": null, 00:32:31.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.667 "is_configured": false, 00:32:31.667 "data_offset": 256, 00:32:31.667 "data_size": 7936 00:32:31.667 }, 00:32:31.667 { 00:32:31.667 "name": "BaseBdev2", 00:32:31.667 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:31.667 "is_configured": true, 00:32:31.667 "data_offset": 256, 00:32:31.667 "data_size": 7936 00:32:31.667 } 00:32:31.667 ] 00:32:31.667 }' 00:32:31.667 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:31.667 15:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:31.926 15:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:32.185 [2024-07-23 15:26:27.396877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:32.186 [2024-07-23 15:26:27.397111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:32.186 [2024-07-23 15:26:27.397182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009680 00:32:32.186 [2024-07-23 15:26:27.397275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:32.186 [2024-07-23 15:26:27.397497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:32.186 [2024-07-23 15:26:27.397667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:32.186 [2024-07-23 15:26:27.397774] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:32.186 [2024-07-23 15:26:27.398033] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:32.186 [2024-07-23 15:26:27.398109] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:32:32.186 [2024-07-23 15:26:27.398237] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:32.186 spare 00:32:32.186 [2024-07-23 15:26:27.401230] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000002460 00:32:32.186 [2024-07-23 15:26:27.403428] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:32.186 15:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:32:33.123 15:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:33.123 15:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:33.123 15:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:33.123 15:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:33.123 15:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:33.123 15:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:33.123 15:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:33.381 15:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:33.382 "name": "raid_bdev1", 00:32:33.382 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:33.382 "strip_size_kb": 0, 00:32:33.382 "state": "online", 00:32:33.382 "raid_level": "raid1", 00:32:33.382 "superblock": true, 00:32:33.382 "num_base_bdevs": 2, 00:32:33.382 "num_base_bdevs_discovered": 2, 00:32:33.382 "num_base_bdevs_operational": 2, 00:32:33.382 "process": { 00:32:33.382 "type": "rebuild", 00:32:33.382 "target": "spare", 00:32:33.382 "progress": { 00:32:33.382 "blocks": 3072, 00:32:33.382 "percent": 38 00:32:33.382 } 00:32:33.382 }, 00:32:33.382 "base_bdevs_list": [ 00:32:33.382 { 00:32:33.382 "name": "spare", 00:32:33.382 "uuid": "2db3306d-d2d0-5692-bbde-c2c4122560fe", 00:32:33.382 "is_configured": true, 00:32:33.382 "data_offset": 256, 00:32:33.382 "data_size": 7936 00:32:33.382 }, 00:32:33.382 { 00:32:33.382 "name": "BaseBdev2", 00:32:33.382 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:33.382 "is_configured": true, 00:32:33.382 "data_offset": 256, 00:32:33.382 "data_size": 7936 00:32:33.382 } 00:32:33.382 ] 00:32:33.382 }' 00:32:33.382 15:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:33.382 15:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:33.382 15:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:33.382 15:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:33.382 15:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:33.641 [2024-07-23 15:26:28.918571] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:33.641 [2024-07-23 15:26:29.012834] 
bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:33.641 [2024-07-23 15:26:29.012905] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:33.641 [2024-07-23 15:26:29.012922] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:33.641 [2024-07-23 15:26:29.012934] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:33.641 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:33.641 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:33.641 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:33.641 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:33.641 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:33.641 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:33.641 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:33.641 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:33.641 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:33.641 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:33.641 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:33.641 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:33.900 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:33.900 "name": "raid_bdev1", 00:32:33.900 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:33.900 "strip_size_kb": 0, 00:32:33.900 "state": "online", 00:32:33.900 "raid_level": "raid1", 00:32:33.900 "superblock": true, 00:32:33.900 "num_base_bdevs": 2, 00:32:33.900 "num_base_bdevs_discovered": 1, 00:32:33.900 "num_base_bdevs_operational": 1, 00:32:33.900 "base_bdevs_list": [ 00:32:33.900 { 00:32:33.900 "name": null, 00:32:33.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.900 "is_configured": false, 00:32:33.900 "data_offset": 256, 00:32:33.900 "data_size": 7936 00:32:33.900 }, 00:32:33.900 { 00:32:33.900 "name": "BaseBdev2", 00:32:33.900 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:33.900 "is_configured": true, 00:32:33.900 "data_offset": 256, 00:32:33.900 "data_size": 7936 00:32:33.900 } 00:32:33.900 ] 00:32:33.900 }' 00:32:33.900 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:33.900 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:34.468 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:34.468 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
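The verify_raid_bdev_state call traced here reduces to pulling the raid bdev's JSON out of bdev_raid_get_bdevs and comparing a handful of fields against the expected degraded values. A hedged sketch of that check follows; the field names come from the logged JSON, while the helper name is chosen only for illustration and is not an SPDK function.
# Illustrative re-statement of the traced state check (not captured log output).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
check_degraded_raid1() {
    local info
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<<"$info") == online ]] &&
    [[ $(jq -r '.raid_level' <<<"$info") == raid1 ]] &&
    [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq 1 ]] &&
    [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") -eq 1 ]]
}
check_degraded_raid1 || echo "raid_bdev1 is not in the expected degraded state" >&2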
00:32:34.468 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:34.468 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:34.468 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:34.468 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.468 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.468 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:34.468 "name": "raid_bdev1", 00:32:34.468 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:34.468 "strip_size_kb": 0, 00:32:34.468 "state": "online", 00:32:34.468 "raid_level": "raid1", 00:32:34.468 "superblock": true, 00:32:34.468 "num_base_bdevs": 2, 00:32:34.468 "num_base_bdevs_discovered": 1, 00:32:34.468 "num_base_bdevs_operational": 1, 00:32:34.468 "base_bdevs_list": [ 00:32:34.468 { 00:32:34.468 "name": null, 00:32:34.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.468 "is_configured": false, 00:32:34.468 "data_offset": 256, 00:32:34.468 "data_size": 7936 00:32:34.468 }, 00:32:34.468 { 00:32:34.468 "name": "BaseBdev2", 00:32:34.468 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:34.468 "is_configured": true, 00:32:34.468 "data_offset": 256, 00:32:34.468 "data_size": 7936 00:32:34.468 } 00:32:34.468 ] 00:32:34.468 }' 00:32:34.468 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:34.468 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:34.468 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:34.468 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:34.468 15:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:34.726 15:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:34.985 [2024-07-23 15:26:30.161040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:34.985 [2024-07-23 15:26:30.161133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.985 [2024-07-23 15:26:30.161164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:32:34.985 [2024-07-23 15:26:30.161179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.985 [2024-07-23 15:26:30.161349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.985 [2024-07-23 15:26:30.161368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:34.985 [2024-07-23 15:26:30.161422] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:34.985 [2024-07-23 15:26:30.161449] bdev_raid.c:3654:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:34.985 [2024-07-23 15:26:30.161459] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:34.985 BaseBdev1 00:32:34.985 15:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:32:35.922 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:35.922 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:35.922 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:35.922 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:35.922 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:35.922 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:35.922 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:35.922 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:35.922 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:35.922 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:35.922 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:35.922 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.181 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:36.181 "name": "raid_bdev1", 00:32:36.181 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:36.181 "strip_size_kb": 0, 00:32:36.181 "state": "online", 00:32:36.181 "raid_level": "raid1", 00:32:36.181 "superblock": true, 00:32:36.181 "num_base_bdevs": 2, 00:32:36.181 "num_base_bdevs_discovered": 1, 00:32:36.181 "num_base_bdevs_operational": 1, 00:32:36.181 "base_bdevs_list": [ 00:32:36.181 { 00:32:36.181 "name": null, 00:32:36.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.181 "is_configured": false, 00:32:36.181 "data_offset": 256, 00:32:36.181 "data_size": 7936 00:32:36.181 }, 00:32:36.181 { 00:32:36.181 "name": "BaseBdev2", 00:32:36.181 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:36.181 "is_configured": true, 00:32:36.181 "data_offset": 256, 00:32:36.181 "data_size": 7936 00:32:36.181 } 00:32:36.181 ] 00:32:36.181 }' 00:32:36.181 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:36.181 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:36.441 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:36.441 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:36.441 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:32:36.441 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:36.441 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:36.441 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.441 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:36.700 "name": "raid_bdev1", 00:32:36.700 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:36.700 "strip_size_kb": 0, 00:32:36.700 "state": "online", 00:32:36.700 "raid_level": "raid1", 00:32:36.700 "superblock": true, 00:32:36.700 "num_base_bdevs": 2, 00:32:36.700 "num_base_bdevs_discovered": 1, 00:32:36.700 "num_base_bdevs_operational": 1, 00:32:36.700 "base_bdevs_list": [ 00:32:36.700 { 00:32:36.700 "name": null, 00:32:36.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.700 "is_configured": false, 00:32:36.700 "data_offset": 256, 00:32:36.700 "data_size": 7936 00:32:36.700 }, 00:32:36.700 { 00:32:36.700 "name": "BaseBdev2", 00:32:36.700 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:36.700 "is_configured": true, 00:32:36.700 "data_offset": 256, 00:32:36.700 "data_size": 7936 00:32:36.700 } 00:32:36.700 ] 00:32:36.700 }' 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:36.700 15:26:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:36.700 15:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:36.958 [2024-07-23 15:26:32.225521] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:36.958 [2024-07-23 15:26:32.225889] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:36.958 [2024-07-23 15:26:32.226042] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:36.958 request: 00:32:36.958 { 00:32:36.958 "base_bdev": "BaseBdev1", 00:32:36.958 "raid_bdev": "raid_bdev1", 00:32:36.958 "method": "bdev_raid_add_base_bdev", 00:32:36.958 "req_id": 1 00:32:36.958 } 00:32:36.958 Got JSON-RPC error response 00:32:36.958 response: 00:32:36.958 { 00:32:36.958 "code": -22, 00:32:36.958 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:36.958 } 00:32:36.958 15:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:32:36.958 15:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:36.958 15:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:36.958 15:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:36.958 15:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:32:37.903 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:37.903 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:37.903 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:37.903 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:37.903 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:37.903 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:37.903 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:37.903 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:37.903 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:37.903 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:37.903 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.903 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:38.191 
15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:38.191 "name": "raid_bdev1", 00:32:38.191 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:38.191 "strip_size_kb": 0, 00:32:38.191 "state": "online", 00:32:38.191 "raid_level": "raid1", 00:32:38.191 "superblock": true, 00:32:38.191 "num_base_bdevs": 2, 00:32:38.191 "num_base_bdevs_discovered": 1, 00:32:38.191 "num_base_bdevs_operational": 1, 00:32:38.191 "base_bdevs_list": [ 00:32:38.191 { 00:32:38.191 "name": null, 00:32:38.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.191 "is_configured": false, 00:32:38.191 "data_offset": 256, 00:32:38.191 "data_size": 7936 00:32:38.191 }, 00:32:38.191 { 00:32:38.191 "name": "BaseBdev2", 00:32:38.191 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:38.191 "is_configured": true, 00:32:38.191 "data_offset": 256, 00:32:38.191 "data_size": 7936 00:32:38.191 } 00:32:38.191 ] 00:32:38.191 }' 00:32:38.191 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:38.191 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:38.450 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:38.450 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:38.450 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:38.450 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:38.450 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:38.450 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:38.450 15:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:38.709 "name": "raid_bdev1", 00:32:38.709 "uuid": "dce2a5b9-0c71-4047-a82e-feb3fd5961c3", 00:32:38.709 "strip_size_kb": 0, 00:32:38.709 "state": "online", 00:32:38.709 "raid_level": "raid1", 00:32:38.709 "superblock": true, 00:32:38.709 "num_base_bdevs": 2, 00:32:38.709 "num_base_bdevs_discovered": 1, 00:32:38.709 "num_base_bdevs_operational": 1, 00:32:38.709 "base_bdevs_list": [ 00:32:38.709 { 00:32:38.709 "name": null, 00:32:38.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.709 "is_configured": false, 00:32:38.709 "data_offset": 256, 00:32:38.709 "data_size": 7936 00:32:38.709 }, 00:32:38.709 { 00:32:38.709 "name": "BaseBdev2", 00:32:38.709 "uuid": "71f4cbed-6ed1-55a9-af6d-394e56636577", 00:32:38.709 "is_configured": true, 00:32:38.709 "data_offset": 256, 00:32:38.709 "data_size": 7936 00:32:38.709 } 00:32:38.709 ] 00:32:38.709 }' 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:38.709 15:26:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 124147 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 124147 ']' 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 124147 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124147 00:32:38.709 killing process with pid 124147 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124147' 00:32:38.709 Received shutdown signal, test time was about 60.000000 seconds 00:32:38.709 00:32:38.709 Latency(us) 00:32:38.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.709 =================================================================================================================== 00:32:38.709 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 124147 00:32:38.709 [2024-07-23 15:26:34.074406] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:38.709 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 124147 00:32:38.709 [2024-07-23 15:26:34.074530] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:38.709 [2024-07-23 15:26:34.074579] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:38.709 [2024-07-23 15:26:34.074597] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009080 name raid_bdev1, state offline 00:32:38.709 [2024-07-23 15:26:34.108776] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:38.968 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:32:38.968 ************************************ 00:32:38.968 END TEST raid_rebuild_test_sb_md_interleaved 00:32:38.968 ************************************ 00:32:38.968 00:32:38.968 real 0m23.841s 00:32:38.968 user 0m35.131s 00:32:38.968 sys 0m3.535s 00:32:38.968 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:38.968 15:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:38.968 15:26:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:38.968 15:26:34 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:32:38.968 15:26:34 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:32:38.968 15:26:34 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 124147 ']' 00:32:38.968 15:26:34 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 124147 00:32:39.227 15:26:34 
bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:32:39.227 ************************************ 00:32:39.227 END TEST bdev_raid 00:32:39.227 ************************************ 00:32:39.227 00:32:39.227 real 18m4.044s 00:32:39.227 user 28m43.868s 00:32:39.227 sys 3m41.757s 00:32:39.227 15:26:34 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:39.227 15:26:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:39.227 15:26:34 -- common/autotest_common.sh@1142 -- # return 0 00:32:39.227 15:26:34 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:32:39.227 15:26:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:39.227 15:26:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:39.227 15:26:34 -- common/autotest_common.sh@10 -- # set +x 00:32:39.227 ************************************ 00:32:39.227 START TEST bdevperf_config 00:32:39.227 ************************************ 00:32:39.227 15:26:34 bdevperf_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:32:39.227 * Looking for test storage... 00:32:39.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:32:39.227 15:26:34 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:32:39.227 15:26:34 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:32:39.227 15:26:34 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:32:39.227 15:26:34 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:32:39.227 15:26:34 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:39.227 15:26:34 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:32:39.227 15:26:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:32:39.227 15:26:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:32:39.227 15:26:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:32:39.227 15:26:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:32:39.227 15:26:34 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:32:39.227 00:32:39.227 15:26:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:32:39.227 15:26:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:32:39.227 15:26:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:32:39.227 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:32:39.228 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:32:39.228 15:26:34 bdevperf_config -- 
bdevperf/common.sh@8 -- # local job_section=job1 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:32:39.228 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:32:39.228 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:32:39.228 15:26:34 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:32:42.514 15:26:37 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-23 15:26:34.697475] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:32:42.514 [2024-07-23 15:26:34.697673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124891 ] 00:32:42.514 Using job config with 4 jobs 00:32:42.514 [2024-07-23 15:26:34.850888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.514 [2024-07-23 15:26:34.914756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.514 cpumask for '\''job0'\'' is too big 00:32:42.514 cpumask for '\''job1'\'' is too big 00:32:42.514 cpumask for '\''job2'\'' is too big 00:32:42.514 cpumask for '\''job3'\'' is too big 00:32:42.514 Running I/O for 2 seconds... 
00:32:42.514 00:32:42.514 Latency(us) 00:32:42.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.514 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:42.514 Malloc0 : 2.02 30077.68 29.37 0.00 0.00 8500.69 2512.21 21720.50 00:32:42.514 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:42.514 Malloc0 : 2.02 30057.20 29.35 0.00 0.00 8479.61 2496.61 19223.89 00:32:42.514 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:42.514 Malloc0 : 2.02 30037.20 29.33 0.00 0.00 8460.68 2481.01 16602.45 00:32:42.514 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:42.514 Malloc0 : 2.02 30112.10 29.41 0.00 0.00 8415.34 717.78 14105.84 00:32:42.514 =================================================================================================================== 00:32:42.514 Total : 120284.18 117.47 0.00 0.00 8464.03 717.78 21720.50' 00:32:42.514 15:26:37 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-23 15:26:34.697475] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:32:42.514 [2024-07-23 15:26:34.697673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124891 ] 00:32:42.514 Using job config with 4 jobs 00:32:42.514 [2024-07-23 15:26:34.850888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.514 [2024-07-23 15:26:34.914756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.514 cpumask for '\''job0'\'' is too big 00:32:42.514 cpumask for '\''job1'\'' is too big 00:32:42.514 cpumask for '\''job2'\'' is too big 00:32:42.514 cpumask for '\''job3'\'' is too big 00:32:42.514 Running I/O for 2 seconds... 00:32:42.514 00:32:42.514 Latency(us) 00:32:42.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.514 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:42.514 Malloc0 : 2.02 30077.68 29.37 0.00 0.00 8500.69 2512.21 21720.50 00:32:42.514 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:42.514 Malloc0 : 2.02 30057.20 29.35 0.00 0.00 8479.61 2496.61 19223.89 00:32:42.514 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:42.514 Malloc0 : 2.02 30037.20 29.33 0.00 0.00 8460.68 2481.01 16602.45 00:32:42.514 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:42.514 Malloc0 : 2.02 30112.10 29.41 0.00 0.00 8415.34 717.78 14105.84 00:32:42.514 =================================================================================================================== 00:32:42.514 Total : 120284.18 117.47 0.00 0.00 8464.03 717.78 21720.50' 00:32:42.514 15:26:37 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-23 15:26:34.697475] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:32:42.514 [2024-07-23 15:26:34.697673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124891 ] 00:32:42.514 Using job config with 4 jobs 00:32:42.514 [2024-07-23 15:26:34.850888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.514 [2024-07-23 15:26:34.914756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.514 cpumask for '\''job0'\'' is too big 00:32:42.514 cpumask for '\''job1'\'' is too big 00:32:42.514 cpumask for '\''job2'\'' is too big 00:32:42.514 cpumask for '\''job3'\'' is too big 00:32:42.514 Running I/O for 2 seconds... 00:32:42.514 00:32:42.515 Latency(us) 00:32:42.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.515 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:42.515 Malloc0 : 2.02 30077.68 29.37 0.00 0.00 8500.69 2512.21 21720.50 00:32:42.515 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:42.515 Malloc0 : 2.02 30057.20 29.35 0.00 0.00 8479.61 2496.61 19223.89 00:32:42.515 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:42.515 Malloc0 : 2.02 30037.20 29.33 0.00 0.00 8460.68 2481.01 16602.45 00:32:42.515 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:42.515 Malloc0 : 2.02 30112.10 29.41 0.00 0.00 8415.34 717.78 14105.84 00:32:42.515 =================================================================================================================== 00:32:42.515 Total : 120284.18 117.47 0.00 0.00 8464.03 717.78 21720.50' 00:32:42.515 15:26:37 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:32:42.515 15:26:37 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:32:42.515 15:26:37 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:32:42.515 15:26:37 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:32:42.515 [2024-07-23 15:26:37.435589] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:32:42.515 [2024-07-23 15:26:37.435739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124927 ] 00:32:42.515 [2024-07-23 15:26:37.575031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.515 [2024-07-23 15:26:37.630564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.515 cpumask for 'job0' is too big 00:32:42.515 cpumask for 'job1' is too big 00:32:42.515 cpumask for 'job2' is too big 00:32:42.515 cpumask for 'job3' is too big 00:32:45.049 15:26:40 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:32:45.049 Running I/O for 2 seconds... 
00:32:45.049 00:32:45.049 Latency(us) 00:32:45.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.049 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:45.049 Malloc0 : 2.02 31230.95 30.50 0.00 0.00 8189.98 1552.58 13668.94 00:32:45.049 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:45.049 Malloc0 : 2.02 31209.80 30.48 0.00 0.00 8178.38 1536.98 11983.73 00:32:45.049 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:45.049 Malloc0 : 2.02 31188.79 30.46 0.00 0.00 8168.79 1521.37 10173.68 00:32:45.049 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:32:45.050 Malloc0 : 2.02 31168.03 30.44 0.00 0.00 8159.86 1536.98 8862.96 00:32:45.050 =================================================================================================================== 00:32:45.050 Total : 124797.57 121.87 0.00 0.00 8174.25 1521.37 13668.94' 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:32:45.050 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:32:45.050 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:32:45.050 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:32:45.050 15:26:40 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
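Each of these bdevperf runs is validated only by scraping the 'Using job config with N jobs' banner out of the captured output. A minimal sketch of that check is below, reusing the binary path, flags, and grep patterns visible in the trace; the expected count of 3 matches the run invoked just above, and the snippet re-runs bdevperf rather than reusing the already captured variable.
# Sketch of the job-count check (illustrative, not captured log output).
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
testdir=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf
out=$("$bdevperf" -t 2 --json "$testdir/conf.json" -j "$testdir/test.conf" 2>&1)
njobs=$(grep -oE 'Using job config with [0-9]+ jobs' <<<"$out" | grep -oE '[0-9]+')
[[ $njobs == 3 ]]     # job0/job1/job2 from the test.conf written just before this run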
00:32:47.581 15:26:42 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-23 15:26:40.133572] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:32:47.581 [2024-07-23 15:26:40.133721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124961 ] 00:32:47.581 Using job config with 3 jobs 00:32:47.581 [2024-07-23 15:26:40.272631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.581 [2024-07-23 15:26:40.328725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.581 cpumask for '\''job0'\'' is too big 00:32:47.581 cpumask for '\''job1'\'' is too big 00:32:47.581 cpumask for '\''job2'\'' is too big 00:32:47.581 Running I/O for 2 seconds... 00:32:47.581 00:32:47.581 Latency(us) 00:32:47.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.581 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:32:47.581 Malloc0 : 2.01 42302.05 41.31 0.00 0.00 6045.21 1575.98 9549.53 00:32:47.581 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:32:47.581 Malloc0 : 2.01 42273.83 41.28 0.00 0.00 6038.13 1497.97 7926.74 00:32:47.581 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:32:47.581 Malloc0 : 2.01 42329.60 41.34 0.00 0.00 6019.02 702.17 6553.60 00:32:47.581 =================================================================================================================== 00:32:47.581 Total : 126905.47 123.93 0.00 0.00 6034.10 702.17 9549.53' 00:32:47.581 15:26:42 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-23 15:26:40.133572] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:32:47.581 [2024-07-23 15:26:40.133721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124961 ] 00:32:47.581 Using job config with 3 jobs 00:32:47.581 [2024-07-23 15:26:40.272631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.581 [2024-07-23 15:26:40.328725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.581 cpumask for '\''job0'\'' is too big 00:32:47.581 cpumask for '\''job1'\'' is too big 00:32:47.581 cpumask for '\''job2'\'' is too big 00:32:47.581 Running I/O for 2 seconds... 
00:32:47.581 00:32:47.581 Latency(us) 00:32:47.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.581 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:32:47.582 Malloc0 : 2.01 42302.05 41.31 0.00 0.00 6045.21 1575.98 9549.53 00:32:47.582 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:32:47.582 Malloc0 : 2.01 42273.83 41.28 0.00 0.00 6038.13 1497.97 7926.74 00:32:47.582 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:32:47.582 Malloc0 : 2.01 42329.60 41.34 0.00 0.00 6019.02 702.17 6553.60 00:32:47.582 =================================================================================================================== 00:32:47.582 Total : 126905.47 123.93 0.00 0.00 6034.10 702.17 9549.53' 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-23 15:26:40.133572] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:32:47.582 [2024-07-23 15:26:40.133721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124961 ] 00:32:47.582 Using job config with 3 jobs 00:32:47.582 [2024-07-23 15:26:40.272631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.582 [2024-07-23 15:26:40.328725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.582 cpumask for '\''job0'\'' is too big 00:32:47.582 cpumask for '\''job1'\'' is too big 00:32:47.582 cpumask for '\''job2'\'' is too big 00:32:47.582 Running I/O for 2 seconds... 00:32:47.582 00:32:47.582 Latency(us) 00:32:47.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.582 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:32:47.582 Malloc0 : 2.01 42302.05 41.31 0.00 0.00 6045.21 1575.98 9549.53 00:32:47.582 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:32:47.582 Malloc0 : 2.01 42273.83 41.28 0.00 0.00 6038.13 1497.97 7926.74 00:32:47.582 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:32:47.582 Malloc0 : 2.01 42329.60 41.34 0.00 0.00 6019.02 702.17 6553.60 00:32:47.582 =================================================================================================================== 00:32:47.582 Total : 126905.47 123.93 0.00 0.00 6034.10 702.17 9549.53' 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:32:47.582 
15:26:42 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:32:47.582 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:32:47.582 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:32:47.582 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:32:47.582 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:32:47.582 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:32:47.582 15:26:42 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:32:50.125 15:26:45 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-23 15:26:42.833672] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:32:50.125 [2024-07-23 15:26:42.833823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125004 ] 00:32:50.125 Using job config with 4 jobs 00:32:50.125 [2024-07-23 15:26:42.971905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.125 [2024-07-23 15:26:43.027771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.125 cpumask for '\''job0'\'' is too big 00:32:50.125 cpumask for '\''job1'\'' is too big 00:32:50.125 cpumask for '\''job2'\'' is too big 00:32:50.125 cpumask for '\''job3'\'' is too big 00:32:50.125 Running I/O for 2 seconds... 00:32:50.125 00:32:50.125 Latency(us) 00:32:50.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:50.125 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc0 : 2.02 15480.12 15.12 0.00 0.00 16524.48 3354.82 27337.87 00:32:50.125 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc1 : 2.03 15483.99 15.12 0.00 0.00 16503.69 3963.37 27213.04 00:32:50.125 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc0 : 2.03 15473.76 15.11 0.00 0.00 16464.71 3292.40 23717.79 00:32:50.125 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc1 : 2.04 15463.08 15.10 0.00 0.00 16463.17 3854.14 23592.96 00:32:50.125 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc0 : 2.04 15452.77 15.09 0.00 0.00 16426.97 3292.40 20222.54 00:32:50.125 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc1 : 2.04 15442.13 15.08 0.00 0.00 16423.04 3807.33 20097.71 00:32:50.125 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc0 : 2.04 15431.92 15.07 0.00 0.00 16390.85 3167.57 17351.44 00:32:50.125 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc1 : 2.04 15421.23 15.06 0.00 0.00 16384.03 3932.16 17351.44 00:32:50.125 =================================================================================================================== 00:32:50.125 Total : 123649.01 120.75 0.00 0.00 16447.54 3167.57 27337.87' 00:32:50.125 15:26:45 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-23 15:26:42.833672] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:32:50.125 [2024-07-23 15:26:42.833823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125004 ] 00:32:50.125 Using job config with 4 jobs 00:32:50.125 [2024-07-23 15:26:42.971905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.125 [2024-07-23 15:26:43.027771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.125 cpumask for '\''job0'\'' is too big 00:32:50.125 cpumask for '\''job1'\'' is too big 00:32:50.125 cpumask for '\''job2'\'' is too big 00:32:50.125 cpumask for '\''job3'\'' is too big 00:32:50.125 Running I/O for 2 seconds... 00:32:50.125 00:32:50.125 Latency(us) 00:32:50.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:50.125 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc0 : 2.02 15480.12 15.12 0.00 0.00 16524.48 3354.82 27337.87 00:32:50.125 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc1 : 2.03 15483.99 15.12 0.00 0.00 16503.69 3963.37 27213.04 00:32:50.125 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc0 : 2.03 15473.76 15.11 0.00 0.00 16464.71 3292.40 23717.79 00:32:50.125 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc1 : 2.04 15463.08 15.10 0.00 0.00 16463.17 3854.14 23592.96 00:32:50.125 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc0 : 2.04 15452.77 15.09 0.00 0.00 16426.97 3292.40 20222.54 00:32:50.125 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc1 : 2.04 15442.13 15.08 0.00 0.00 16423.04 3807.33 20097.71 00:32:50.125 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc0 : 2.04 15431.92 15.07 0.00 0.00 16390.85 3167.57 17351.44 00:32:50.125 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc1 : 2.04 15421.23 15.06 0.00 0.00 16384.03 3932.16 17351.44 00:32:50.125 =================================================================================================================== 00:32:50.125 Total : 123649.01 120.75 0.00 0.00 16447.54 3167.57 27337.87' 00:32:50.125 15:26:45 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-23 15:26:42.833672] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:32:50.125 [2024-07-23 15:26:42.833823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125004 ] 00:32:50.125 Using job config with 4 jobs 00:32:50.125 [2024-07-23 15:26:42.971905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.125 [2024-07-23 15:26:43.027771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.125 cpumask for '\''job0'\'' is too big 00:32:50.125 cpumask for '\''job1'\'' is too big 00:32:50.125 cpumask for '\''job2'\'' is too big 00:32:50.125 cpumask for '\''job3'\'' is too big 00:32:50.125 Running I/O for 2 seconds... 
00:32:50.125 00:32:50.125 Latency(us) 00:32:50.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:50.125 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc0 : 2.02 15480.12 15.12 0.00 0.00 16524.48 3354.82 27337.87 00:32:50.125 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc1 : 2.03 15483.99 15.12 0.00 0.00 16503.69 3963.37 27213.04 00:32:50.125 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc0 : 2.03 15473.76 15.11 0.00 0.00 16464.71 3292.40 23717.79 00:32:50.125 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc1 : 2.04 15463.08 15.10 0.00 0.00 16463.17 3854.14 23592.96 00:32:50.125 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.125 Malloc0 : 2.04 15452.77 15.09 0.00 0.00 16426.97 3292.40 20222.54 00:32:50.125 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.126 Malloc1 : 2.04 15442.13 15.08 0.00 0.00 16423.04 3807.33 20097.71 00:32:50.126 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.126 Malloc0 : 2.04 15431.92 15.07 0.00 0.00 16390.85 3167.57 17351.44 00:32:50.126 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:32:50.126 Malloc1 : 2.04 15421.23 15.06 0.00 0.00 16384.03 3932.16 17351.44 00:32:50.126 =================================================================================================================== 00:32:50.126 Total : 123649.01 120.75 0.00 0.00 16447.54 3167.57 27337.87' 00:32:50.126 15:26:45 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:32:50.126 15:26:45 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:32:50.126 15:26:45 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:32:50.126 15:26:45 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:32:50.126 15:26:45 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:32:50.126 15:26:45 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:32:50.126 ************************************ 00:32:50.126 END TEST bdevperf_config 00:32:50.126 ************************************ 00:32:50.126 00:32:50.126 real 0m10.998s 00:32:50.126 user 0m9.467s 00:32:50.126 sys 0m1.051s 00:32:50.126 15:26:45 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:50.126 15:26:45 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:32:50.126 15:26:45 -- common/autotest_common.sh@1142 -- # return 0 00:32:50.126 15:26:45 -- spdk/autotest.sh@192 -- # uname -s 00:32:50.126 15:26:45 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:32:50.126 15:26:45 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:32:50.126 15:26:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:50.126 15:26:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:50.126 15:26:45 -- common/autotest_common.sh@10 -- # set +x 00:32:50.126 ************************************ 00:32:50.126 START TEST reactor_set_interrupt 00:32:50.126 ************************************ 00:32:50.126 15:26:45 reactor_set_interrupt -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:32:50.386 * Looking for test storage... 00:32:50.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:32:50.387 15:26:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:32:50.387 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:32:50.387 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:32:50.387 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:32:50.387 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:32:50.387 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:50.387 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:32:50.387 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:32:50.387 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:32:50.387 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:32:50.387 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:32:50.387 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:32:50.387 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:32:50.387 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:32:50.387 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@50 -- # 
CONFIG_URING_PATH= 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:32:50.387 15:26:45 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:32:50.387 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:32:50.387 15:26:45 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:32:50.387 15:26:45 
reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:32:50.387 15:26:45 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:32:50.387 15:26:45 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:32:50.387 15:26:45 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:32:50.387 15:26:45 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:32:50.387 15:26:45 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:32:50.387 15:26:45 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:32:50.387 15:26:45 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:32:50.387 15:26:45 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:32:50.387 15:26:45 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:32:50.387 15:26:45 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:32:50.387 15:26:45 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:32:50.388 15:26:45 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:32:50.388 15:26:45 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:32:50.388 #define SPDK_CONFIG_H 00:32:50.388 #define SPDK_CONFIG_APPS 1 00:32:50.388 #define SPDK_CONFIG_ARCH native 00:32:50.388 #define SPDK_CONFIG_ASAN 1 00:32:50.388 #undef SPDK_CONFIG_AVAHI 00:32:50.388 #undef SPDK_CONFIG_CET 00:32:50.388 #define SPDK_CONFIG_COVERAGE 1 00:32:50.388 #define SPDK_CONFIG_CROSS_PREFIX 00:32:50.388 #undef SPDK_CONFIG_CRYPTO 00:32:50.388 #undef SPDK_CONFIG_CRYPTO_MLX5 00:32:50.388 #undef SPDK_CONFIG_CUSTOMOCF 00:32:50.388 #undef SPDK_CONFIG_DAOS 00:32:50.388 #define SPDK_CONFIG_DAOS_DIR 00:32:50.388 #define SPDK_CONFIG_DEBUG 1 00:32:50.388 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:32:50.388 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:32:50.388 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:32:50.388 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:32:50.388 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:32:50.388 #undef SPDK_CONFIG_DPDK_UADK 00:32:50.388 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:32:50.388 #define SPDK_CONFIG_EXAMPLES 1 00:32:50.388 #undef SPDK_CONFIG_FC 00:32:50.388 #define SPDK_CONFIG_FC_PATH 00:32:50.388 #define SPDK_CONFIG_FIO_PLUGIN 1 00:32:50.388 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:32:50.388 #undef SPDK_CONFIG_FUSE 00:32:50.388 #undef SPDK_CONFIG_FUZZER 00:32:50.388 #define SPDK_CONFIG_FUZZER_LIB 00:32:50.388 #undef SPDK_CONFIG_GOLANG 00:32:50.388 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:32:50.388 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:32:50.388 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:32:50.388 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:32:50.388 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:32:50.388 #undef SPDK_CONFIG_HAVE_LIBBSD 00:32:50.388 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:32:50.388 #define SPDK_CONFIG_IDXD 1 00:32:50.388 #define SPDK_CONFIG_IDXD_KERNEL 1 00:32:50.388 #undef 
SPDK_CONFIG_IPSEC_MB 00:32:50.388 #define SPDK_CONFIG_IPSEC_MB_DIR 00:32:50.388 #define SPDK_CONFIG_ISAL 1 00:32:50.388 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:32:50.388 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:32:50.388 #define SPDK_CONFIG_LIBDIR 00:32:50.388 #undef SPDK_CONFIG_LTO 00:32:50.388 #define SPDK_CONFIG_MAX_LCORES 128 00:32:50.388 #define SPDK_CONFIG_NVME_CUSE 1 00:32:50.388 #undef SPDK_CONFIG_OCF 00:32:50.388 #define SPDK_CONFIG_OCF_PATH 00:32:50.388 #define SPDK_CONFIG_OPENSSL_PATH 00:32:50.388 #undef SPDK_CONFIG_PGO_CAPTURE 00:32:50.388 #define SPDK_CONFIG_PGO_DIR 00:32:50.388 #undef SPDK_CONFIG_PGO_USE 00:32:50.388 #define SPDK_CONFIG_PREFIX /usr/local 00:32:50.388 #define SPDK_CONFIG_RAID5F 1 00:32:50.388 #undef SPDK_CONFIG_RBD 00:32:50.388 #define SPDK_CONFIG_RDMA 1 00:32:50.388 #define SPDK_CONFIG_RDMA_PROV verbs 00:32:50.388 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:32:50.388 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:32:50.388 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:32:50.388 #undef SPDK_CONFIG_SHARED 00:32:50.388 #undef SPDK_CONFIG_SMA 00:32:50.388 #define SPDK_CONFIG_TESTS 1 00:32:50.388 #undef SPDK_CONFIG_TSAN 00:32:50.388 #define SPDK_CONFIG_UBLK 1 00:32:50.388 #define SPDK_CONFIG_UBSAN 1 00:32:50.388 #define SPDK_CONFIG_UNIT_TESTS 1 00:32:50.388 #undef SPDK_CONFIG_URING 00:32:50.388 #define SPDK_CONFIG_URING_PATH 00:32:50.388 #undef SPDK_CONFIG_URING_ZNS 00:32:50.388 #undef SPDK_CONFIG_USDT 00:32:50.388 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:32:50.388 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:32:50.388 #undef SPDK_CONFIG_VFIO_USER 00:32:50.388 #define SPDK_CONFIG_VFIO_USER_DIR 00:32:50.388 #define SPDK_CONFIG_VHOST 1 00:32:50.388 #define SPDK_CONFIG_VIRTIO 1 00:32:50.388 #undef SPDK_CONFIG_VTUNE 00:32:50.388 #define SPDK_CONFIG_VTUNE_DIR 00:32:50.388 #define SPDK_CONFIG_WERROR 1 00:32:50.388 #define SPDK_CONFIG_WPDK_DIR 00:32:50.388 #undef SPDK_CONFIG_XNVME 00:32:50.388 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:32:50.388 15:26:45 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:32:50.388 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:50.388 15:26:45 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.388 15:26:45 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.388 15:26:45 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.388 15:26:45 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:50.388 15:26:45 reactor_set_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:50.388 15:26:45 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:50.388 15:26:45 reactor_set_interrupt -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:50.388 15:26:45 reactor_set_interrupt -- paths/export.sh@6 -- # export PATH 00:32:50.388 15:26:45 reactor_set_interrupt -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:50.388 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:32:50.388 15:26:45 
reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:32:50.388 15:26:45 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:32:50.388 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 1 00:32:50.388 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 1 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 1 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export 
SPDK_TEST_NVME_PMR 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@96 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:32:50.389 15:26:45 reactor_set_interrupt -- 
common/autotest_common.sh@120 -- # : 1 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : /home/vagrant/spdk_repo/dpdk/build 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : v22.11.4 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 1 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:32:50.389 
15:26:45 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:32:50.389 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@200 -- # cat 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:32:50.390 
15:26:45 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 125074 ]] 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 125074 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:32:50.390 15:26:45 
reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.yE04OB 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.yE04OB/tests/interrupt /tmp/spdk.yE04OB 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1249312768 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254027264 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4714496 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=9133084672 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=19681529856 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=10531667968 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6266744832 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6270119936 00:32:50.390 15:26:45 reactor_set_interrupt -- 
common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=5242880 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5242880 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda16 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=777306112 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=923156480 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=81207296 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda15 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=103000064 00:32:50.390 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=109395968 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=6395904 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1254010880 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254023168 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=12288 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=94876262400 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4826517504 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:32:50.391 15:26:45 
reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:32:50.391 * Looking for test storage... 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=9133084672 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ ext4 == tmpfs ]] 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ ext4 == ramfs ]] 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=12746260480 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:32:50.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=125115 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:50.391 15:26:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 125115 /var/tmp/spdk.sock 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 125115 ']' 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:50.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:50.391 15:26:45 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:50.650 [2024-07-23 15:26:45.866266] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
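start_intr_tgt, traced above, launches the interrupt_tgt example app on a three-core mask and then blocks until its RPC socket is listening. A minimal sketch of that sequence, using the same binary path and mask as the trace but a simple socket-wait loop in place of the harness's waitforlisten/killprocess helpers, is:
  # launch the interrupt-mode target on cores 0-2 and register cleanup on exit
  rpc_addr=/var/tmp/spdk.sock
  cpu_mask=0x07
  /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m "$cpu_mask" -r "$rpc_addr" -E -g &
  intr_tgt_pid=$!
  trap 'kill "$intr_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
  # wait (like the harness's waitforlisten) until the UNIX-domain RPC socket appears
  until [ -S "$rpc_addr" ]; do sleep 0.1; done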
00:32:50.650 [2024-07-23 15:26:45.866490] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125115 ] 00:32:50.650 [2024-07-23 15:26:46.019452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:50.650 [2024-07-23 15:26:46.067895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.650 [2024-07-23 15:26:46.067959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.650 [2024-07-23 15:26:46.068078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:50.908 [2024-07-23 15:26:46.132116] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:51.476 15:26:46 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:51.476 15:26:46 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:32:51.476 15:26:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:32:51.476 15:26:46 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:51.735 Malloc0 00:32:51.735 Malloc1 00:32:51.735 Malloc2 00:32:51.735 15:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:32:51.735 15:26:47 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:32:51.735 15:26:47 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:51.735 15:26:47 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:32:51.735 5000+0 records in 00:32:51.735 5000+0 records out 00:32:51.735 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0305181 s, 336 MB/s 00:32:51.735 15:26:47 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:32:51.994 AIO0 00:32:51.994 15:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 125115 00:32:51.994 15:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 125115 without_thd 00:32:51.994 15:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=125115 00:32:51.994 15:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:32:51.994 15:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:32:51.994 15:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:32:51.994 15:26:47 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:32:51.994 15:26:47 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:32:51.994 15:26:47 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:32:51.994 15:26:47 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:32:51.994 15:26:47 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:32:51.994 15:26:47 reactor_set_interrupt -- 
interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:32:52.252 15:26:47 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:32:52.252 15:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:32:52.252 15:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:32:52.252 15:26:47 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:32:52.252 15:26:47 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:32:52.252 15:26:47 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:32:52.252 15:26:47 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:32:52.252 15:26:47 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:32:52.252 15:26:47 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:32:52.511 spdk_thread ids are 1 on reactor0. 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 125115 0 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 125115 0 idle 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=125115 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 125115 -w 256 00:32:52.511 15:26:47 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:32:52.769 15:26:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 125115 root 20 0 20.1t 65920 29696 S 0.0 0.5 0:00.28 reactor_0' 00:32:52.769 15:26:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 125115 root 20 0 20.1t 65920 29696 S 0.0 0.5 0:00.28 reactor_0 00:32:52.769 15:26:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:32:52.770 
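reactor_get_thread_ids resolves which spdk_thread ids run on a given reactor by filtering thread_get_stats output with jq on the thread cpumask. A minimal sketch, assuming arithmetic expansion normalizes the hex mask (the 0x1 -> 1 and 0x4 -> 4 values in the trace suggest this), is:
  # list spdk_thread ids whose cpumask matches the reactor's mask (e.g. 0x1 -> "1")
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  reactor_cpumask=$(( 0x1 ))
  "$rpc_py" thread_get_stats | \
      jq --arg reactor_cpumask "$reactor_cpumask" \
         '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'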
15:26:48 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 125115 1 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 125115 1 idle 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=125115 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 125115 -w 256 00:32:52.770 15:26:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 125118 root 20 0 20.1t 65920 29696 S 0.0 0.5 0:00.00 reactor_1' 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 125118 root 20 0 20.1t 65920 29696 S 0.0 0.5 0:00.00 reactor_1 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 125115 2 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 125115 2 idle 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=125115 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:32:53.028 15:26:48 reactor_set_interrupt 
-- interrupt/common.sh@23 -- # (( j != 0 )) 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:32:53.028 15:26:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 125115 -w 256 00:32:53.287 15:26:48 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 125119 root 20 0 20.1t 65920 29696 S 0.0 0.5 0:00.00 reactor_2' 00:32:53.287 15:26:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 125119 root 20 0 20.1t 65920 29696 S 0.0 0.5 0:00.00 reactor_2 00:32:53.287 15:26:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:32:53.287 15:26:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:32:53.287 15:26:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:32:53.287 15:26:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:32:53.287 15:26:48 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:32:53.287 15:26:48 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:32:53.287 15:26:48 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:32:53.287 15:26:48 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:32:53.287 15:26:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:32:53.287 15:26:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:32:53.287 15:26:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:32:53.545 [2024-07-23 15:26:48.781174] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:53.545 15:26:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:32:53.803 [2024-07-23 15:26:49.021172] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:32:53.803 [2024-07-23 15:26:49.022549] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:32:53.803 15:26:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:32:53.803 [2024-07-23 15:26:49.193062] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
00:32:53.803 [2024-07-23 15:26:49.193903] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:32:53.803 15:26:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:32:53.803 15:26:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 125115 0 00:32:53.803 15:26:49 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 125115 0 busy 00:32:53.803 15:26:49 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=125115 00:32:53.803 15:26:49 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:53.803 15:26:49 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:53.803 15:26:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:32:53.803 15:26:49 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:32:53.803 15:26:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:32:53.803 15:26:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:32:53.803 15:26:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 125115 -w 256 00:32:53.803 15:26:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 125115 root 20 0 20.1t 71808 29696 R 99.9 0.6 0:00.69 reactor_0' 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 125115 root 20 0 20.1t 71808 29696 R 99.9 0.6 0:00.69 reactor_0 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 125115 2 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 125115 2 busy 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=125115 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:32:54.062 15:26:49 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:54.063 15:26:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:32:54.063 15:26:49 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:32:54.063 15:26:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:32:54.063 15:26:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:32:54.063 15:26:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 125115 -w 256 00:32:54.063 15:26:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:32:54.322 15:26:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 
125119 root 20 0 20.1t 71808 29696 R 99.9 0.6 0:00.46 reactor_2' 00:32:54.322 15:26:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 125119 root 20 0 20.1t 71808 29696 R 99.9 0.6 0:00.46 reactor_2 00:32:54.322 15:26:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:32:54.322 15:26:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:32:54.322 15:26:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:32:54.322 15:26:49 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:32:54.322 15:26:49 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:32:54.322 15:26:49 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:32:54.322 15:26:49 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:32:54.322 15:26:49 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:32:54.322 15:26:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:32:54.580 [2024-07-23 15:26:49.829080] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:32:54.580 [2024-07-23 15:26:49.829861] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:32:54.580 15:26:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:32:54.580 15:26:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 125115 2 00:32:54.580 15:26:49 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 125115 2 idle 00:32:54.580 15:26:49 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=125115 00:32:54.580 15:26:49 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:32:54.580 15:26:49 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:54.580 15:26:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:32:54.580 15:26:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:32:54.580 15:26:49 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:32:54.580 15:26:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:32:54.580 15:26:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:32:54.580 15:26:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 125115 -w 256 00:32:54.580 15:26:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:32:54.839 15:26:50 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 125119 root 20 0 20.1t 71808 29696 S 0.0 0.6 0:00.63 reactor_2' 00:32:54.839 15:26:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 125119 root 20 0 20.1t 71808 29696 S 0.0 0.6 0:00.63 reactor_2 00:32:54.839 15:26:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:32:54.839 15:26:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:32:54.839 15:26:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:32:54.839 15:26:50 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:32:54.839 15:26:50 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:32:54.839 15:26:50 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:32:54.839 15:26:50 
reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:32:54.839 15:26:50 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:32:54.839 15:26:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:32:54.839 [2024-07-23 15:26:50.233076] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:32:54.839 [2024-07-23 15:26:50.234097] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:32:54.839 15:26:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:32:54.839 15:26:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:32:54.839 15:26:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:32:55.097 [2024-07-23 15:26:50.469646] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:55.097 15:26:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 125115 0 00:32:55.097 15:26:50 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 125115 0 idle 00:32:55.097 15:26:50 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=125115 00:32:55.097 15:26:50 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:55.097 15:26:50 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:55.097 15:26:50 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:32:55.097 15:26:50 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:32:55.097 15:26:50 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:32:55.097 15:26:50 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:32:55.097 15:26:50 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:32:55.097 15:26:50 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 125115 -w 256 00:32:55.097 15:26:50 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:32:55.356 15:26:50 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 125115 root 20 0 20.1t 71936 29696 S 0.0 0.6 0:01.49 reactor_0' 00:32:55.356 15:26:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 125115 root 20 0 20.1t 71936 29696 S 0.0 0.6 0:01.49 reactor_0 00:32:55.356 15:26:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:32:55.356 15:26:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:32:55.356 15:26:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:32:55.356 15:26:50 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:32:55.356 15:26:50 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:32:55.356 15:26:50 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:32:55.356 15:26:50 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:32:55.356 15:26:50 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:32:55.356 15:26:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:32:55.356 15:26:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 
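The repeated top/grep/awk pipeline above is how the test decides whether a reactor thread is busy or idle: column 9 of top's per-thread output is the CPU percentage, and the checks in the trace treat roughly 70% or more as busy and 30% or less as idle. A minimal sketch of that check, with the thresholds taken from the trace, is:
  # return 0 if reactor thread <idx> of <pid> matches the requested state (busy|idle)
  reactor_state() {
      local pid=$1 idx=$2 state=$3 line cpu
      # the harness retries this up to 10 times; a single sample is shown here
      line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
      cpu=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
      cpu=${cpu%.*}              # drop the fractional part, as the trace does
      if [[ $state == busy ]]; then
          [[ $cpu -ge 70 ]]      # poll-mode reactors spin close to 100% CPU
      else
          [[ $cpu -le 30 ]]      # interrupt-mode reactors sit nearly idle
      fi
  }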
00:32:55.356 15:26:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:32:55.356 15:26:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 125115 00:32:55.356 15:26:50 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 125115 ']' 00:32:55.356 15:26:50 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 125115 00:32:55.356 15:26:50 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:32:55.356 15:26:50 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:55.356 15:26:50 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125115 00:32:55.356 15:26:50 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:55.356 15:26:50 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:55.356 killing process with pid 125115 00:32:55.356 15:26:50 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125115' 00:32:55.356 15:26:50 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 125115 00:32:55.356 15:26:50 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 125115 00:32:55.615 15:26:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:32:55.615 15:26:51 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:32:55.615 15:26:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:32:55.615 15:26:51 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.615 15:26:51 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:32:55.615 15:26:51 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=125247 00:32:55.615 15:26:51 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:32:55.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.615 15:26:51 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:55.615 15:26:51 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 125247 /var/tmp/spdk.sock 00:32:55.615 15:26:51 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 125247 ']' 00:32:55.615 15:26:51 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.615 15:26:51 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:55.615 15:26:51 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.615 15:26:51 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:55.615 15:26:51 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:55.874 [2024-07-23 15:26:51.091672] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
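killprocess, traced above, checks that the pid is still alive and is not a sudo wrapper before signalling it and waiting for it to exit, so the socket and hugepages are released before the next target starts. A minimal sketch of that helper (Linux branch only) is:
  # stop a test target by pid, refusing to signal a sudo wrapper process
  killprocess() {
      local pid=$1 name
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1                  # still running?
      name=$(ps --no-headers -o comm= "$pid")
      if [[ $name == sudo ]]; then
          return 1                                # sudo wrappers are handled specially in the real helper; not reproduced here
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                 # reap it so the pid and RPC socket are free
  }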
00:32:55.874 [2024-07-23 15:26:51.091926] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125247 ] 00:32:55.874 [2024-07-23 15:26:51.244848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:55.874 [2024-07-23 15:26:51.292573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.874 [2024-07-23 15:26:51.292587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.874 [2024-07-23 15:26:51.292671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:56.132 [2024-07-23 15:26:51.357238] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:56.700 15:26:52 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:56.700 15:26:52 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:32:56.700 15:26:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:32:56.700 15:26:52 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:56.959 Malloc0 00:32:56.959 Malloc1 00:32:56.959 Malloc2 00:32:56.959 15:26:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:32:56.959 15:26:52 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:32:56.959 15:26:52 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:56.959 15:26:52 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:32:56.959 5000+0 records in 00:32:56.959 5000+0 records out 00:32:56.959 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0215288 s, 476 MB/s 00:32:56.959 15:26:52 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:32:57.251 AIO0 00:32:57.251 15:26:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 125247 00:32:57.251 15:26:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 125247 00:32:57.251 15:26:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=125247 00:32:57.251 15:26:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:32:57.251 15:26:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:32:57.251 15:26:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:32:57.251 15:26:52 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:32:57.251 15:26:52 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:32:57.251 15:26:52 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:32:57.251 15:26:52 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:32:57.251 15:26:52 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:32:57.251 15:26:52 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg 
reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:32:57.510 15:26:52 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:32:57.510 15:26:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:32:57.510 15:26:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:32:57.510 15:26:52 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:32:57.510 15:26:52 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:32:57.510 15:26:52 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:32:57.510 15:26:52 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:32:57.510 15:26:52 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:32:57.510 15:26:52 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:32:57.769 spdk_thread ids are 1 on reactor0. 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 125247 0 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 125247 0 idle 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=125247 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 125247 -w 256 00:32:57.769 15:26:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:32:58.027 15:26:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 125247 root 20 0 20.1t 65920 29696 S 0.0 0.5 0:00.28 reactor_0' 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 125247 root 20 0 20.1t 65920 29696 S 0.0 0.5 0:00.28 reactor_0 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:32:58.028 15:26:53 reactor_set_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 125247 1 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 125247 1 idle 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=125247 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:32:58.028 15:26:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 125247 -w 256 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 125250 root 20 0 20.1t 65920 29696 S 0.0 0.5 0:00.00 reactor_1' 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 125250 root 20 0 20.1t 65920 29696 S 0.0 0.5 0:00.00 reactor_1 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 125247 2 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 125247 2 idle 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=125247 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j 
!= 0 )) 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 125247 -w 256 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 125251 root 20 0 20.1t 65920 29696 S 0.0 0.5 0:00.00 reactor_2' 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 125251 root 20 0 20.1t 65920 29696 S 0.0 0.5 0:00.00 reactor_2 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:32:58.286 15:26:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:32:58.545 15:26:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:32:58.545 15:26:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:32:58.545 [2024-07-23 15:26:53.966016] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:32:58.545 [2024-07-23 15:26:53.966694] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:32:58.545 [2024-07-23 15:26:53.967929] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:32:58.804 15:26:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:32:58.804 [2024-07-23 15:26:54.209943] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
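The mode switches themselves are plain RPCs against the target's /var/tmp/spdk.sock, using the interrupt_plugin picked up from the examples/interrupt_tgt directory added to PYTHONPATH earlier in the trace. A minimal sketch of toggling reactor 2 out of and back into interrupt mode is:
  # requires PYTHONPATH to include spdk/examples/interrupt_tgt so rpc.py can load the plugin
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # -d switches the reactor to poll mode (disables interrupt mode) ...
  "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
  # ... and omitting -d switches it back to interrupt mode
  "$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 2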
00:32:58.804 [2024-07-23 15:26:54.210653] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:32:58.804 15:26:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:32:58.804 15:26:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 125247 0 00:32:58.804 15:26:54 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 125247 0 busy 00:32:58.804 15:26:54 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=125247 00:32:58.804 15:26:54 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:58.804 15:26:54 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:58.804 15:26:54 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:32:58.804 15:26:54 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:32:58.804 15:26:54 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:32:58.804 15:26:54 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:32:58.804 15:26:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:32:58.804 15:26:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 125247 -w 256 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 125247 root 20 0 20.1t 71680 29696 R 99.9 0.6 0:00.77 reactor_0' 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 125247 root 20 0 20.1t 71680 29696 R 99.9 0.6 0:00.77 reactor_0 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 125247 2 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 125247 2 busy 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=125247 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 125247 -w 256 00:32:59.063 15:26:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:32:59.321 15:26:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 
125251 root 20 0 20.1t 71680 29696 R 99.9 0.6 0:00.46 reactor_2' 00:32:59.321 15:26:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 125251 root 20 0 20.1t 71680 29696 R 99.9 0.6 0:00.46 reactor_2 00:32:59.321 15:26:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:32:59.321 15:26:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:32:59.321 15:26:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:32:59.321 15:26:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:32:59.321 15:26:54 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:32:59.321 15:26:54 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:32:59.321 15:26:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:32:59.321 15:26:54 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:32:59.321 15:26:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:32:59.580 [2024-07-23 15:26:54.938123] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:32:59.580 [2024-07-23 15:26:54.938495] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:32:59.580 15:26:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:32:59.580 15:26:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 125247 2 00:32:59.580 15:26:54 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 125247 2 idle 00:32:59.580 15:26:54 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=125247 00:32:59.580 15:26:54 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:32:59.580 15:26:54 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:59.580 15:26:54 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:32:59.580 15:26:54 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:32:59.580 15:26:54 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:32:59.580 15:26:54 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:32:59.580 15:26:54 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:32:59.580 15:26:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 125247 -w 256 00:32:59.580 15:26:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:32:59.839 15:26:55 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 125251 root 20 0 20.1t 71680 29696 S 0.0 0.6 0:00.72 reactor_2' 00:32:59.839 15:26:55 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:32:59.839 15:26:55 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 125251 root 20 0 20.1t 71680 29696 S 0.0 0.6 0:00.72 reactor_2 00:32:59.839 15:26:55 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:32:59.839 15:26:55 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:32:59.839 15:26:55 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:32:59.839 15:26:55 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:32:59.839 15:26:55 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:32:59.839 15:26:55 
reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:32:59.839 15:26:55 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:32:59.839 15:26:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:33:00.098 [2024-07-23 15:26:55.430192] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:33:00.098 [2024-07-23 15:26:55.431755] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:33:00.098 [2024-07-23 15:26:55.431955] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:33:00.098 15:26:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:33:00.098 15:26:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 125247 0 00:33:00.098 15:26:55 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 125247 0 idle 00:33:00.098 15:26:55 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=125247 00:33:00.098 15:26:55 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:00.098 15:26:55 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:00.098 15:26:55 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:33:00.098 15:26:55 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:33:00.098 15:26:55 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:33:00.098 15:26:55 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:33:00.098 15:26:55 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:33:00.098 15:26:55 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 125247 -w 256 00:33:00.098 15:26:55 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:33:00.357 15:26:55 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 125247 root 20 0 20.1t 71680 29696 S 0.0 0.6 0:01.76 reactor_0' 00:33:00.357 15:26:55 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 125247 root 20 0 20.1t 71680 29696 S 0.0 0.6 0:01.76 reactor_0 00:33:00.357 15:26:55 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:33:00.357 15:26:55 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:33:00.357 15:26:55 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:33:00.357 15:26:55 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:33:00.357 15:26:55 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:33:00.357 15:26:55 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:33:00.357 15:26:55 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:33:00.357 15:26:55 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:33:00.357 15:26:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:33:00.357 15:26:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:33:00.357 15:26:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:33:00.357 15:26:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 125247 00:33:00.357 15:26:55 reactor_set_interrupt -- 
common/autotest_common.sh@948 -- # '[' -z 125247 ']' 00:33:00.357 15:26:55 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 125247 00:33:00.357 15:26:55 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:33:00.357 15:26:55 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:00.357 15:26:55 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125247 00:33:00.357 15:26:55 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:00.357 15:26:55 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:00.357 killing process with pid 125247 00:33:00.357 15:26:55 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125247' 00:33:00.357 15:26:55 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 125247 00:33:00.357 15:26:55 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 125247 00:33:00.617 15:26:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:33:00.617 15:26:55 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:33:00.617 ************************************ 00:33:00.617 END TEST reactor_set_interrupt 00:33:00.617 ************************************ 00:33:00.617 00:33:00.617 real 0m10.433s 00:33:00.617 user 0m9.361s 00:33:00.617 sys 0m1.970s 00:33:00.617 15:26:55 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:00.617 15:26:55 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:00.617 15:26:56 -- common/autotest_common.sh@1142 -- # return 0 00:33:00.617 15:26:56 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:33:00.617 15:26:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:00.617 15:26:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:00.617 15:26:56 -- common/autotest_common.sh@10 -- # set +x 00:33:00.878 ************************************ 00:33:00.878 START TEST reap_unregistered_poller 00:33:00.878 ************************************ 00:33:00.878 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:33:00.878 * Looking for test storage... 00:33:00.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:33:00.878 15:26:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:33:00.878 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:33:00.878 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:33:00.878 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:33:00.878 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
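The teardown traced earlier in this block (killprocess 125247) follows autotest_common.sh's killprocess helper: confirm the pid is set and still alive, read the process name, treat a sudo wrapper as a special case, then kill and wait. A compressed reconstruction from the trace; the sudo branch and the error reporting of the real helper are elided:

# Reconstruction of the killprocess sequence from the trace above.
killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1            # the trace's '[' -z "$pid" ']' guard
    kill -0 "$pid" || return 0           # already gone, nothing to do
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [[ $process_name == sudo ]] && return 1   # the real helper handles sudo-wrapped apps differently
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}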
00:33:00.878 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:00.878 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:33:00.878 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:33:00.878 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:33:00.878 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:33:00.878 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:33:00.878 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:33:00.878 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:33:00.879 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:33:00.879 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 
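The reap_unregistered_poller test begins, as traced above, by resolving its own directory and the SPDK root and then sourcing the shared helpers; the build_config.sh variables, applications.sh checks, and storage probe that follow all hang off that. A compressed sketch of the preamble, with the paths as printed in the log (exactly how the script obtains its own path is an assumption here):

# Preamble of the interrupt tests per interrupt_common.sh@5-7 in the trace;
# the trace dirnames the calling test script's path, approximated with "$0".
testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/interrupt
rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
source "$rootdir/test/common/autotest_common.sh"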
00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@58 -- 
# CONFIG_UBSAN=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:33:00.879 15:26:56 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:33:00.879 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@12 -- # 
_examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:33:00.879 15:26:56 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:33:00.879 #define SPDK_CONFIG_H 00:33:00.879 #define SPDK_CONFIG_APPS 1 00:33:00.879 #define SPDK_CONFIG_ARCH native 00:33:00.879 #define SPDK_CONFIG_ASAN 1 00:33:00.879 #undef SPDK_CONFIG_AVAHI 00:33:00.879 #undef SPDK_CONFIG_CET 00:33:00.879 #define SPDK_CONFIG_COVERAGE 1 00:33:00.879 #define SPDK_CONFIG_CROSS_PREFIX 00:33:00.879 #undef SPDK_CONFIG_CRYPTO 00:33:00.879 #undef SPDK_CONFIG_CRYPTO_MLX5 00:33:00.879 #undef SPDK_CONFIG_CUSTOMOCF 00:33:00.879 #undef SPDK_CONFIG_DAOS 00:33:00.879 #define SPDK_CONFIG_DAOS_DIR 00:33:00.879 #define SPDK_CONFIG_DEBUG 1 00:33:00.879 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:33:00.879 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:33:00.879 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:33:00.879 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:33:00.879 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:33:00.879 #undef SPDK_CONFIG_DPDK_UADK 00:33:00.879 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:33:00.879 #define SPDK_CONFIG_EXAMPLES 1 00:33:00.879 #undef SPDK_CONFIG_FC 00:33:00.879 #define SPDK_CONFIG_FC_PATH 00:33:00.879 #define SPDK_CONFIG_FIO_PLUGIN 1 00:33:00.879 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:33:00.879 #undef SPDK_CONFIG_FUSE 00:33:00.880 #undef SPDK_CONFIG_FUZZER 00:33:00.880 #define SPDK_CONFIG_FUZZER_LIB 00:33:00.880 #undef SPDK_CONFIG_GOLANG 00:33:00.880 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:33:00.880 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:33:00.880 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:33:00.880 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:33:00.880 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:33:00.880 #undef SPDK_CONFIG_HAVE_LIBBSD 00:33:00.880 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:33:00.880 #define SPDK_CONFIG_IDXD 1 00:33:00.880 #define SPDK_CONFIG_IDXD_KERNEL 1 00:33:00.880 #undef SPDK_CONFIG_IPSEC_MB 00:33:00.880 #define SPDK_CONFIG_IPSEC_MB_DIR 00:33:00.880 #define SPDK_CONFIG_ISAL 1 00:33:00.880 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:33:00.880 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:33:00.880 #define SPDK_CONFIG_LIBDIR 00:33:00.880 #undef SPDK_CONFIG_LTO 00:33:00.880 #define SPDK_CONFIG_MAX_LCORES 128 00:33:00.880 #define SPDK_CONFIG_NVME_CUSE 1 00:33:00.880 #undef SPDK_CONFIG_OCF 00:33:00.880 #define SPDK_CONFIG_OCF_PATH 00:33:00.880 #define SPDK_CONFIG_OPENSSL_PATH 00:33:00.880 #undef SPDK_CONFIG_PGO_CAPTURE 00:33:00.880 #define SPDK_CONFIG_PGO_DIR 00:33:00.880 #undef SPDK_CONFIG_PGO_USE 00:33:00.880 #define SPDK_CONFIG_PREFIX 
/usr/local 00:33:00.880 #define SPDK_CONFIG_RAID5F 1 00:33:00.880 #undef SPDK_CONFIG_RBD 00:33:00.880 #define SPDK_CONFIG_RDMA 1 00:33:00.880 #define SPDK_CONFIG_RDMA_PROV verbs 00:33:00.880 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:33:00.880 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:33:00.880 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:33:00.880 #undef SPDK_CONFIG_SHARED 00:33:00.880 #undef SPDK_CONFIG_SMA 00:33:00.880 #define SPDK_CONFIG_TESTS 1 00:33:00.880 #undef SPDK_CONFIG_TSAN 00:33:00.880 #define SPDK_CONFIG_UBLK 1 00:33:00.880 #define SPDK_CONFIG_UBSAN 1 00:33:00.880 #define SPDK_CONFIG_UNIT_TESTS 1 00:33:00.880 #undef SPDK_CONFIG_URING 00:33:00.880 #define SPDK_CONFIG_URING_PATH 00:33:00.880 #undef SPDK_CONFIG_URING_ZNS 00:33:00.880 #undef SPDK_CONFIG_USDT 00:33:00.880 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:33:00.880 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:33:00.880 #undef SPDK_CONFIG_VFIO_USER 00:33:00.880 #define SPDK_CONFIG_VFIO_USER_DIR 00:33:00.880 #define SPDK_CONFIG_VHOST 1 00:33:00.880 #define SPDK_CONFIG_VIRTIO 1 00:33:00.880 #undef SPDK_CONFIG_VTUNE 00:33:00.880 #define SPDK_CONFIG_VTUNE_DIR 00:33:00.880 #define SPDK_CONFIG_WERROR 1 00:33:00.880 #define SPDK_CONFIG_WPDK_DIR 00:33:00.880 #undef SPDK_CONFIG_XNVME 00:33:00.880 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:33:00.880 15:26:56 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:00.880 15:26:56 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.880 15:26:56 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.880 15:26:56 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.880 15:26:56 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:00.880 15:26:56 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:00.880 15:26:56 reap_unregistered_poller -- paths/export.sh@4 -- # 
PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:00.880 15:26:56 reap_unregistered_poller -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:00.880 15:26:56 reap_unregistered_poller -- paths/export.sh@6 -- # export PATH 00:33:00.880 15:26:56 reap_unregistered_poller -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 
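Among the helpers sourced above, applications.sh decides whether the build is a debug build by globbing the generated config header for the DEBUG define; the trace then immediately consults SPDK_AUTOTEST_DEBUG_APPS. A small reconstruction of that check from applications.sh@22-24 (the app-wrapper arrays it guards are omitted, and treating the two checks as one chained condition is an assumption):

# Reconstruction of the debug-build check traced above (applications.sh@22-24).
config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    # Only a debug build lets the SPDK_AUTOTEST_DEBUG_APPS knob take effect.
    (( ${SPDK_AUTOTEST_DEBUG_APPS:-0} )) && echo "debug app wrappers enabled"
fi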
00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:33:00.880 15:26:56 reap_unregistered_poller -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 1 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 1 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 0 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 1 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:33:00.880 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 1 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:33:00.881 15:26:56 reap_unregistered_poller -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : /home/vagrant/spdk_repo/dpdk/build 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : v22.11.4 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 1 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@158 -- 
# : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:33:00.881 
15:26:56 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:33:00.881 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@250 -- # 
SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKE=make 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 125408 ]] 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 125408 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local 
source fs size avail mount use 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.NRQKIK 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.NRQKIK/tests/interrupt /tmp/spdk.NRQKIK 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1249312768 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254027264 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4714496 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=9133043712 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=19681529856 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=10531708928 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6266744832 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6270119936 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:33:00.882 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:33:01.142 15:26:56 
reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=5242880 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5242880 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda16 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=777306112 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=923156480 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=81207296 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda15 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=103000064 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=109395968 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=6395904 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1254010880 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254023168 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=12288 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=94876086272 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4826693632 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:33:01.142 * Looking for test storage... 
00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=9133043712 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ ext4 == tmpfs ]] 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ ext4 == ramfs ]] 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=12746301440 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:33:01.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:33:01.142 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:33:01.142 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:33:01.142 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:01.142 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:33:01.142 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:33:01.142 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:33:01.142 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:33:01.142 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:33:01.142 15:26:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:33:01.142 15:26:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:33:01.142 15:26:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:33:01.142 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.142 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:33:01.142 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=125449 00:33:01.143 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:01.143 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:33:01.143 15:26:56 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 125449 /var/tmp/spdk.sock 00:33:01.143 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@829 -- # '[' -z 125449 ']' 00:33:01.143 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:01.143 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:01.143 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:01.143 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:01.143 15:26:56 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:33:01.143 [2024-07-23 15:26:56.389932] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:01.143 [2024-07-23 15:26:56.390140] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125449 ] 00:33:01.143 [2024-07-23 15:26:56.537485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:01.401 [2024-07-23 15:26:56.589894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.401 [2024-07-23 15:26:56.590100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.401 [2024-07-23 15:26:56.590912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:01.401 [2024-07-23 15:26:56.654617] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:01.968 15:26:57 reap_unregistered_poller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:01.968 15:26:57 reap_unregistered_poller -- common/autotest_common.sh@862 -- # return 0 00:33:01.968 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:33:01.968 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:33:01.968 15:26:57 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.968 15:26:57 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:33:01.968 15:26:57 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.968 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:33:01.968 "name": "app_thread", 00:33:01.968 "id": 1, 00:33:01.968 "active_pollers": [], 00:33:01.968 "timed_pollers": [ 00:33:01.968 { 00:33:01.968 "name": "rpc_subsystem_poll_servers", 00:33:01.968 "id": 1, 00:33:01.968 "state": "waiting", 00:33:01.968 "run_count": 0, 00:33:01.968 "busy_count": 0, 00:33:01.968 "period_ticks": 8400000 00:33:01.968 } 00:33:01.968 ], 00:33:01.968 "paused_pollers": [] 00:33:01.968 }' 00:33:01.968 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:33:01.968 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:33:01.968 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:33:01.968 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:33:01.969 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:33:01.969 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:33:01.969 15:26:57 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:33:01.969 15:26:57 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:01.969 15:26:57 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 
count=5000 00:33:02.227 5000+0 records in 00:33:02.227 5000+0 records out 00:33:02.227 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0271288 s, 377 MB/s 00:33:02.227 15:26:57 reap_unregistered_poller -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:33:02.485 AIO0 00:33:02.485 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:02.485 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:33:02.743 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:33:02.743 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:33:02.744 15:26:57 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.744 15:26:57 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:33:02.744 15:26:57 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.744 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:33:02.744 "name": "app_thread", 00:33:02.744 "id": 1, 00:33:02.744 "active_pollers": [], 00:33:02.744 "timed_pollers": [ 00:33:02.744 { 00:33:02.744 "name": "rpc_subsystem_poll_servers", 00:33:02.744 "id": 1, 00:33:02.744 "state": "waiting", 00:33:02.744 "run_count": 0, 00:33:02.744 "busy_count": 0, 00:33:02.744 "period_ticks": 8400000 00:33:02.744 } 00:33:02.744 ], 00:33:02.744 "paused_pollers": [] 00:33:02.744 }' 00:33:02.744 15:26:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:33:02.744 15:26:58 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:33:02.744 15:26:58 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:33:02.744 15:26:58 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:33:02.744 15:26:58 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:33:02.744 15:26:58 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:33:02.744 15:26:58 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:33:02.744 15:26:58 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 125449 00:33:02.744 15:26:58 reap_unregistered_poller -- common/autotest_common.sh@948 -- # '[' -z 125449 ']' 00:33:02.744 15:26:58 reap_unregistered_poller -- common/autotest_common.sh@952 -- # kill -0 125449 00:33:02.744 15:26:58 reap_unregistered_poller -- common/autotest_common.sh@953 -- # uname 00:33:02.744 15:26:58 reap_unregistered_poller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:02.744 15:26:58 reap_unregistered_poller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125449 00:33:02.744 15:26:58 reap_unregistered_poller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:02.744 15:26:58 reap_unregistered_poller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:02.744 killing process with pid 125449 00:33:02.744 15:26:58 
reap_unregistered_poller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125449' 00:33:02.744 15:26:58 reap_unregistered_poller -- common/autotest_common.sh@967 -- # kill 125449 00:33:02.744 15:26:58 reap_unregistered_poller -- common/autotest_common.sh@972 -- # wait 125449 00:33:03.002 15:26:58 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:33:03.002 15:26:58 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:33:03.002 00:33:03.002 real 0m2.273s 00:33:03.002 user 0m1.268s 00:33:03.002 sys 0m0.619s 00:33:03.002 15:26:58 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:03.002 ************************************ 00:33:03.002 15:26:58 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:33:03.002 END TEST reap_unregistered_poller 00:33:03.002 ************************************ 00:33:03.002 15:26:58 -- common/autotest_common.sh@1142 -- # return 0 00:33:03.002 15:26:58 -- spdk/autotest.sh@198 -- # uname -s 00:33:03.002 15:26:58 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:33:03.002 15:26:58 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:33:03.002 15:26:58 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:33:03.002 15:26:58 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:33:03.002 15:26:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:03.002 15:26:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.002 15:26:58 -- common/autotest_common.sh@10 -- # set +x 00:33:03.002 ************************************ 00:33:03.002 START TEST spdk_dd 00:33:03.002 ************************************ 00:33:03.002 15:26:58 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:33:03.261 * Looking for test storage... 
00:33:03.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:33:03.261 15:26:58 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:03.261 15:26:58 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.261 15:26:58 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.261 15:26:58 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.261 15:26:58 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:03.261 15:26:58 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:03.261 15:26:58 spdk_dd -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:03.261 15:26:58 spdk_dd -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:03.261 15:26:58 spdk_dd -- paths/export.sh@6 -- # export PATH 00:33:03.261 15:26:58 spdk_dd -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:03.261 15:26:58 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:03.520 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:33:03.520 0000:00:10.0 (1b36 0010): 
Already using the uio_pci_generic driver 00:33:04.457 15:26:59 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:33:04.457 15:26:59 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@230 -- # local class 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@232 -- # local progif 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@233 -- # class=01 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:33:04.457 15:26:59 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@15 -- # local i 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@24 -- # return 0 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@325 -- # (( 1 )) 00:33:04.458 15:26:59 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:33:04.458 15:26:59 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@139 -- # local lib 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@143 -- # [[ 
libasan.so.8 == liburing.so.* ]] 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:33:04.458 * spdk_dd linked to liburing 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:33:04.458 15:26:59 spdk_dd -- 
common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:33:04.458 
15:26:59 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:33:04.458 15:26:59 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=n 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@149 -- # [[ n != y ]] 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@150 -- # printf '* spdk_dd built with liburing, but no liburing support requested?\n' 00:33:04.458 * spdk_dd built with liburing, but no liburing support requested? 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:33:04.458 15:26:59 spdk_dd -- dd/common.sh@153 -- # return 0 00:33:04.458 15:26:59 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:33:04.458 15:26:59 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:33:04.458 15:26:59 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:04.458 15:26:59 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:04.458 15:26:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:33:04.458 ************************************ 00:33:04.458 START TEST spdk_dd_basic_rw 00:33:04.458 ************************************ 00:33:04.458 15:26:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:33:04.458 * Looking for test storage... 
00:33:04.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:33:04.458 15:26:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 
00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # export PATH 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:33:04.459 15:26:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:33:04.720 15:27:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events 
Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands 
-------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 111 Data Units Written: 7 Host Read Commands: 2399 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 
Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:33:04.720 15:27:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes 
============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 
microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 111 Data Units Written: 7 Host Read Commands: 2399 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:33:04.721 15:27:00 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:33:04.721 ************************************ 00:33:04.721 START TEST dd_bs_lt_native_bs 00:33:04.721 ************************************ 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:04.721 { 00:33:04.721 "subsystems": [ 00:33:04.721 { 00:33:04.721 "subsystem": "bdev", 00:33:04.721 "config": [ 00:33:04.721 { 00:33:04.721 "params": { 00:33:04.721 "trtype": "pcie", 00:33:04.721 "traddr": "0000:00:10.0", 00:33:04.721 "name": "Nvme0" 00:33:04.721 }, 00:33:04.721 "method": "bdev_nvme_attach_controller" 00:33:04.721 }, 00:33:04.721 { 00:33:04.721 "method": "bdev_wait_for_examine" 00:33:04.721 } 00:33:04.721 ] 00:33:04.721 } 00:33:04.721 ] 00:33:04.721 } 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:04.721 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:33:04.980 [2024-07-23 15:27:00.210498] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
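For readers following the trace: dd/common.sh derives the drive's native block size from the identify dump above in two regex matches, first pulling the index of the namespace's current LBA format (#04 here) and then that format's data size (4096 bytes). A minimal bash sketch of the same two-step match, assuming $id_text already holds the identify output shown above (how that text is captured is outside this sketch):

    # 1) index of the current LBA format ("Current LBA Format: LBA Format #04")
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id_text =~ $re ]] && lbaf=${BASH_REMATCH[1]}        # lbaf=04 in this run
    # 2) data size of that format ("LBA Format #04: Data Size: 4096 Metadata Size: 0")
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id_text =~ $re ]] && native_bs=${BASH_REMATCH[1]}   # native_bs=4096
    echo "$native_bs"

The 2048-byte --bs passed to spdk_dd in the dd_bs_lt_native_bs test is deliberately smaller than this value, which is why that run is wrapped in NOT and expected to fail.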
00:33:04.980 [2024-07-23 15:27:00.210670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125720 ] 00:33:04.980 [2024-07-23 15:27:00.368886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.239 [2024-07-23 15:27:00.423919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.239 [2024-07-23 15:27:00.583611] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:33:05.239 [2024-07-23 15:27:00.583693] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:05.499 [2024-07-23 15:27:00.699890] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:33:05.499 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:33:05.499 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:05.499 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:33:05.499 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:33:05.499 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:33:05.499 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:05.499 00:33:05.499 real 0m0.700s 00:33:05.499 user 0m0.373s 00:33:05.499 sys 0m0.250s 00:33:05.499 ************************************ 00:33:05.499 END TEST dd_bs_lt_native_bs 00:33:05.499 ************************************ 00:33:05.499 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:05.499 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:33:05.499 15:27:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:33:05.499 15:27:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:33:05.500 ************************************ 00:33:05.500 START TEST dd_rw 00:33:05.500 ************************************ 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << 
bs))) 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:33:05.500 15:27:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:06.072 15:27:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:33:06.072 15:27:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:33:06.072 15:27:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:06.072 15:27:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:06.072 { 00:33:06.072 "subsystems": [ 00:33:06.072 { 00:33:06.072 "subsystem": "bdev", 00:33:06.072 "config": [ 00:33:06.072 { 00:33:06.072 "params": { 00:33:06.072 "trtype": "pcie", 00:33:06.072 "traddr": "0000:00:10.0", 00:33:06.072 "name": "Nvme0" 00:33:06.072 }, 00:33:06.072 "method": "bdev_nvme_attach_controller" 00:33:06.072 }, 00:33:06.072 { 00:33:06.072 "method": "bdev_wait_for_examine" 00:33:06.072 } 00:33:06.072 ] 00:33:06.072 } 00:33:06.072 ] 00:33:06.072 } 00:33:06.331 [2024-07-23 15:27:01.536643] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
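The dd_rw sweep that starts here builds its parameters from that 4096-byte native block size: block sizes are native_bs shifted left by 0, 1 and 2, each exercised at queue depths 1 and 64, and the block count is chosen per size so the transfer stays just under 64 KiB. A small standalone sketch of the arithmetic (the counts array is an illustrative pairing; the harness sets count inside its loop):

    native_bs=4096
    qds=(1 64)                          # queue depths tried for every block size
    bss=()
    for bs in {0..2}; do
        bss+=($((native_bs << bs)))     # 4096 8192 16384
    done
    counts=(15 7 3)                     # block counts used by the trace for each size
    for i in "${!bss[@]}"; do
        echo "bs=${bss[i]} count=${counts[i]} bytes=$((bss[i] * counts[i]))"
    done
    # -> 61440, 57344 and 49152 bytes, matching the size= values in the log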
00:33:06.331 [2024-07-23 15:27:01.536922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125752 ] 00:33:06.331 [2024-07-23 15:27:01.688920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.331 [2024-07-23 15:27:01.755997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.848  Copying: 60/60 [kB] (average 19 MBps) 00:33:06.848 00:33:06.848 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:33:06.848 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:33:06.848 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:06.848 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:06.848 { 00:33:06.848 "subsystems": [ 00:33:06.848 { 00:33:06.848 "subsystem": "bdev", 00:33:06.848 "config": [ 00:33:06.848 { 00:33:06.848 "params": { 00:33:06.848 "trtype": "pcie", 00:33:06.848 "traddr": "0000:00:10.0", 00:33:06.848 "name": "Nvme0" 00:33:06.848 }, 00:33:06.848 "method": "bdev_nvme_attach_controller" 00:33:06.848 }, 00:33:06.848 { 00:33:06.848 "method": "bdev_wait_for_examine" 00:33:06.848 } 00:33:06.848 ] 00:33:06.848 } 00:33:06.848 ] 00:33:06.848 } 00:33:06.848 [2024-07-23 15:27:02.264932] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:06.848 [2024-07-23 15:27:02.265167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125771 ] 00:33:07.105 [2024-07-23 15:27:02.413950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.105 [2024-07-23 15:27:02.459182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.620  Copying: 60/60 [kB] (average 29 MBps) 00:33:07.620 00:33:07.620 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:07.620 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:33:07.620 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:33:07.620 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:33:07.620 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:33:07.620 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:33:07.620 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:33:07.620 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:33:07.620 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:33:07.620 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:07.620 15:27:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:07.620 { 00:33:07.620 "subsystems": [ 
00:33:07.620 { 00:33:07.620 "subsystem": "bdev", 00:33:07.620 "config": [ 00:33:07.620 { 00:33:07.620 "params": { 00:33:07.620 "trtype": "pcie", 00:33:07.620 "traddr": "0000:00:10.0", 00:33:07.620 "name": "Nvme0" 00:33:07.620 }, 00:33:07.620 "method": "bdev_nvme_attach_controller" 00:33:07.620 }, 00:33:07.620 { 00:33:07.620 "method": "bdev_wait_for_examine" 00:33:07.620 } 00:33:07.620 ] 00:33:07.620 } 00:33:07.620 ] 00:33:07.620 } 00:33:07.620 [2024-07-23 15:27:02.934478] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:07.620 [2024-07-23 15:27:02.934666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125786 ] 00:33:07.877 [2024-07-23 15:27:03.087199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.877 [2024-07-23 15:27:03.135698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.134  Copying: 1024/1024 [kB] (average 500 MBps) 00:33:08.134 00:33:08.134 15:27:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:33:08.134 15:27:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:33:08.134 15:27:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:33:08.134 15:27:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:33:08.134 15:27:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:33:08.134 15:27:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:33:08.134 15:27:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:08.700 15:27:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:33:08.700 15:27:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:33:08.700 15:27:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:08.700 15:27:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:08.700 { 00:33:08.700 "subsystems": [ 00:33:08.700 { 00:33:08.700 "subsystem": "bdev", 00:33:08.700 "config": [ 00:33:08.700 { 00:33:08.700 "params": { 00:33:08.700 "trtype": "pcie", 00:33:08.700 "traddr": "0000:00:10.0", 00:33:08.700 "name": "Nvme0" 00:33:08.700 }, 00:33:08.700 "method": "bdev_nvme_attach_controller" 00:33:08.700 }, 00:33:08.700 { 00:33:08.700 "method": "bdev_wait_for_examine" 00:33:08.700 } 00:33:08.700 ] 00:33:08.700 } 00:33:08.700 ] 00:33:08.700 } 00:33:08.958 [2024-07-23 15:27:04.134900] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
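Every spdk_dd call in this test takes its bdev configuration as JSON on an inherited file descriptor (--json /dev/fd/62); the JSON fragments interleaved with the timestamps above are that configuration being emitted while xtrace is disabled. A standalone equivalent using process substitution instead of the harness's fd plumbing, with the same QEMU NVMe controller address and paths as in the trace (a sketch, not the harness code, and it still needs the usual SPDK hugepage setup to run):

    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
       "method":"bdev_nvme_attach_controller"},
      {"method":"bdev_wait_for_examine"}]}]}'
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 \
        --bs=4096 --qd=64 --json <(printf '%s\n' "$conf")

bdev_nvme_attach_controller exposes the namespace as Nvme0n1, and bdev_wait_for_examine makes spdk_dd wait until the bdev layer has finished probing it before the copy starts.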
00:33:08.958 [2024-07-23 15:27:04.135083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125805 ] 00:33:08.958 [2024-07-23 15:27:04.287647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.958 [2024-07-23 15:27:04.333433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.471  Copying: 60/60 [kB] (average 58 MBps) 00:33:09.471 00:33:09.471 15:27:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:33:09.471 15:27:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:33:09.471 15:27:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:09.471 15:27:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:09.471 { 00:33:09.471 "subsystems": [ 00:33:09.471 { 00:33:09.471 "subsystem": "bdev", 00:33:09.471 "config": [ 00:33:09.471 { 00:33:09.471 "params": { 00:33:09.471 "trtype": "pcie", 00:33:09.471 "traddr": "0000:00:10.0", 00:33:09.471 "name": "Nvme0" 00:33:09.471 }, 00:33:09.471 "method": "bdev_nvme_attach_controller" 00:33:09.471 }, 00:33:09.471 { 00:33:09.471 "method": "bdev_wait_for_examine" 00:33:09.471 } 00:33:09.471 ] 00:33:09.471 } 00:33:09.471 ] 00:33:09.471 } 00:33:09.471 [2024-07-23 15:27:04.801856] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:09.471 [2024-07-23 15:27:04.802269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125818 ] 00:33:09.728 [2024-07-23 15:27:04.955186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.728 [2024-07-23 15:27:05.003178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.986  Copying: 60/60 [kB] (average 58 MBps) 00:33:09.986 00:33:09.986 15:27:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:09.986 15:27:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:33:09.986 15:27:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:33:09.986 15:27:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:33:09.986 15:27:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:33:09.986 15:27:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:33:09.986 15:27:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:33:09.986 15:27:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:33:09.986 15:27:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:33:09.986 15:27:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:09.986 15:27:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:10.245 { 00:33:10.245 "subsystems": [ 
00:33:10.245 { 00:33:10.245 "subsystem": "bdev", 00:33:10.245 "config": [ 00:33:10.245 { 00:33:10.245 "params": { 00:33:10.245 "trtype": "pcie", 00:33:10.245 "traddr": "0000:00:10.0", 00:33:10.245 "name": "Nvme0" 00:33:10.245 }, 00:33:10.245 "method": "bdev_nvme_attach_controller" 00:33:10.245 }, 00:33:10.245 { 00:33:10.245 "method": "bdev_wait_for_examine" 00:33:10.245 } 00:33:10.245 ] 00:33:10.245 } 00:33:10.245 ] 00:33:10.245 } 00:33:10.245 [2024-07-23 15:27:05.483649] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:10.245 [2024-07-23 15:27:05.483912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125833 ] 00:33:10.245 [2024-07-23 15:27:05.635554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.504 [2024-07-23 15:27:05.682839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.763  Copying: 1024/1024 [kB] (average 1000 MBps) 00:33:10.763 00:33:10.763 15:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:33:10.763 15:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:33:10.763 15:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:33:10.763 15:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:33:10.763 15:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:33:10.763 15:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:33:10.763 15:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:33:10.763 15:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:11.331 15:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:33:11.331 15:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:33:11.331 15:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:11.331 15:27:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:11.331 { 00:33:11.331 "subsystems": [ 00:33:11.331 { 00:33:11.331 "subsystem": "bdev", 00:33:11.331 "config": [ 00:33:11.331 { 00:33:11.331 "params": { 00:33:11.331 "trtype": "pcie", 00:33:11.331 "traddr": "0000:00:10.0", 00:33:11.331 "name": "Nvme0" 00:33:11.331 }, 00:33:11.331 "method": "bdev_nvme_attach_controller" 00:33:11.331 }, 00:33:11.331 { 00:33:11.331 "method": "bdev_wait_for_examine" 00:33:11.331 } 00:33:11.331 ] 00:33:11.331 } 00:33:11.331 ] 00:33:11.331 } 00:33:11.331 [2024-07-23 15:27:06.652547] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:11.331 [2024-07-23 15:27:06.652988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125852 ] 00:33:11.590 [2024-07-23 15:27:06.804287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.590 [2024-07-23 15:27:06.852670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.849  Copying: 56/56 [kB] (average 54 MBps) 00:33:11.849 00:33:11.849 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:33:11.849 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:33:11.849 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:11.849 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:11.849 { 00:33:11.849 "subsystems": [ 00:33:11.849 { 00:33:11.849 "subsystem": "bdev", 00:33:11.849 "config": [ 00:33:11.849 { 00:33:11.849 "params": { 00:33:11.849 "trtype": "pcie", 00:33:11.849 "traddr": "0000:00:10.0", 00:33:11.849 "name": "Nvme0" 00:33:11.849 }, 00:33:11.849 "method": "bdev_nvme_attach_controller" 00:33:11.849 }, 00:33:11.849 { 00:33:11.849 "method": "bdev_wait_for_examine" 00:33:11.849 } 00:33:11.849 ] 00:33:11.849 } 00:33:11.849 ] 00:33:11.849 } 00:33:12.108 [2024-07-23 15:27:07.325485] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:12.108 [2024-07-23 15:27:07.325701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125870 ] 00:33:12.108 [2024-07-23 15:27:07.476547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.108 [2024-07-23 15:27:07.525841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.626  Copying: 56/56 [kB] (average 54 MBps) 00:33:12.626 00:33:12.626 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:12.626 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:33:12.626 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:33:12.626 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:33:12.626 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:33:12.626 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:33:12.626 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:33:12.626 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:33:12.626 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:33:12.626 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:12.626 15:27:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:12.626 { 00:33:12.626 "subsystems": [ 
00:33:12.626 { 00:33:12.626 "subsystem": "bdev", 00:33:12.626 "config": [ 00:33:12.626 { 00:33:12.626 "params": { 00:33:12.626 "trtype": "pcie", 00:33:12.626 "traddr": "0000:00:10.0", 00:33:12.626 "name": "Nvme0" 00:33:12.626 }, 00:33:12.626 "method": "bdev_nvme_attach_controller" 00:33:12.626 }, 00:33:12.626 { 00:33:12.626 "method": "bdev_wait_for_examine" 00:33:12.626 } 00:33:12.626 ] 00:33:12.626 } 00:33:12.626 ] 00:33:12.626 } 00:33:12.626 [2024-07-23 15:27:07.992901] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:12.626 [2024-07-23 15:27:07.993103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125886 ] 00:33:12.885 [2024-07-23 15:27:08.142981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.885 [2024-07-23 15:27:08.190332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.142  Copying: 1024/1024 [kB] (average 1000 MBps) 00:33:13.142 00:33:13.400 15:27:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:33:13.400 15:27:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:33:13.400 15:27:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:33:13.400 15:27:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:33:13.400 15:27:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:33:13.400 15:27:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:33:13.400 15:27:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:13.967 15:27:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:33:13.967 15:27:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:33:13.967 15:27:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:13.967 15:27:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:13.967 { 00:33:13.967 "subsystems": [ 00:33:13.967 { 00:33:13.967 "subsystem": "bdev", 00:33:13.967 "config": [ 00:33:13.967 { 00:33:13.967 "params": { 00:33:13.967 "trtype": "pcie", 00:33:13.967 "traddr": "0000:00:10.0", 00:33:13.967 "name": "Nvme0" 00:33:13.967 }, 00:33:13.967 "method": "bdev_nvme_attach_controller" 00:33:13.967 }, 00:33:13.967 { 00:33:13.967 "method": "bdev_wait_for_examine" 00:33:13.967 } 00:33:13.967 ] 00:33:13.967 } 00:33:13.967 ] 00:33:13.967 } 00:33:13.967 [2024-07-23 15:27:09.160299] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
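Each bs/qd combination follows the same round trip: write dd.dump0 to the Nvme0n1 bdev, read the same number of blocks back into dd.dump1, diff the two files, then wipe the first mebibyte of the namespace from /dev/zero before the next run. Reduced to its spdk_dd calls for the 8 KiB, queue-depth-64 case beginning here (the --json configuration from the earlier sketch is omitted for brevity, and the dump paths are the ones used by the trace):

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    "$DD" --if="$DUMP0" --ob=Nvme0n1 --bs=8192 --qd=64              # write
    "$DD" --ib=Nvme0n1 --of="$DUMP1" --bs=8192 --qd=64 --count=7    # read back 7 blocks
    diff -q "$DUMP0" "$DUMP1"                                       # contents must match

    # clear_nvme: zero the first 1 MiB of the bdev between runs
    "$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1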
00:33:13.967 [2024-07-23 15:27:09.160482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125905 ] 00:33:13.967 [2024-07-23 15:27:09.311510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.967 [2024-07-23 15:27:09.356434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.503  Copying: 56/56 [kB] (average 54 MBps) 00:33:14.503 00:33:14.503 15:27:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:33:14.503 15:27:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:33:14.503 15:27:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:14.503 15:27:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:14.503 { 00:33:14.503 "subsystems": [ 00:33:14.503 { 00:33:14.503 "subsystem": "bdev", 00:33:14.503 "config": [ 00:33:14.503 { 00:33:14.503 "params": { 00:33:14.503 "trtype": "pcie", 00:33:14.503 "traddr": "0000:00:10.0", 00:33:14.503 "name": "Nvme0" 00:33:14.503 }, 00:33:14.503 "method": "bdev_nvme_attach_controller" 00:33:14.503 }, 00:33:14.503 { 00:33:14.503 "method": "bdev_wait_for_examine" 00:33:14.503 } 00:33:14.503 ] 00:33:14.503 } 00:33:14.503 ] 00:33:14.503 } 00:33:14.503 [2024-07-23 15:27:09.820491] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:14.503 [2024-07-23 15:27:09.820671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125917 ] 00:33:14.761 [2024-07-23 15:27:09.967666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.761 [2024-07-23 15:27:10.015001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.019  Copying: 56/56 [kB] (average 54 MBps) 00:33:15.019 00:33:15.019 15:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:15.019 15:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:33:15.019 15:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:33:15.019 15:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:33:15.019 15:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:33:15.019 15:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:33:15.019 15:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:33:15.019 15:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:33:15.019 15:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:33:15.019 15:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:15.019 15:27:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:15.019 { 00:33:15.019 "subsystems": [ 
00:33:15.019 { 00:33:15.019 "subsystem": "bdev", 00:33:15.019 "config": [ 00:33:15.019 { 00:33:15.019 "params": { 00:33:15.019 "trtype": "pcie", 00:33:15.019 "traddr": "0000:00:10.0", 00:33:15.019 "name": "Nvme0" 00:33:15.019 }, 00:33:15.019 "method": "bdev_nvme_attach_controller" 00:33:15.019 }, 00:33:15.019 { 00:33:15.019 "method": "bdev_wait_for_examine" 00:33:15.019 } 00:33:15.019 ] 00:33:15.019 } 00:33:15.019 ] 00:33:15.019 } 00:33:15.277 [2024-07-23 15:27:10.483361] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:15.277 [2024-07-23 15:27:10.483542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125933 ] 00:33:15.277 [2024-07-23 15:27:10.636228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.277 [2024-07-23 15:27:10.683743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.794  Copying: 1024/1024 [kB] (average 1000 MBps) 00:33:15.794 00:33:15.794 15:27:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:33:15.794 15:27:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:33:15.794 15:27:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:33:15.794 15:27:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:33:15.794 15:27:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:33:15.794 15:27:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:33:15.794 15:27:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:33:15.794 15:27:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:16.360 15:27:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:33:16.360 15:27:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:33:16.360 15:27:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:16.360 15:27:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:16.360 { 00:33:16.360 "subsystems": [ 00:33:16.360 { 00:33:16.360 "subsystem": "bdev", 00:33:16.360 "config": [ 00:33:16.360 { 00:33:16.360 "params": { 00:33:16.360 "trtype": "pcie", 00:33:16.360 "traddr": "0000:00:10.0", 00:33:16.360 "name": "Nvme0" 00:33:16.360 }, 00:33:16.360 "method": "bdev_nvme_attach_controller" 00:33:16.360 }, 00:33:16.360 { 00:33:16.360 "method": "bdev_wait_for_examine" 00:33:16.360 } 00:33:16.360 ] 00:33:16.360 } 00:33:16.360 ] 00:33:16.360 } 00:33:16.360 [2024-07-23 15:27:11.625895] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:16.360 [2024-07-23 15:27:11.626293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125952 ] 00:33:16.360 [2024-07-23 15:27:11.779640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.619 [2024-07-23 15:27:11.827741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.877  Copying: 48/48 [kB] (average 46 MBps) 00:33:16.877 00:33:16.877 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:33:16.877 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:33:16.877 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:16.877 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:16.877 { 00:33:16.877 "subsystems": [ 00:33:16.877 { 00:33:16.877 "subsystem": "bdev", 00:33:16.877 "config": [ 00:33:16.877 { 00:33:16.877 "params": { 00:33:16.877 "trtype": "pcie", 00:33:16.877 "traddr": "0000:00:10.0", 00:33:16.877 "name": "Nvme0" 00:33:16.877 }, 00:33:16.877 "method": "bdev_nvme_attach_controller" 00:33:16.877 }, 00:33:16.877 { 00:33:16.877 "method": "bdev_wait_for_examine" 00:33:16.877 } 00:33:16.877 ] 00:33:16.877 } 00:33:16.877 ] 00:33:16.877 } 00:33:16.877 [2024-07-23 15:27:12.286104] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:16.877 [2024-07-23 15:27:12.286308] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125970 ] 00:33:17.135 [2024-07-23 15:27:12.439736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.135 [2024-07-23 15:27:12.486708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.651  Copying: 48/48 [kB] (average 46 MBps) 00:33:17.651 00:33:17.651 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:17.651 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:33:17.651 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:33:17.651 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:33:17.651 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:33:17.651 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:33:17.651 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:33:17.651 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:33:17.651 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:33:17.651 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:17.651 15:27:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:17.651 { 00:33:17.651 "subsystems": [ 
00:33:17.651 { 00:33:17.651 "subsystem": "bdev", 00:33:17.651 "config": [ 00:33:17.651 { 00:33:17.651 "params": { 00:33:17.651 "trtype": "pcie", 00:33:17.651 "traddr": "0000:00:10.0", 00:33:17.651 "name": "Nvme0" 00:33:17.651 }, 00:33:17.651 "method": "bdev_nvme_attach_controller" 00:33:17.651 }, 00:33:17.651 { 00:33:17.651 "method": "bdev_wait_for_examine" 00:33:17.651 } 00:33:17.651 ] 00:33:17.651 } 00:33:17.651 ] 00:33:17.651 } 00:33:17.651 [2024-07-23 15:27:12.960161] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:17.651 [2024-07-23 15:27:12.960352] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125980 ] 00:33:17.909 [2024-07-23 15:27:13.111269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.909 [2024-07-23 15:27:13.156128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.167  Copying: 1024/1024 [kB] (average 1000 MBps) 00:33:18.167 00:33:18.167 15:27:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:33:18.167 15:27:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:33:18.167 15:27:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:33:18.167 15:27:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:33:18.167 15:27:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:33:18.167 15:27:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:33:18.167 15:27:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:18.732 15:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:33:18.732 15:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:33:18.732 15:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:18.732 15:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:18.732 { 00:33:18.732 "subsystems": [ 00:33:18.732 { 00:33:18.732 "subsystem": "bdev", 00:33:18.732 "config": [ 00:33:18.732 { 00:33:18.732 "params": { 00:33:18.732 "trtype": "pcie", 00:33:18.732 "traddr": "0000:00:10.0", 00:33:18.732 "name": "Nvme0" 00:33:18.732 }, 00:33:18.732 "method": "bdev_nvme_attach_controller" 00:33:18.732 }, 00:33:18.732 { 00:33:18.732 "method": "bdev_wait_for_examine" 00:33:18.732 } 00:33:18.732 ] 00:33:18.732 } 00:33:18.732 ] 00:33:18.732 } 00:33:18.732 [2024-07-23 15:27:14.085214] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:18.732 [2024-07-23 15:27:14.085597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125999 ] 00:33:18.990 [2024-07-23 15:27:14.236141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.990 [2024-07-23 15:27:14.281383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.249  Copying: 48/48 [kB] (average 46 MBps) 00:33:19.249 00:33:19.249 15:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:33:19.249 15:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:33:19.249 15:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:19.249 15:27:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:19.508 { 00:33:19.508 "subsystems": [ 00:33:19.508 { 00:33:19.508 "subsystem": "bdev", 00:33:19.508 "config": [ 00:33:19.508 { 00:33:19.508 "params": { 00:33:19.508 "trtype": "pcie", 00:33:19.508 "traddr": "0000:00:10.0", 00:33:19.508 "name": "Nvme0" 00:33:19.508 }, 00:33:19.508 "method": "bdev_nvme_attach_controller" 00:33:19.508 }, 00:33:19.508 { 00:33:19.508 "method": "bdev_wait_for_examine" 00:33:19.508 } 00:33:19.508 ] 00:33:19.508 } 00:33:19.508 ] 00:33:19.508 } 00:33:19.508 [2024-07-23 15:27:14.751535] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:19.508 [2024-07-23 15:27:14.751718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126013 ] 00:33:19.508 [2024-07-23 15:27:14.903403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.766 [2024-07-23 15:27:14.949777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.025  Copying: 48/48 [kB] (average 46 MBps) 00:33:20.025 00:33:20.025 15:27:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:20.025 15:27:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:33:20.025 15:27:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:33:20.025 15:27:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:33:20.025 15:27:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:33:20.025 15:27:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:33:20.025 15:27:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:33:20.025 15:27:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:33:20.025 15:27:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:33:20.025 15:27:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:20.025 15:27:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:20.025 { 00:33:20.025 "subsystems": [ 
00:33:20.025 { 00:33:20.025 "subsystem": "bdev", 00:33:20.025 "config": [ 00:33:20.025 { 00:33:20.025 "params": { 00:33:20.025 "trtype": "pcie", 00:33:20.025 "traddr": "0000:00:10.0", 00:33:20.025 "name": "Nvme0" 00:33:20.025 }, 00:33:20.025 "method": "bdev_nvme_attach_controller" 00:33:20.025 }, 00:33:20.025 { 00:33:20.025 "method": "bdev_wait_for_examine" 00:33:20.025 } 00:33:20.025 ] 00:33:20.025 } 00:33:20.025 ] 00:33:20.025 } 00:33:20.025 [2024-07-23 15:27:15.420237] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:20.025 [2024-07-23 15:27:15.420615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126033 ] 00:33:20.284 [2024-07-23 15:27:15.571438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.284 [2024-07-23 15:27:15.619341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.803  Copying: 1024/1024 [kB] (average 1000 MBps) 00:33:20.803 00:33:20.803 00:33:20.803 real 0m15.132s 00:33:20.803 user 0m9.137s 00:33:20.803 sys 0m4.332s 00:33:20.803 ************************************ 00:33:20.803 END TEST dd_rw 00:33:20.803 ************************************ 00:33:20.803 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:20.803 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:33:20.803 15:27:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:33:20.803 15:27:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:33:20.803 15:27:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:20.803 15:27:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:20.803 15:27:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:33:20.803 ************************************ 00:33:20.803 START TEST dd_rw_offset 00:33:20.803 ************************************ 00:33:20.803 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:33:20.803 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:33:20.803 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:33:20.803 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:33:20.803 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:33:20.803 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:33:20.804 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=tua831bj9yaffal1guvyachbqj0zfoowiyc1ycr6lkdl3r6lafsn05w1nv9oqhekbondjk3x9mot38fqdx3qwvglcexn21xq0z2vyu794n7dzq2myn82tmvscb73rv5dry91u9pin24jj7bqxgpo293t0xgzxltcyhrvl5ofqahhg5ip8m8s56icg46yek4cjl2ve3j6yhcrs2iwnpe8wqg8azwyya1pi7w5yaifueryohtz09u56ld193z5vr2rxvin75ykv4nboen8xxpuo569e3r74v4v9cbg8susru6m3741gxvyf12lemwuzol1l6rfclotrk2bgzy86sqjpy7hrow01g8q6izbjkhqt6txbeu3oow6frt02lvtnofdt76iunmgfnplnm3skk2ve95vblqkg4029lra5c1v95c86vpebqgv3ak4riustvspbeukub63gc7f8cv3g9j25yk5yufgb4rw9bwed7rsdcdjfnuzy7pf4aaw2a5b7kprd3xqnr9ub2e7pkavcdm0jzi50a4e1ct3rgtl92fnk4iytz0kubts1rnubpcci0pturz5ct0p4kt7lnxtsalmxbhah4cq7147pcdybli75g1kug9bz1dyimtrlaaa90lw4tcpqx0s2jiwo7zdlrme35ut8ep371jibfzdscf4w31a2gmeua1uwlzxos5zd6q1cx553hup810fxtgdc9n55l2ooe8vl9ol81uu052czeofqhro8qkhsyhdl19pjko7f3sgrfplu5ulz6j183ovfwrbflrq8xqfgpvnqq2nh1sukhtebk92t3i8xocw7s3cgnpepyyxn03zqo1db83l577hjcmf01opemj2atqpohsko5ssw8ddllm82z82wf2nbk0cm8hlro33j05uj1h0p5klh0wrfe71gxlxd6353k6tiatjpawuncae5h56vjw5j4honbn1et8lgmm8o90cbflywhqkai8wa9942598x6oonoiel8083km5ht6v1tkziqzqulj9byjyfsbiysgne7ynbtko3evtdfwvetdzi9u8cfv2qrvxjbrbenzz4189pb1fxhgvw6xequ6no0meaz0a94v01q9mkfiq1vwsrjuwcxonuvn34uue71kzdsih0k6n6eyc30orhep5ijc7z7ya7bush2pwoz9pdt7pd5xng4twm6z5b0jjz8y654k0a1bgbo6ggz8rdub28fjx2w5qu11n9izou7oo4ked607dtydebs7u5cv9bx3uqachcsxt730am6rvrelkinwhg5xpsibpbfvr0q9k229u1rpttfllfpp857zvvk75qaq8c6j8kvee1p18j0kl99bsrnv29zlli48boqehtsj5irh78nwbk7no2wpqx8rjn286ytno0k2slcgb8hgwhib45aj5ia1bugsp26vzisv0fh56az9vgrr3hy2u9oycchv4nbhofgj89tmlfndbvelgix9ybnmm6gv00czeor1fifkxhyf1j1qg2czxjferpr40m4nhhjcwh0ckkm48iwqvmtuqn9qgrq363hlnxexstr0we2duv9vhofpoxk5k2ijke27rovnnu590sg55vivleatwh8k5ppynoh6xslw0c7dv1obe3mg5q08eh5ulkj1jzdw9xe8uuap71jc667k8zuncaxtlfbgbnyuarszcw5586xepftb9frjs0hp2vpo7c2kpob27juaeojbrj887ygmo6mn2garid28wy155atr8xczj4drmlekshmo1d9btkezpkigz803mqocchx2frv8ttpwszjgu9lamlpa60i0lqrlh9rjld49odb0intsnj03wetlqv89xybtzyld1rqt9n8hvlby584ge7145p8kb1ukpjege1q039tlo0xftka7cmzan4iss0zbanrfh34m8jjkpw59li0rzvtgfkrrnhftl1e1mgpav29xeiv4xakpafqpsb6pw41xlu8xmkeb2rji0hi1o124xr79qvdp89l4vqpnnyv817g4wxaspfoi85le8qm3c5tigehyl1ub55y7ko3003g5hp8m5etap8vanpp964ps72nrqr2xdaskfmmviunodprxiqnelakxj9mj1m1wmfig3156nw90i9u3m8kc29duv1zu5odz377j3kly9b03igp044id7v4s1fh3vyrpojbvigzn1exr97auawy1sp4iy89qqjyi0rq93sp8274z1fvgzg94w28ogl7my4s6q3b2daqeuuc3udv0rjhwj89jj2y5w2b84uav58o8yh9fh16f813abkmbwb4dhwopveinxa9dceanujci7o8vxv072yyntzd84my05t12ztsdctkjbyiolutupgv3c370fqr6hxzugl4kasl07vyyz7dc7sr1j9am70ypftiey8ivgx6w841zav85mp1dc5uwg1lcuocffsm77o2940g0j75pfa0mxnza6qj1rygkb5ihc1pj01w3l5oa12rs3i5irawspn8vgmdlq396qb4708jkqj0b6o133j2sixx6cjskx6n44gzdlnu5a9k3kq6yh2f67d39exgg77qmk6m000pob4j68h652699ozte0doe6z0wagyzwz904e0r9t5171c69x3uu9becghjp9r3mjrwuwpd4dgogcyfc6f0me4fg9fqlwilrm597su3x7ahufhcn4rye52zvqcwcz7yzvdcf5rz7fhzfxgdj0w56nm3xmvrrprrf85d0moq3r1otu2oausuonbp8w9o4yeqw7nfyt41a6qyzuy5pg7u4e2c03ekpo4minac4891m8pi18niu1bdwqatfs02ynzngeyggmyyye5u7uldm5ku6cct45pf1lzn4y5y4znbho5682ritnotvjyc9tyc2983xrovfkx7mgce7l57m416im7dk3399lssc9a2ilg4thpvyhqw0bevuc0q6jmhn79ds59419e0olpoawvdmzwk6sc5jzf4mm0i8rsaomtdb4azkg275fiiuhkglmdgfa3m74ov8nlrfrxzgh99w2ea71bg7cdnr2tstfq2jbb6yt3ldho8upbiluk71182voo9kdciii47m2eux0pyxlulmfsgmus8gjpimg1h4mcirrxn3y04cxica2maqcam9fv7b2f48b1mgd9fjrolnrt4wfh7r1ryvt0d25534k8k5d1s470ejiwlw449yy4jx94b0n9ond6cakbqvfsc4v4cidwp6aupea8gl7tobogti9jvdkjngtzfp0d3kfny7zastwwr99v9l6bkhqq8n0ldct5uxhmzy4l2k6yz412vzk4320ew4p2lfqtj2w0lk78d0hfytgpg2md6s8ygml8pb4yu9pwq1gje0oiy5zaej49d9f12go9lhjutz759o7pyaob2rt3qxeuddyn4ckd2mv82zsor9qgfaihqijfm5w78j4tdfra59qvsicnujbo3wr2f729m5ue4cwyxujehil2b6zfxhkay4ltwqyhg243
8c7ezjrhdjx5aumt6nb0pav780v69aipibpt27z5w61aufz1e0lf1gxaitkk6tejuvrlj2i3hq67j5itdy0qe1sjyb9084m61z6dsvazo4emed6t54tqw00x38cf5lxkq6a9c5jw3zc62csk7q8pg84bfyzd60asrvgx14pi4dnktt58v19ep7cudv1jkpmzdh40pch5cthhtun8ywbiwlfgxu9gs02b9y5vqa2ut6zbvsueu22hbdolz2nqwsog9fz64ejikg1s9mfiwhw24wg2uae8mki2d32g8015clj8netg3664fal3pspnawp5as97zyw4vgxcj2bkj75l4n3j82xwjeaur1xsnswkwyq18d380gmfii41k3gyi0f7t2lld14hpneev89votk4ozkhi2jwxvgzg3stk2hevqf4ibtyp6y1z4hdyudz815ynwbrzxu5ctnfew9ggcew7o7bmpsj2f681y2wa50km6gxg6jkkhvoz8tgpuq6o027p9jjx2z59gudeyp66fq8j65q148htuldgp 00:33:20.804 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:33:20.804 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:33:20.804 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:33:20.804 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:33:20.804 { 00:33:20.804 "subsystems": [ 00:33:20.804 { 00:33:20.804 "subsystem": "bdev", 00:33:20.804 "config": [ 00:33:20.804 { 00:33:20.804 "params": { 00:33:20.804 "trtype": "pcie", 00:33:20.804 "traddr": "0000:00:10.0", 00:33:20.804 "name": "Nvme0" 00:33:20.804 }, 00:33:20.804 "method": "bdev_nvme_attach_controller" 00:33:20.804 }, 00:33:20.804 { 00:33:20.804 "method": "bdev_wait_for_examine" 00:33:20.804 } 00:33:20.804 ] 00:33:20.804 } 00:33:20.804 ] 00:33:20.804 } 00:33:20.804 [2024-07-23 15:27:16.194346] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:20.804 [2024-07-23 15:27:16.194527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126068 ] 00:33:21.063 [2024-07-23 15:27:16.347671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.063 [2024-07-23 15:27:16.392194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.581  Copying: 4096/4096 [B] (average 4000 kBps) 00:33:21.581 00:33:21.581 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:33:21.581 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:33:21.581 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:33:21.581 15:27:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:33:21.581 { 00:33:21.581 "subsystems": [ 00:33:21.581 { 00:33:21.581 "subsystem": "bdev", 00:33:21.581 "config": [ 00:33:21.581 { 00:33:21.581 "params": { 00:33:21.581 "trtype": "pcie", 00:33:21.581 "traddr": "0000:00:10.0", 00:33:21.581 "name": "Nvme0" 00:33:21.581 }, 00:33:21.581 "method": "bdev_nvme_attach_controller" 00:33:21.581 }, 00:33:21.581 { 00:33:21.581 "method": "bdev_wait_for_examine" 00:33:21.581 } 00:33:21.581 ] 00:33:21.581 } 00:33:21.581 ] 00:33:21.581 } 00:33:21.581 [2024-07-23 15:27:16.858121] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
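dd_rw_offset checks that offsets are honoured: the 4096-byte random payload captured above is written one block into the bdev with --seek=1, read back from the same position with --skip=1 --count=1, and the bytes are compared against the original (the harness does the comparison in shell with read -rn4096; cmp stands in for that here). A trimmed-down version of the two calls, again without the --json plumbing shown earlier:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0   # holds the 4096-byte random payload
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    "$DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1               # write at block offset 1
    "$DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1     # read that block back
    cmp "$DUMP0" "$DUMP1"                                   # payloads should be identical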
00:33:21.581 [2024-07-23 15:27:16.858326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126077 ] 00:33:21.581 [2024-07-23 15:27:17.011044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.840 [2024-07-23 15:27:17.058402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.099  Copying: 4096/4096 [B] (average 4000 kBps) 00:33:22.099 00:33:22.099 15:27:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ tua831bj9yaffal1guvyachbqj0zfoowiyc1ycr6lkdl3r6lafsn05w1nv9oqhekbondjk3x9mot38fqdx3qwvglcexn21xq0z2vyu794n7dzq2myn82tmvscb73rv5dry91u9pin24jj7bqxgpo293t0xgzxltcyhrvl5ofqahhg5ip8m8s56icg46yek4cjl2ve3j6yhcrs2iwnpe8wqg8azwyya1pi7w5yaifueryohtz09u56ld193z5vr2rxvin75ykv4nboen8xxpuo569e3r74v4v9cbg8susru6m3741gxvyf12lemwuzol1l6rfclotrk2bgzy86sqjpy7hrow01g8q6izbjkhqt6txbeu3oow6frt02lvtnofdt76iunmgfnplnm3skk2ve95vblqkg4029lra5c1v95c86vpebqgv3ak4riustvspbeukub63gc7f8cv3g9j25yk5yufgb4rw9bwed7rsdcdjfnuzy7pf4aaw2a5b7kprd3xqnr9ub2e7pkavcdm0jzi50a4e1ct3rgtl92fnk4iytz0kubts1rnubpcci0pturz5ct0p4kt7lnxtsalmxbhah4cq7147pcdybli75g1kug9bz1dyimtrlaaa90lw4tcpqx0s2jiwo7zdlrme35ut8ep371jibfzdscf4w31a2gmeua1uwlzxos5zd6q1cx553hup810fxtgdc9n55l2ooe8vl9ol81uu052czeofqhro8qkhsyhdl19pjko7f3sgrfplu5ulz6j183ovfwrbflrq8xqfgpvnqq2nh1sukhtebk92t3i8xocw7s3cgnpepyyxn03zqo1db83l577hjcmf01opemj2atqpohsko5ssw8ddllm82z82wf2nbk0cm8hlro33j05uj1h0p5klh0wrfe71gxlxd6353k6tiatjpawuncae5h56vjw5j4honbn1et8lgmm8o90cbflywhqkai8wa9942598x6oonoiel8083km5ht6v1tkziqzqulj9byjyfsbiysgne7ynbtko3evtdfwvetdzi9u8cfv2qrvxjbrbenzz4189pb1fxhgvw6xequ6no0meaz0a94v01q9mkfiq1vwsrjuwcxonuvn34uue71kzdsih0k6n6eyc30orhep5ijc7z7ya7bush2pwoz9pdt7pd5xng4twm6z5b0jjz8y654k0a1bgbo6ggz8rdub28fjx2w5qu11n9izou7oo4ked607dtydebs7u5cv9bx3uqachcsxt730am6rvrelkinwhg5xpsibpbfvr0q9k229u1rpttfllfpp857zvvk75qaq8c6j8kvee1p18j0kl99bsrnv29zlli48boqehtsj5irh78nwbk7no2wpqx8rjn286ytno0k2slcgb8hgwhib45aj5ia1bugsp26vzisv0fh56az9vgrr3hy2u9oycchv4nbhofgj89tmlfndbvelgix9ybnmm6gv00czeor1fifkxhyf1j1qg2czxjferpr40m4nhhjcwh0ckkm48iwqvmtuqn9qgrq363hlnxexstr0we2duv9vhofpoxk5k2ijke27rovnnu590sg55vivleatwh8k5ppynoh6xslw0c7dv1obe3mg5q08eh5ulkj1jzdw9xe8uuap71jc667k8zuncaxtlfbgbnyuarszcw5586xepftb9frjs0hp2vpo7c2kpob27juaeojbrj887ygmo6mn2garid28wy155atr8xczj4drmlekshmo1d9btkezpkigz803mqocchx2frv8ttpwszjgu9lamlpa60i0lqrlh9rjld49odb0intsnj03wetlqv89xybtzyld1rqt9n8hvlby584ge7145p8kb1ukpjege1q039tlo0xftka7cmzan4iss0zbanrfh34m8jjkpw59li0rzvtgfkrrnhftl1e1mgpav29xeiv4xakpafqpsb6pw41xlu8xmkeb2rji0hi1o124xr79qvdp89l4vqpnnyv817g4wxaspfoi85le8qm3c5tigehyl1ub55y7ko3003g5hp8m5etap8vanpp964ps72nrqr2xdaskfmmviunodprxiqnelakxj9mj1m1wmfig3156nw90i9u3m8kc29duv1zu5odz377j3kly9b03igp044id7v4s1fh3vyrpojbvigzn1exr97auawy1sp4iy89qqjyi0rq93sp8274z1fvgzg94w28ogl7my4s6q3b2daqeuuc3udv0rjhwj89jj2y5w2b84uav58o8yh9fh16f813abkmbwb4dhwopveinxa9dceanujci7o8vxv072yyntzd84my05t12ztsdctkjbyiolutupgv3c370fqr6hxzugl4kasl07vyyz7dc7sr1j9am70ypftiey8ivgx6w841zav85mp1dc5uwg1lcuocffsm77o2940g0j75pfa0mxnza6qj1rygkb5ihc1pj01w3l5oa12rs3i5irawspn8vgmdlq396qb4708jkqj0b6o133j2sixx6cjskx6n44gzdlnu5a9k3kq6yh2f67d39exgg77qmk6m000pob4j68h652699ozte0doe6z0wagyzwz904e0r9t5171c69x3uu9becghjp9r3mjrwuwpd4dgogcyfc6f0me4fg9fqlwilrm597su3x7ahufhcn4rye52zvqcwcz7yzvdcf5rz7fhzfxgdj0w56nm3x
mvrrprrf85d0moq3r1otu2oausuonbp8w9o4yeqw7nfyt41a6qyzuy5pg7u4e2c03ekpo4minac4891m8pi18niu1bdwqatfs02ynzngeyggmyyye5u7uldm5ku6cct45pf1lzn4y5y4znbho5682ritnotvjyc9tyc2983xrovfkx7mgce7l57m416im7dk3399lssc9a2ilg4thpvyhqw0bevuc0q6jmhn79ds59419e0olpoawvdmzwk6sc5jzf4mm0i8rsaomtdb4azkg275fiiuhkglmdgfa3m74ov8nlrfrxzgh99w2ea71bg7cdnr2tstfq2jbb6yt3ldho8upbiluk71182voo9kdciii47m2eux0pyxlulmfsgmus8gjpimg1h4mcirrxn3y04cxica2maqcam9fv7b2f48b1mgd9fjrolnrt4wfh7r1ryvt0d25534k8k5d1s470ejiwlw449yy4jx94b0n9ond6cakbqvfsc4v4cidwp6aupea8gl7tobogti9jvdkjngtzfp0d3kfny7zastwwr99v9l6bkhqq8n0ldct5uxhmzy4l2k6yz412vzk4320ew4p2lfqtj2w0lk78d0hfytgpg2md6s8ygml8pb4yu9pwq1gje0oiy5zaej49d9f12go9lhjutz759o7pyaob2rt3qxeuddyn4ckd2mv82zsor9qgfaihqijfm5w78j4tdfra59qvsicnujbo3wr2f729m5ue4cwyxujehil2b6zfxhkay4ltwqyhg2438c7ezjrhdjx5aumt6nb0pav780v69aipibpt27z5w61aufz1e0lf1gxaitkk6tejuvrlj2i3hq67j5itdy0qe1sjyb9084m61z6dsvazo4emed6t54tqw00x38cf5lxkq6a9c5jw3zc62csk7q8pg84bfyzd60asrvgx14pi4dnktt58v19ep7cudv1jkpmzdh40pch5cthhtun8ywbiwlfgxu9gs02b9y5vqa2ut6zbvsueu22hbdolz2nqwsog9fz64ejikg1s9mfiwhw24wg2uae8mki2d32g8015clj8netg3664fal3pspnawp5as97zyw4vgxcj2bkj75l4n3j82xwjeaur1xsnswkwyq18d380gmfii41k3gyi0f7t2lld14hpneev89votk4ozkhi2jwxvgzg3stk2hevqf4ibtyp6y1z4hdyudz815ynwbrzxu5ctnfew9ggcew7o7bmpsj2f681y2wa50km6gxg6jkkhvoz8tgpuq6o027p9jjx2z59gudeyp66fq8j65q148htuldgp == \t\u\a\8\3\1\b\j\9\y\a\f\f\a\l\1\g\u\v\y\a\c\h\b\q\j\0\z\f\o\o\w\i\y\c\1\y\c\r\6\l\k\d\l\3\r\6\l\a\f\s\n\0\5\w\1\n\v\9\o\q\h\e\k\b\o\n\d\j\k\3\x\9\m\o\t\3\8\f\q\d\x\3\q\w\v\g\l\c\e\x\n\2\1\x\q\0\z\2\v\y\u\7\9\4\n\7\d\z\q\2\m\y\n\8\2\t\m\v\s\c\b\7\3\r\v\5\d\r\y\9\1\u\9\p\i\n\2\4\j\j\7\b\q\x\g\p\o\2\9\3\t\0\x\g\z\x\l\t\c\y\h\r\v\l\5\o\f\q\a\h\h\g\5\i\p\8\m\8\s\5\6\i\c\g\4\6\y\e\k\4\c\j\l\2\v\e\3\j\6\y\h\c\r\s\2\i\w\n\p\e\8\w\q\g\8\a\z\w\y\y\a\1\p\i\7\w\5\y\a\i\f\u\e\r\y\o\h\t\z\0\9\u\5\6\l\d\1\9\3\z\5\v\r\2\r\x\v\i\n\7\5\y\k\v\4\n\b\o\e\n\8\x\x\p\u\o\5\6\9\e\3\r\7\4\v\4\v\9\c\b\g\8\s\u\s\r\u\6\m\3\7\4\1\g\x\v\y\f\1\2\l\e\m\w\u\z\o\l\1\l\6\r\f\c\l\o\t\r\k\2\b\g\z\y\8\6\s\q\j\p\y\7\h\r\o\w\0\1\g\8\q\6\i\z\b\j\k\h\q\t\6\t\x\b\e\u\3\o\o\w\6\f\r\t\0\2\l\v\t\n\o\f\d\t\7\6\i\u\n\m\g\f\n\p\l\n\m\3\s\k\k\2\v\e\9\5\v\b\l\q\k\g\4\0\2\9\l\r\a\5\c\1\v\9\5\c\8\6\v\p\e\b\q\g\v\3\a\k\4\r\i\u\s\t\v\s\p\b\e\u\k\u\b\6\3\g\c\7\f\8\c\v\3\g\9\j\2\5\y\k\5\y\u\f\g\b\4\r\w\9\b\w\e\d\7\r\s\d\c\d\j\f\n\u\z\y\7\p\f\4\a\a\w\2\a\5\b\7\k\p\r\d\3\x\q\n\r\9\u\b\2\e\7\p\k\a\v\c\d\m\0\j\z\i\5\0\a\4\e\1\c\t\3\r\g\t\l\9\2\f\n\k\4\i\y\t\z\0\k\u\b\t\s\1\r\n\u\b\p\c\c\i\0\p\t\u\r\z\5\c\t\0\p\4\k\t\7\l\n\x\t\s\a\l\m\x\b\h\a\h\4\c\q\7\1\4\7\p\c\d\y\b\l\i\7\5\g\1\k\u\g\9\b\z\1\d\y\i\m\t\r\l\a\a\a\9\0\l\w\4\t\c\p\q\x\0\s\2\j\i\w\o\7\z\d\l\r\m\e\3\5\u\t\8\e\p\3\7\1\j\i\b\f\z\d\s\c\f\4\w\3\1\a\2\g\m\e\u\a\1\u\w\l\z\x\o\s\5\z\d\6\q\1\c\x\5\5\3\h\u\p\8\1\0\f\x\t\g\d\c\9\n\5\5\l\2\o\o\e\8\v\l\9\o\l\8\1\u\u\0\5\2\c\z\e\o\f\q\h\r\o\8\q\k\h\s\y\h\d\l\1\9\p\j\k\o\7\f\3\s\g\r\f\p\l\u\5\u\l\z\6\j\1\8\3\o\v\f\w\r\b\f\l\r\q\8\x\q\f\g\p\v\n\q\q\2\n\h\1\s\u\k\h\t\e\b\k\9\2\t\3\i\8\x\o\c\w\7\s\3\c\g\n\p\e\p\y\y\x\n\0\3\z\q\o\1\d\b\8\3\l\5\7\7\h\j\c\m\f\0\1\o\p\e\m\j\2\a\t\q\p\o\h\s\k\o\5\s\s\w\8\d\d\l\l\m\8\2\z\8\2\w\f\2\n\b\k\0\c\m\8\h\l\r\o\3\3\j\0\5\u\j\1\h\0\p\5\k\l\h\0\w\r\f\e\7\1\g\x\l\x\d\6\3\5\3\k\6\t\i\a\t\j\p\a\w\u\n\c\a\e\5\h\5\6\v\j\w\5\j\4\h\o\n\b\n\1\e\t\8\l\g\m\m\8\o\9\0\c\b\f\l\y\w\h\q\k\a\i\8\w\a\9\9\4\2\5\9\8\x\6\o\o\n\o\i\e\l\8\0\8\3\k\m\5\h\t\6\v\1\t\k\z\i\q\z\q\u\l\j\9\b\y\j\y\f\s\b\i\y\s\g\n\e\7\y\n\b\t\k\o\3\e\v\t\d\f\w\v\e\t\d\z\i\9\u\8\c\f\v\2\q\r\v\x\j\b\r\b\e\n\z\z\4\1\8\9\p\b\1\f\x\h\g\v\w\6\x\e\q\u\6\n\o\0\m\e\a\
z\0\a\9\4\v\0\1\q\9\m\k\f\i\q\1\v\w\s\r\j\u\w\c\x\o\n\u\v\n\3\4\u\u\e\7\1\k\z\d\s\i\h\0\k\6\n\6\e\y\c\3\0\o\r\h\e\p\5\i\j\c\7\z\7\y\a\7\b\u\s\h\2\p\w\o\z\9\p\d\t\7\p\d\5\x\n\g\4\t\w\m\6\z\5\b\0\j\j\z\8\y\6\5\4\k\0\a\1\b\g\b\o\6\g\g\z\8\r\d\u\b\2\8\f\j\x\2\w\5\q\u\1\1\n\9\i\z\o\u\7\o\o\4\k\e\d\6\0\7\d\t\y\d\e\b\s\7\u\5\c\v\9\b\x\3\u\q\a\c\h\c\s\x\t\7\3\0\a\m\6\r\v\r\e\l\k\i\n\w\h\g\5\x\p\s\i\b\p\b\f\v\r\0\q\9\k\2\2\9\u\1\r\p\t\t\f\l\l\f\p\p\8\5\7\z\v\v\k\7\5\q\a\q\8\c\6\j\8\k\v\e\e\1\p\1\8\j\0\k\l\9\9\b\s\r\n\v\2\9\z\l\l\i\4\8\b\o\q\e\h\t\s\j\5\i\r\h\7\8\n\w\b\k\7\n\o\2\w\p\q\x\8\r\j\n\2\8\6\y\t\n\o\0\k\2\s\l\c\g\b\8\h\g\w\h\i\b\4\5\a\j\5\i\a\1\b\u\g\s\p\2\6\v\z\i\s\v\0\f\h\5\6\a\z\9\v\g\r\r\3\h\y\2\u\9\o\y\c\c\h\v\4\n\b\h\o\f\g\j\8\9\t\m\l\f\n\d\b\v\e\l\g\i\x\9\y\b\n\m\m\6\g\v\0\0\c\z\e\o\r\1\f\i\f\k\x\h\y\f\1\j\1\q\g\2\c\z\x\j\f\e\r\p\r\4\0\m\4\n\h\h\j\c\w\h\0\c\k\k\m\4\8\i\w\q\v\m\t\u\q\n\9\q\g\r\q\3\6\3\h\l\n\x\e\x\s\t\r\0\w\e\2\d\u\v\9\v\h\o\f\p\o\x\k\5\k\2\i\j\k\e\2\7\r\o\v\n\n\u\5\9\0\s\g\5\5\v\i\v\l\e\a\t\w\h\8\k\5\p\p\y\n\o\h\6\x\s\l\w\0\c\7\d\v\1\o\b\e\3\m\g\5\q\0\8\e\h\5\u\l\k\j\1\j\z\d\w\9\x\e\8\u\u\a\p\7\1\j\c\6\6\7\k\8\z\u\n\c\a\x\t\l\f\b\g\b\n\y\u\a\r\s\z\c\w\5\5\8\6\x\e\p\f\t\b\9\f\r\j\s\0\h\p\2\v\p\o\7\c\2\k\p\o\b\2\7\j\u\a\e\o\j\b\r\j\8\8\7\y\g\m\o\6\m\n\2\g\a\r\i\d\2\8\w\y\1\5\5\a\t\r\8\x\c\z\j\4\d\r\m\l\e\k\s\h\m\o\1\d\9\b\t\k\e\z\p\k\i\g\z\8\0\3\m\q\o\c\c\h\x\2\f\r\v\8\t\t\p\w\s\z\j\g\u\9\l\a\m\l\p\a\6\0\i\0\l\q\r\l\h\9\r\j\l\d\4\9\o\d\b\0\i\n\t\s\n\j\0\3\w\e\t\l\q\v\8\9\x\y\b\t\z\y\l\d\1\r\q\t\9\n\8\h\v\l\b\y\5\8\4\g\e\7\1\4\5\p\8\k\b\1\u\k\p\j\e\g\e\1\q\0\3\9\t\l\o\0\x\f\t\k\a\7\c\m\z\a\n\4\i\s\s\0\z\b\a\n\r\f\h\3\4\m\8\j\j\k\p\w\5\9\l\i\0\r\z\v\t\g\f\k\r\r\n\h\f\t\l\1\e\1\m\g\p\a\v\2\9\x\e\i\v\4\x\a\k\p\a\f\q\p\s\b\6\p\w\4\1\x\l\u\8\x\m\k\e\b\2\r\j\i\0\h\i\1\o\1\2\4\x\r\7\9\q\v\d\p\8\9\l\4\v\q\p\n\n\y\v\8\1\7\g\4\w\x\a\s\p\f\o\i\8\5\l\e\8\q\m\3\c\5\t\i\g\e\h\y\l\1\u\b\5\5\y\7\k\o\3\0\0\3\g\5\h\p\8\m\5\e\t\a\p\8\v\a\n\p\p\9\6\4\p\s\7\2\n\r\q\r\2\x\d\a\s\k\f\m\m\v\i\u\n\o\d\p\r\x\i\q\n\e\l\a\k\x\j\9\m\j\1\m\1\w\m\f\i\g\3\1\5\6\n\w\9\0\i\9\u\3\m\8\k\c\2\9\d\u\v\1\z\u\5\o\d\z\3\7\7\j\3\k\l\y\9\b\0\3\i\g\p\0\4\4\i\d\7\v\4\s\1\f\h\3\v\y\r\p\o\j\b\v\i\g\z\n\1\e\x\r\9\7\a\u\a\w\y\1\s\p\4\i\y\8\9\q\q\j\y\i\0\r\q\9\3\s\p\8\2\7\4\z\1\f\v\g\z\g\9\4\w\2\8\o\g\l\7\m\y\4\s\6\q\3\b\2\d\a\q\e\u\u\c\3\u\d\v\0\r\j\h\w\j\8\9\j\j\2\y\5\w\2\b\8\4\u\a\v\5\8\o\8\y\h\9\f\h\1\6\f\8\1\3\a\b\k\m\b\w\b\4\d\h\w\o\p\v\e\i\n\x\a\9\d\c\e\a\n\u\j\c\i\7\o\8\v\x\v\0\7\2\y\y\n\t\z\d\8\4\m\y\0\5\t\1\2\z\t\s\d\c\t\k\j\b\y\i\o\l\u\t\u\p\g\v\3\c\3\7\0\f\q\r\6\h\x\z\u\g\l\4\k\a\s\l\0\7\v\y\y\z\7\d\c\7\s\r\1\j\9\a\m\7\0\y\p\f\t\i\e\y\8\i\v\g\x\6\w\8\4\1\z\a\v\8\5\m\p\1\d\c\5\u\w\g\1\l\c\u\o\c\f\f\s\m\7\7\o\2\9\4\0\g\0\j\7\5\p\f\a\0\m\x\n\z\a\6\q\j\1\r\y\g\k\b\5\i\h\c\1\p\j\0\1\w\3\l\5\o\a\1\2\r\s\3\i\5\i\r\a\w\s\p\n\8\v\g\m\d\l\q\3\9\6\q\b\4\7\0\8\j\k\q\j\0\b\6\o\1\3\3\j\2\s\i\x\x\6\c\j\s\k\x\6\n\4\4\g\z\d\l\n\u\5\a\9\k\3\k\q\6\y\h\2\f\6\7\d\3\9\e\x\g\g\7\7\q\m\k\6\m\0\0\0\p\o\b\4\j\6\8\h\6\5\2\6\9\9\o\z\t\e\0\d\o\e\6\z\0\w\a\g\y\z\w\z\9\0\4\e\0\r\9\t\5\1\7\1\c\6\9\x\3\u\u\9\b\e\c\g\h\j\p\9\r\3\m\j\r\w\u\w\p\d\4\d\g\o\g\c\y\f\c\6\f\0\m\e\4\f\g\9\f\q\l\w\i\l\r\m\5\9\7\s\u\3\x\7\a\h\u\f\h\c\n\4\r\y\e\5\2\z\v\q\c\w\c\z\7\y\z\v\d\c\f\5\r\z\7\f\h\z\f\x\g\d\j\0\w\5\6\n\m\3\x\m\v\r\r\p\r\r\f\8\5\d\0\m\o\q\3\r\1\o\t\u\2\o\a\u\s\u\o\n\b\p\8\w\9\o\4\y\e\q\w\7\n\f\y\t\4\1\a\6\q\y\z\u\y\5\p\g\7\u\4\e\2\c\0\3\e\k\p\o\4\m\i\n\a\c\4\8\9\1\m\8\p\i\1\8\n\i\u\1\b\d\w\q\a\t\f\s\0\2\y\n\z\n\g\e\y\g\g\m\y\y\y\e\5\u\7\u\l\d\m\5\k\u
\6\c\c\t\4\5\p\f\1\l\z\n\4\y\5\y\4\z\n\b\h\o\5\6\8\2\r\i\t\n\o\t\v\j\y\c\9\t\y\c\2\9\8\3\x\r\o\v\f\k\x\7\m\g\c\e\7\l\5\7\m\4\1\6\i\m\7\d\k\3\3\9\9\l\s\s\c\9\a\2\i\l\g\4\t\h\p\v\y\h\q\w\0\b\e\v\u\c\0\q\6\j\m\h\n\7\9\d\s\5\9\4\1\9\e\0\o\l\p\o\a\w\v\d\m\z\w\k\6\s\c\5\j\z\f\4\m\m\0\i\8\r\s\a\o\m\t\d\b\4\a\z\k\g\2\7\5\f\i\i\u\h\k\g\l\m\d\g\f\a\3\m\7\4\o\v\8\n\l\r\f\r\x\z\g\h\9\9\w\2\e\a\7\1\b\g\7\c\d\n\r\2\t\s\t\f\q\2\j\b\b\6\y\t\3\l\d\h\o\8\u\p\b\i\l\u\k\7\1\1\8\2\v\o\o\9\k\d\c\i\i\i\4\7\m\2\e\u\x\0\p\y\x\l\u\l\m\f\s\g\m\u\s\8\g\j\p\i\m\g\1\h\4\m\c\i\r\r\x\n\3\y\0\4\c\x\i\c\a\2\m\a\q\c\a\m\9\f\v\7\b\2\f\4\8\b\1\m\g\d\9\f\j\r\o\l\n\r\t\4\w\f\h\7\r\1\r\y\v\t\0\d\2\5\5\3\4\k\8\k\5\d\1\s\4\7\0\e\j\i\w\l\w\4\4\9\y\y\4\j\x\9\4\b\0\n\9\o\n\d\6\c\a\k\b\q\v\f\s\c\4\v\4\c\i\d\w\p\6\a\u\p\e\a\8\g\l\7\t\o\b\o\g\t\i\9\j\v\d\k\j\n\g\t\z\f\p\0\d\3\k\f\n\y\7\z\a\s\t\w\w\r\9\9\v\9\l\6\b\k\h\q\q\8\n\0\l\d\c\t\5\u\x\h\m\z\y\4\l\2\k\6\y\z\4\1\2\v\z\k\4\3\2\0\e\w\4\p\2\l\f\q\t\j\2\w\0\l\k\7\8\d\0\h\f\y\t\g\p\g\2\m\d\6\s\8\y\g\m\l\8\p\b\4\y\u\9\p\w\q\1\g\j\e\0\o\i\y\5\z\a\e\j\4\9\d\9\f\1\2\g\o\9\l\h\j\u\t\z\7\5\9\o\7\p\y\a\o\b\2\r\t\3\q\x\e\u\d\d\y\n\4\c\k\d\2\m\v\8\2\z\s\o\r\9\q\g\f\a\i\h\q\i\j\f\m\5\w\7\8\j\4\t\d\f\r\a\5\9\q\v\s\i\c\n\u\j\b\o\3\w\r\2\f\7\2\9\m\5\u\e\4\c\w\y\x\u\j\e\h\i\l\2\b\6\z\f\x\h\k\a\y\4\l\t\w\q\y\h\g\2\4\3\8\c\7\e\z\j\r\h\d\j\x\5\a\u\m\t\6\n\b\0\p\a\v\7\8\0\v\6\9\a\i\p\i\b\p\t\2\7\z\5\w\6\1\a\u\f\z\1\e\0\l\f\1\g\x\a\i\t\k\k\6\t\e\j\u\v\r\l\j\2\i\3\h\q\6\7\j\5\i\t\d\y\0\q\e\1\s\j\y\b\9\0\8\4\m\6\1\z\6\d\s\v\a\z\o\4\e\m\e\d\6\t\5\4\t\q\w\0\0\x\3\8\c\f\5\l\x\k\q\6\a\9\c\5\j\w\3\z\c\6\2\c\s\k\7\q\8\p\g\8\4\b\f\y\z\d\6\0\a\s\r\v\g\x\1\4\p\i\4\d\n\k\t\t\5\8\v\1\9\e\p\7\c\u\d\v\1\j\k\p\m\z\d\h\4\0\p\c\h\5\c\t\h\h\t\u\n\8\y\w\b\i\w\l\f\g\x\u\9\g\s\0\2\b\9\y\5\v\q\a\2\u\t\6\z\b\v\s\u\e\u\2\2\h\b\d\o\l\z\2\n\q\w\s\o\g\9\f\z\6\4\e\j\i\k\g\1\s\9\m\f\i\w\h\w\2\4\w\g\2\u\a\e\8\m\k\i\2\d\3\2\g\8\0\1\5\c\l\j\8\n\e\t\g\3\6\6\4\f\a\l\3\p\s\p\n\a\w\p\5\a\s\9\7\z\y\w\4\v\g\x\c\j\2\b\k\j\7\5\l\4\n\3\j\8\2\x\w\j\e\a\u\r\1\x\s\n\s\w\k\w\y\q\1\8\d\3\8\0\g\m\f\i\i\4\1\k\3\g\y\i\0\f\7\t\2\l\l\d\1\4\h\p\n\e\e\v\8\9\v\o\t\k\4\o\z\k\h\i\2\j\w\x\v\g\z\g\3\s\t\k\2\h\e\v\q\f\4\i\b\t\y\p\6\y\1\z\4\h\d\y\u\d\z\8\1\5\y\n\w\b\r\z\x\u\5\c\t\n\f\e\w\9\g\g\c\e\w\7\o\7\b\m\p\s\j\2\f\6\8\1\y\2\w\a\5\0\k\m\6\g\x\g\6\j\k\k\h\v\o\z\8\t\g\p\u\q\6\o\0\2\7\p\9\j\j\x\2\z\5\9\g\u\d\e\y\p\6\6\f\q\8\j\6\5\q\1\4\8\h\t\u\l\d\g\p ]] 00:33:22.100 00:33:22.100 real 0m1.386s 00:33:22.100 user 0m0.763s 00:33:22.100 sys 0m0.429s 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:33:22.100 ************************************ 00:33:22.100 END TEST dd_rw_offset 00:33:22.100 ************************************ 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 
-- # local count=1 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:33:22.100 15:27:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:33:22.365 { 00:33:22.365 "subsystems": [ 00:33:22.365 { 00:33:22.365 "subsystem": "bdev", 00:33:22.365 "config": [ 00:33:22.365 { 00:33:22.365 "params": { 00:33:22.365 "trtype": "pcie", 00:33:22.365 "traddr": "0000:00:10.0", 00:33:22.365 "name": "Nvme0" 00:33:22.365 }, 00:33:22.365 "method": "bdev_nvme_attach_controller" 00:33:22.365 }, 00:33:22.365 { 00:33:22.365 "method": "bdev_wait_for_examine" 00:33:22.365 } 00:33:22.365 ] 00:33:22.365 } 00:33:22.365 ] 00:33:22.365 } 00:33:22.365 [2024-07-23 15:27:17.588174] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:22.365 [2024-07-23 15:27:17.588370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126110 ] 00:33:22.365 [2024-07-23 15:27:17.737503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.365 [2024-07-23 15:27:17.782853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.882  Copying: 1024/1024 [kB] (average 1000 MBps) 00:33:22.882 00:33:22.882 15:27:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:22.882 00:33:22.882 real 0m18.449s 00:33:22.882 user 0m10.793s 00:33:22.882 sys 0m5.562s 00:33:22.882 15:27:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:22.882 15:27:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:33:22.882 ************************************ 00:33:22.882 END TEST spdk_dd_basic_rw 00:33:22.882 ************************************ 00:33:22.882 15:27:18 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:33:22.882 15:27:18 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:33:22.882 15:27:18 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:22.882 15:27:18 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:22.882 15:27:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:33:22.882 ************************************ 00:33:22.882 START TEST spdk_dd_posix 00:33:22.882 ************************************ 00:33:22.882 15:27:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:33:23.141 * Looking for test storage... 
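The basic_rw offset test and the clear_nvme cleanup above drive spdk_dd entirely through a JSON bdev config passed on a file descriptor (--json /dev/fd/62). A minimal sketch of that invocation pattern, using the PCIe address and flags from this run; the CONF variable, the short spdk_dd name and the process substitution are illustrative choices of mine, not what the test script literally does:

    # bdev config as printed by gen_conf: attach the NVMe controller at 0000:00:10.0 as "Nvme0"
    CONF='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},
      {"method":"bdev_wait_for_examine"}]}]}'

    # write dd.dump0 to Nvme0n1 at block offset 1, then read the same block back into dd.dump1
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$CONF")
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(printf '%s' "$CONF")

    # cleanup: overwrite the first 1 MiB of the namespace with zeroes, as clear_nvme does above
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$CONF")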
00:33:23.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:33:23.141 15:27:18 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:23.141 15:27:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.141 15:27:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.141 15:27:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.141 15:27:18 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:23.141 15:27:18 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:23.141 15:27:18 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:23.142 15:27:18 
spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # export PATH 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:33:23.142 * First test run, liburing in use 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:33:23.142 ************************************ 00:33:23.142 START TEST dd_flag_append 00:33:23.142 ************************************ 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=3del3qnxiwnvd2rvxav8tcu1hm30xe96 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=ik3bd8y2e96m9fclpezm5vwdh9mhzgli 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 3del3qnxiwnvd2rvxav8tcu1hm30xe96 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- 
dd/posix.sh@23 -- # printf %s ik3bd8y2e96m9fclpezm5vwdh9mhzgli 00:33:23.142 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:33:23.142 [2024-07-23 15:27:18.431485] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:23.142 [2024-07-23 15:27:18.431686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126170 ] 00:33:23.401 [2024-07-23 15:27:18.581488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.401 [2024-07-23 15:27:18.629160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.659  Copying: 32/32 [B] (average 31 kBps) 00:33:23.659 00:33:23.659 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ ik3bd8y2e96m9fclpezm5vwdh9mhzgli3del3qnxiwnvd2rvxav8tcu1hm30xe96 == \i\k\3\b\d\8\y\2\e\9\6\m\9\f\c\l\p\e\z\m\5\v\w\d\h\9\m\h\z\g\l\i\3\d\e\l\3\q\n\x\i\w\n\v\d\2\r\v\x\a\v\8\t\c\u\1\h\m\3\0\x\e\9\6 ]] 00:33:23.659 00:33:23.659 real 0m0.578s 00:33:23.659 user 0m0.273s 00:33:23.659 sys 0m0.188s 00:33:23.659 ************************************ 00:33:23.659 END TEST dd_flag_append 00:33:23.659 ************************************ 00:33:23.659 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:23.659 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:33:23.659 15:27:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:33:23.659 15:27:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:33:23.659 15:27:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:33:23.660 ************************************ 00:33:23.660 START TEST dd_flag_directory 00:33:23.660 ************************************ 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:23.660 15:27:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:23.660 [2024-07-23 15:27:19.064430] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:23.660 [2024-07-23 15:27:19.064634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126197 ] 00:33:23.918 [2024-07-23 15:27:19.213706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.918 [2024-07-23 15:27:19.261793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.918 [2024-07-23 15:27:19.327651] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:33:23.918 [2024-07-23 15:27:19.327725] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:33:23.918 [2024-07-23 15:27:19.327747] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:24.177 [2024-07-23 15:27:19.433947] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:33:24.177 15:27:19 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:24.177 15:27:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:33:24.437 [2024-07-23 15:27:19.633437] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:24.437 [2024-07-23 15:27:19.633840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126208 ] 00:33:24.437 [2024-07-23 15:27:19.782761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.437 [2024-07-23 15:27:19.827527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.695 [2024-07-23 15:27:19.892788] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:33:24.695 [2024-07-23 15:27:19.893113] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:33:24.695 [2024-07-23 15:27:19.893144] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:24.695 [2024-07-23 15:27:19.999200] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:33:24.695 ************************************ 00:33:24.695 END TEST dd_flag_directory 00:33:24.695 ************************************ 00:33:24.695 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:33:24.695 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:24.695 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:33:24.695 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:33:24.695 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:33:24.695 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:24.695 00:33:24.695 real 0m1.124s 00:33:24.695 user 0m0.563s 00:33:24.695 sys 0m0.360s 00:33:24.695 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- 
# xtrace_disable 00:33:24.695 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:33:24.954 ************************************ 00:33:24.954 START TEST dd_flag_nofollow 00:33:24.954 ************************************ 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:24.954 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 
--iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:24.954 [2024-07-23 15:27:20.262711] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:24.954 [2024-07-23 15:27:20.262986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126237 ] 00:33:25.213 [2024-07-23 15:27:20.415322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.213 [2024-07-23 15:27:20.460055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.213 [2024-07-23 15:27:20.525451] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:33:25.213 [2024-07-23 15:27:20.525525] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:33:25.213 [2024-07-23 15:27:20.525549] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:25.213 [2024-07-23 15:27:20.631326] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:25.472 15:27:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:33:25.472 [2024-07-23 15:27:20.831876] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:25.472 [2024-07-23 15:27:20.832110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126248 ] 00:33:25.732 [2024-07-23 15:27:20.983854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.732 [2024-07-23 15:27:21.031556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.732 [2024-07-23 15:27:21.096968] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:33:25.732 [2024-07-23 15:27:21.097051] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:33:25.732 [2024-07-23 15:27:21.097084] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:25.990 [2024-07-23 15:27:21.202961] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:33:25.990 15:27:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:33:25.990 15:27:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:25.990 15:27:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:33:25.990 15:27:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:33:25.990 15:27:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:33:25.990 15:27:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:25.990 15:27:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:33:25.990 15:27:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:33:25.990 15:27:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:33:25.990 15:27:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:25.990 [2024-07-23 15:27:21.410912] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
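The append, directory and nofollow cases above all follow the same pattern: prepare a dump file or symlink, run spdk_dd with the flag under test, and assert on the result. A condensed sketch of the three checks, with the full repo paths shortened and the failure handling reduced to an echo (both simplifications are mine):

    # append: a second 32-byte string written with --oflag=append must land after the first
    spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append

    # directory: a regular file opened with the directory flag must fail ("Not a directory")
    spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0 && echo "unexpected success"
    spdk_dd --if=dd.dump0 --of=dd.dump0 --oflag=directory && echo "unexpected success"

    # nofollow: symlinked dump files must be rejected ("Too many levels of symbolic links")
    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link
    spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1 && echo "unexpected success"
    spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow && echo "unexpected success"
    spdk_dd --if=dd.dump0.link --of=dd.dump1   # without nofollow the copy through the link succeeds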
00:33:25.991 [2024-07-23 15:27:21.411107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126261 ] 00:33:26.249 [2024-07-23 15:27:21.567677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.249 [2024-07-23 15:27:21.620875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.508  Copying: 512/512 [B] (average 500 kBps) 00:33:26.508 00:33:26.767 15:27:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ zv2x1opyn2my0g3q3qih18ryfcbzmjw74s788alcvubas5lq8eriu89id9sa0rq0s5f2oilvxdidibq0tdgewmam8z5x8bmvhy86m8g5cynny8vzo6noqk6yxhmatt25skw6zoqdaxcz70hnc4bhw6nnsq4igetgi7pyxteboobk0kk0of59hsb9dm1i043gfkbmp3nn57kc0gwwaqusg60ilkzjp6x2hk83chuupzhuzs0t629i0450hkkcq7x26r6pal6q0klonk8o1glaabw06i6ezfixz1x6bvt4i8hbwzd6jbuv35ir5y0puvzubosvxfcr7910gxrddl8cacp801mjopoq4xlndcr34c431s1eufbxzuf7l1vfufen0fl44x54imc6gnanziaix6rmvwn9551om1dqedwn3tkuay2t9rylma00q31va484eczspn08a0eg0lebdl9p98l5or1gd72l42qodnb6iqs1avgcjbn2k6whslmhgqi6 == \z\v\2\x\1\o\p\y\n\2\m\y\0\g\3\q\3\q\i\h\1\8\r\y\f\c\b\z\m\j\w\7\4\s\7\8\8\a\l\c\v\u\b\a\s\5\l\q\8\e\r\i\u\8\9\i\d\9\s\a\0\r\q\0\s\5\f\2\o\i\l\v\x\d\i\d\i\b\q\0\t\d\g\e\w\m\a\m\8\z\5\x\8\b\m\v\h\y\8\6\m\8\g\5\c\y\n\n\y\8\v\z\o\6\n\o\q\k\6\y\x\h\m\a\t\t\2\5\s\k\w\6\z\o\q\d\a\x\c\z\7\0\h\n\c\4\b\h\w\6\n\n\s\q\4\i\g\e\t\g\i\7\p\y\x\t\e\b\o\o\b\k\0\k\k\0\o\f\5\9\h\s\b\9\d\m\1\i\0\4\3\g\f\k\b\m\p\3\n\n\5\7\k\c\0\g\w\w\a\q\u\s\g\6\0\i\l\k\z\j\p\6\x\2\h\k\8\3\c\h\u\u\p\z\h\u\z\s\0\t\6\2\9\i\0\4\5\0\h\k\k\c\q\7\x\2\6\r\6\p\a\l\6\q\0\k\l\o\n\k\8\o\1\g\l\a\a\b\w\0\6\i\6\e\z\f\i\x\z\1\x\6\b\v\t\4\i\8\h\b\w\z\d\6\j\b\u\v\3\5\i\r\5\y\0\p\u\v\z\u\b\o\s\v\x\f\c\r\7\9\1\0\g\x\r\d\d\l\8\c\a\c\p\8\0\1\m\j\o\p\o\q\4\x\l\n\d\c\r\3\4\c\4\3\1\s\1\e\u\f\b\x\z\u\f\7\l\1\v\f\u\f\e\n\0\f\l\4\4\x\5\4\i\m\c\6\g\n\a\n\z\i\a\i\x\6\r\m\v\w\n\9\5\5\1\o\m\1\d\q\e\d\w\n\3\t\k\u\a\y\2\t\9\r\y\l\m\a\0\0\q\3\1\v\a\4\8\4\e\c\z\s\p\n\0\8\a\0\e\g\0\l\e\b\d\l\9\p\9\8\l\5\o\r\1\g\d\7\2\l\4\2\q\o\d\n\b\6\i\q\s\1\a\v\g\c\j\b\n\2\k\6\w\h\s\l\m\h\g\q\i\6 ]] 00:33:26.767 00:33:26.767 real 0m1.775s 00:33:26.767 user 0m0.842s 00:33:26.767 sys 0m0.614s 00:33:26.767 15:27:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:26.767 15:27:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:33:26.767 ************************************ 00:33:26.767 END TEST dd_flag_nofollow 00:33:26.767 ************************************ 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:33:26.767 ************************************ 00:33:26.767 START TEST dd_flag_noatime 00:33:26.767 ************************************ 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix.dd_flag_noatime 
-- dd/posix.sh@54 -- # local atime_of 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721748441 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721748441 00:33:26.767 15:27:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:33:27.701 15:27:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:27.701 [2024-07-23 15:27:23.089147] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:27.701 [2024-07-23 15:27:23.090219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126298 ] 00:33:27.960 [2024-07-23 15:27:23.255328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.960 [2024-07-23 15:27:23.309682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.218  Copying: 512/512 [B] (average 500 kBps) 00:33:28.218 00:33:28.218 15:27:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:28.218 15:27:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721748441 )) 00:33:28.218 15:27:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:28.477 15:27:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721748441 )) 00:33:28.477 15:27:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:28.477 [2024-07-23 15:27:23.722104] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
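The noatime check above brackets the copy with stat --printf=%X on the source file; a rough sketch of that sequence (the epoch value 1721748441 is simply what this run captured, and the error message is a placeholder of mine):

    atime_before=$(stat --printf=%X dd.dump0)    # 1721748441 in this run
    sleep 1
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    atime_after=$(stat --printf=%X dd.dump0)
    # with --iflag=noatime the source access time must not move
    [ "$atime_before" -eq "$atime_after" ] || echo "atime changed unexpectedly"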
00:33:28.477 [2024-07-23 15:27:23.722319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126311 ] 00:33:28.477 [2024-07-23 15:27:23.874965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.736 [2024-07-23 15:27:23.922160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.996  Copying: 512/512 [B] (average 500 kBps) 00:33:28.996 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721748443 )) 00:33:28.996 00:33:28.996 real 0m2.232s 00:33:28.996 user 0m0.573s 00:33:28.996 sys 0m0.424s 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:33:28.996 ************************************ 00:33:28.996 END TEST dd_flag_noatime 00:33:28.996 ************************************ 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:33:28.996 ************************************ 00:33:28.996 START TEST dd_flags_misc 00:33:28.996 ************************************ 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:28.996 15:27:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:33:28.996 [2024-07-23 15:27:24.360600] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
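dd_flags_misc pairs each read flag with each write flag over the same 512-byte dump file; a compact sketch of the loop the trace is entering here, using the arrays exactly as declared above (the inner content comparison is summarized as a comment):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)

    for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        # the test then re-reads dd.dump1 and compares it against the generated 512 bytes
      done
    done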
00:33:28.996 [2024-07-23 15:27:24.360744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126338 ] 00:33:29.256 [2024-07-23 15:27:24.495489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.256 [2024-07-23 15:27:24.544077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:29.515  Copying: 512/512 [B] (average 500 kBps) 00:33:29.515 00:33:29.515 15:27:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qv1u1jzg643a4g2ft0ska14pyrd8t5d71alwpv6gpntzl149z0flqs0qfqaipb7w72etdk01dinbiupvm3p5bkzrhnyyd16mccjh444nh4fplmzyagwo5oot9dgr87zsnt275xfpnrs7lapa1545jshhylj0wbeahtdnb8mw9wqkyer95v5ecx8ugynbs3vpjhux65sr45qg3ztoj7fcbe1wp2oysojbbbmwna1x08849uj3a85z1ayzpg4m68l0jtypsqc6o9kmuhjfvs85rguftwhhyqflwi7d4whkpykhzd4ua5g73gmu9zfo5wfftzl83zb5lxjo7d0xwnv7t3pk82lpld1poqojuq1bses3frob0e9n2z5d7o2h59agcl6y6hqv5fn3vug2kxcrkrdwrmfsl0ufwcmd88lw7504cozyt8aq2vdhee20xlrje0u052znyrbb2ctx10c4c75anjjzbadnqos4zei5d6pk2kdjlnywf99x5a3v3ju4 == \q\v\1\u\1\j\z\g\6\4\3\a\4\g\2\f\t\0\s\k\a\1\4\p\y\r\d\8\t\5\d\7\1\a\l\w\p\v\6\g\p\n\t\z\l\1\4\9\z\0\f\l\q\s\0\q\f\q\a\i\p\b\7\w\7\2\e\t\d\k\0\1\d\i\n\b\i\u\p\v\m\3\p\5\b\k\z\r\h\n\y\y\d\1\6\m\c\c\j\h\4\4\4\n\h\4\f\p\l\m\z\y\a\g\w\o\5\o\o\t\9\d\g\r\8\7\z\s\n\t\2\7\5\x\f\p\n\r\s\7\l\a\p\a\1\5\4\5\j\s\h\h\y\l\j\0\w\b\e\a\h\t\d\n\b\8\m\w\9\w\q\k\y\e\r\9\5\v\5\e\c\x\8\u\g\y\n\b\s\3\v\p\j\h\u\x\6\5\s\r\4\5\q\g\3\z\t\o\j\7\f\c\b\e\1\w\p\2\o\y\s\o\j\b\b\b\m\w\n\a\1\x\0\8\8\4\9\u\j\3\a\8\5\z\1\a\y\z\p\g\4\m\6\8\l\0\j\t\y\p\s\q\c\6\o\9\k\m\u\h\j\f\v\s\8\5\r\g\u\f\t\w\h\h\y\q\f\l\w\i\7\d\4\w\h\k\p\y\k\h\z\d\4\u\a\5\g\7\3\g\m\u\9\z\f\o\5\w\f\f\t\z\l\8\3\z\b\5\l\x\j\o\7\d\0\x\w\n\v\7\t\3\p\k\8\2\l\p\l\d\1\p\o\q\o\j\u\q\1\b\s\e\s\3\f\r\o\b\0\e\9\n\2\z\5\d\7\o\2\h\5\9\a\g\c\l\6\y\6\h\q\v\5\f\n\3\v\u\g\2\k\x\c\r\k\r\d\w\r\m\f\s\l\0\u\f\w\c\m\d\8\8\l\w\7\5\0\4\c\o\z\y\t\8\a\q\2\v\d\h\e\e\2\0\x\l\r\j\e\0\u\0\5\2\z\n\y\r\b\b\2\c\t\x\1\0\c\4\c\7\5\a\n\j\j\z\b\a\d\n\q\o\s\4\z\e\i\5\d\6\p\k\2\k\d\j\l\n\y\w\f\9\9\x\5\a\3\v\3\j\u\4 ]] 00:33:29.515 15:27:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:29.515 15:27:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:33:29.515 [2024-07-23 15:27:24.930759] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:29.515 [2024-07-23 15:27:24.930967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126347 ] 00:33:29.774 [2024-07-23 15:27:25.076755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.774 [2024-07-23 15:27:25.123546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.033  Copying: 512/512 [B] (average 500 kBps) 00:33:30.033 00:33:30.033 15:27:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qv1u1jzg643a4g2ft0ska14pyrd8t5d71alwpv6gpntzl149z0flqs0qfqaipb7w72etdk01dinbiupvm3p5bkzrhnyyd16mccjh444nh4fplmzyagwo5oot9dgr87zsnt275xfpnrs7lapa1545jshhylj0wbeahtdnb8mw9wqkyer95v5ecx8ugynbs3vpjhux65sr45qg3ztoj7fcbe1wp2oysojbbbmwna1x08849uj3a85z1ayzpg4m68l0jtypsqc6o9kmuhjfvs85rguftwhhyqflwi7d4whkpykhzd4ua5g73gmu9zfo5wfftzl83zb5lxjo7d0xwnv7t3pk82lpld1poqojuq1bses3frob0e9n2z5d7o2h59agcl6y6hqv5fn3vug2kxcrkrdwrmfsl0ufwcmd88lw7504cozyt8aq2vdhee20xlrje0u052znyrbb2ctx10c4c75anjjzbadnqos4zei5d6pk2kdjlnywf99x5a3v3ju4 == \q\v\1\u\1\j\z\g\6\4\3\a\4\g\2\f\t\0\s\k\a\1\4\p\y\r\d\8\t\5\d\7\1\a\l\w\p\v\6\g\p\n\t\z\l\1\4\9\z\0\f\l\q\s\0\q\f\q\a\i\p\b\7\w\7\2\e\t\d\k\0\1\d\i\n\b\i\u\p\v\m\3\p\5\b\k\z\r\h\n\y\y\d\1\6\m\c\c\j\h\4\4\4\n\h\4\f\p\l\m\z\y\a\g\w\o\5\o\o\t\9\d\g\r\8\7\z\s\n\t\2\7\5\x\f\p\n\r\s\7\l\a\p\a\1\5\4\5\j\s\h\h\y\l\j\0\w\b\e\a\h\t\d\n\b\8\m\w\9\w\q\k\y\e\r\9\5\v\5\e\c\x\8\u\g\y\n\b\s\3\v\p\j\h\u\x\6\5\s\r\4\5\q\g\3\z\t\o\j\7\f\c\b\e\1\w\p\2\o\y\s\o\j\b\b\b\m\w\n\a\1\x\0\8\8\4\9\u\j\3\a\8\5\z\1\a\y\z\p\g\4\m\6\8\l\0\j\t\y\p\s\q\c\6\o\9\k\m\u\h\j\f\v\s\8\5\r\g\u\f\t\w\h\h\y\q\f\l\w\i\7\d\4\w\h\k\p\y\k\h\z\d\4\u\a\5\g\7\3\g\m\u\9\z\f\o\5\w\f\f\t\z\l\8\3\z\b\5\l\x\j\o\7\d\0\x\w\n\v\7\t\3\p\k\8\2\l\p\l\d\1\p\o\q\o\j\u\q\1\b\s\e\s\3\f\r\o\b\0\e\9\n\2\z\5\d\7\o\2\h\5\9\a\g\c\l\6\y\6\h\q\v\5\f\n\3\v\u\g\2\k\x\c\r\k\r\d\w\r\m\f\s\l\0\u\f\w\c\m\d\8\8\l\w\7\5\0\4\c\o\z\y\t\8\a\q\2\v\d\h\e\e\2\0\x\l\r\j\e\0\u\0\5\2\z\n\y\r\b\b\2\c\t\x\1\0\c\4\c\7\5\a\n\j\j\z\b\a\d\n\q\o\s\4\z\e\i\5\d\6\p\k\2\k\d\j\l\n\y\w\f\9\9\x\5\a\3\v\3\j\u\4 ]] 00:33:30.033 15:27:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:30.033 15:27:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:33:30.291 [2024-07-23 15:27:25.487727] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
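The flag names correspond to the usual open(2) flags, as with GNU dd: direct requests O_DIRECT (unbuffered, alignment-sensitive I/O), nonblock requests O_NONBLOCK (effectively a no-op for regular files), and sync/dsync request O_SYNC/O_DSYNC so every write is flushed with or without metadata, which is why the sync and dsync passes below report noticeably lower averages than the roughly 500 kBps of the plain copies. Rough GNU dd equivalents, for orientation only (scratch files assumed, not part of this run):

    head -c 512 /dev/urandom > scratch.in
    dd if=scratch.in of=scratch.out bs=512 count=1 iflag=direct oflag=direct status=none
    dd if=scratch.in of=scratch.out bs=512 count=1 oflag=dsync status=none   # flush file data on every write
    dd if=scratch.in of=scratch.out bs=512 count=1 oflag=sync  status=none   # flush data and metadata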
00:33:30.291 [2024-07-23 15:27:25.487890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126361 ] 00:33:30.291 [2024-07-23 15:27:25.624308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.291 [2024-07-23 15:27:25.669209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.550  Copying: 512/512 [B] (average 83 kBps) 00:33:30.550 00:33:30.809 15:27:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qv1u1jzg643a4g2ft0ska14pyrd8t5d71alwpv6gpntzl149z0flqs0qfqaipb7w72etdk01dinbiupvm3p5bkzrhnyyd16mccjh444nh4fplmzyagwo5oot9dgr87zsnt275xfpnrs7lapa1545jshhylj0wbeahtdnb8mw9wqkyer95v5ecx8ugynbs3vpjhux65sr45qg3ztoj7fcbe1wp2oysojbbbmwna1x08849uj3a85z1ayzpg4m68l0jtypsqc6o9kmuhjfvs85rguftwhhyqflwi7d4whkpykhzd4ua5g73gmu9zfo5wfftzl83zb5lxjo7d0xwnv7t3pk82lpld1poqojuq1bses3frob0e9n2z5d7o2h59agcl6y6hqv5fn3vug2kxcrkrdwrmfsl0ufwcmd88lw7504cozyt8aq2vdhee20xlrje0u052znyrbb2ctx10c4c75anjjzbadnqos4zei5d6pk2kdjlnywf99x5a3v3ju4 == \q\v\1\u\1\j\z\g\6\4\3\a\4\g\2\f\t\0\s\k\a\1\4\p\y\r\d\8\t\5\d\7\1\a\l\w\p\v\6\g\p\n\t\z\l\1\4\9\z\0\f\l\q\s\0\q\f\q\a\i\p\b\7\w\7\2\e\t\d\k\0\1\d\i\n\b\i\u\p\v\m\3\p\5\b\k\z\r\h\n\y\y\d\1\6\m\c\c\j\h\4\4\4\n\h\4\f\p\l\m\z\y\a\g\w\o\5\o\o\t\9\d\g\r\8\7\z\s\n\t\2\7\5\x\f\p\n\r\s\7\l\a\p\a\1\5\4\5\j\s\h\h\y\l\j\0\w\b\e\a\h\t\d\n\b\8\m\w\9\w\q\k\y\e\r\9\5\v\5\e\c\x\8\u\g\y\n\b\s\3\v\p\j\h\u\x\6\5\s\r\4\5\q\g\3\z\t\o\j\7\f\c\b\e\1\w\p\2\o\y\s\o\j\b\b\b\m\w\n\a\1\x\0\8\8\4\9\u\j\3\a\8\5\z\1\a\y\z\p\g\4\m\6\8\l\0\j\t\y\p\s\q\c\6\o\9\k\m\u\h\j\f\v\s\8\5\r\g\u\f\t\w\h\h\y\q\f\l\w\i\7\d\4\w\h\k\p\y\k\h\z\d\4\u\a\5\g\7\3\g\m\u\9\z\f\o\5\w\f\f\t\z\l\8\3\z\b\5\l\x\j\o\7\d\0\x\w\n\v\7\t\3\p\k\8\2\l\p\l\d\1\p\o\q\o\j\u\q\1\b\s\e\s\3\f\r\o\b\0\e\9\n\2\z\5\d\7\o\2\h\5\9\a\g\c\l\6\y\6\h\q\v\5\f\n\3\v\u\g\2\k\x\c\r\k\r\d\w\r\m\f\s\l\0\u\f\w\c\m\d\8\8\l\w\7\5\0\4\c\o\z\y\t\8\a\q\2\v\d\h\e\e\2\0\x\l\r\j\e\0\u\0\5\2\z\n\y\r\b\b\2\c\t\x\1\0\c\4\c\7\5\a\n\j\j\z\b\a\d\n\q\o\s\4\z\e\i\5\d\6\p\k\2\k\d\j\l\n\y\w\f\9\9\x\5\a\3\v\3\j\u\4 ]] 00:33:30.809 15:27:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:30.809 15:27:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:33:30.809 [2024-07-23 15:27:26.063455] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:30.809 [2024-07-23 15:27:26.063678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126364 ] 00:33:30.809 [2024-07-23 15:27:26.215771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.067 [2024-07-23 15:27:26.264108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.326  Copying: 512/512 [B] (average 125 kBps) 00:33:31.326 00:33:31.326 15:27:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qv1u1jzg643a4g2ft0ska14pyrd8t5d71alwpv6gpntzl149z0flqs0qfqaipb7w72etdk01dinbiupvm3p5bkzrhnyyd16mccjh444nh4fplmzyagwo5oot9dgr87zsnt275xfpnrs7lapa1545jshhylj0wbeahtdnb8mw9wqkyer95v5ecx8ugynbs3vpjhux65sr45qg3ztoj7fcbe1wp2oysojbbbmwna1x08849uj3a85z1ayzpg4m68l0jtypsqc6o9kmuhjfvs85rguftwhhyqflwi7d4whkpykhzd4ua5g73gmu9zfo5wfftzl83zb5lxjo7d0xwnv7t3pk82lpld1poqojuq1bses3frob0e9n2z5d7o2h59agcl6y6hqv5fn3vug2kxcrkrdwrmfsl0ufwcmd88lw7504cozyt8aq2vdhee20xlrje0u052znyrbb2ctx10c4c75anjjzbadnqos4zei5d6pk2kdjlnywf99x5a3v3ju4 == \q\v\1\u\1\j\z\g\6\4\3\a\4\g\2\f\t\0\s\k\a\1\4\p\y\r\d\8\t\5\d\7\1\a\l\w\p\v\6\g\p\n\t\z\l\1\4\9\z\0\f\l\q\s\0\q\f\q\a\i\p\b\7\w\7\2\e\t\d\k\0\1\d\i\n\b\i\u\p\v\m\3\p\5\b\k\z\r\h\n\y\y\d\1\6\m\c\c\j\h\4\4\4\n\h\4\f\p\l\m\z\y\a\g\w\o\5\o\o\t\9\d\g\r\8\7\z\s\n\t\2\7\5\x\f\p\n\r\s\7\l\a\p\a\1\5\4\5\j\s\h\h\y\l\j\0\w\b\e\a\h\t\d\n\b\8\m\w\9\w\q\k\y\e\r\9\5\v\5\e\c\x\8\u\g\y\n\b\s\3\v\p\j\h\u\x\6\5\s\r\4\5\q\g\3\z\t\o\j\7\f\c\b\e\1\w\p\2\o\y\s\o\j\b\b\b\m\w\n\a\1\x\0\8\8\4\9\u\j\3\a\8\5\z\1\a\y\z\p\g\4\m\6\8\l\0\j\t\y\p\s\q\c\6\o\9\k\m\u\h\j\f\v\s\8\5\r\g\u\f\t\w\h\h\y\q\f\l\w\i\7\d\4\w\h\k\p\y\k\h\z\d\4\u\a\5\g\7\3\g\m\u\9\z\f\o\5\w\f\f\t\z\l\8\3\z\b\5\l\x\j\o\7\d\0\x\w\n\v\7\t\3\p\k\8\2\l\p\l\d\1\p\o\q\o\j\u\q\1\b\s\e\s\3\f\r\o\b\0\e\9\n\2\z\5\d\7\o\2\h\5\9\a\g\c\l\6\y\6\h\q\v\5\f\n\3\v\u\g\2\k\x\c\r\k\r\d\w\r\m\f\s\l\0\u\f\w\c\m\d\8\8\l\w\7\5\0\4\c\o\z\y\t\8\a\q\2\v\d\h\e\e\2\0\x\l\r\j\e\0\u\0\5\2\z\n\y\r\b\b\2\c\t\x\1\0\c\4\c\7\5\a\n\j\j\z\b\a\d\n\q\o\s\4\z\e\i\5\d\6\p\k\2\k\d\j\l\n\y\w\f\9\9\x\5\a\3\v\3\j\u\4 ]] 00:33:31.326 15:27:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:33:31.326 15:27:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:33:31.326 15:27:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:33:31.326 15:27:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:33:31.326 15:27:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:31.326 15:27:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:33:31.326 [2024-07-23 15:27:26.670935] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:31.326 [2024-07-23 15:27:26.671123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126378 ] 00:33:31.585 [2024-07-23 15:27:26.822349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.585 [2024-07-23 15:27:26.867738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.844  Copying: 512/512 [B] (average 500 kBps) 00:33:31.844 00:33:31.844 15:27:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ s8i81owbn956d4l70bikqxf6iwbj65j1ske5yh1q1l1mh0i647oilyvckldjzgjxf3na8a2li8cu7oxfqx6ze2dqufui0ubz1l7qaaxxltirpi4nm080udrdd37hppse9cyngyfrx9j2d6k2qovp1bxtjeqs0qayi0q0b196le383a3fw728fn52g64jwnvckxv6m0l2srtmv39l2hmc56lcpm8py0uh6fbsaguml1z8feec9mee7rvjhrcj3u37nd0ntmbfqcipmmqntiqtu72ke4t24xs500sl9532y2in38k99qdrf0byzwm1viyltrxxwohfiutu319fdfi7oz1xo4amzy1qdqhk859v85oy7ogvp2974issh5f7n29y62knes4s2cersv1aokny0or1bc2d5h7o2xh8o6ktnc7lx773mtgpvn3h9f70078ju1lwjmossl8bz87ziple7qbpn3jxru9js8y63jqjo3w2cb1ashv75tfimqq9w6c3 == \s\8\i\8\1\o\w\b\n\9\5\6\d\4\l\7\0\b\i\k\q\x\f\6\i\w\b\j\6\5\j\1\s\k\e\5\y\h\1\q\1\l\1\m\h\0\i\6\4\7\o\i\l\y\v\c\k\l\d\j\z\g\j\x\f\3\n\a\8\a\2\l\i\8\c\u\7\o\x\f\q\x\6\z\e\2\d\q\u\f\u\i\0\u\b\z\1\l\7\q\a\a\x\x\l\t\i\r\p\i\4\n\m\0\8\0\u\d\r\d\d\3\7\h\p\p\s\e\9\c\y\n\g\y\f\r\x\9\j\2\d\6\k\2\q\o\v\p\1\b\x\t\j\e\q\s\0\q\a\y\i\0\q\0\b\1\9\6\l\e\3\8\3\a\3\f\w\7\2\8\f\n\5\2\g\6\4\j\w\n\v\c\k\x\v\6\m\0\l\2\s\r\t\m\v\3\9\l\2\h\m\c\5\6\l\c\p\m\8\p\y\0\u\h\6\f\b\s\a\g\u\m\l\1\z\8\f\e\e\c\9\m\e\e\7\r\v\j\h\r\c\j\3\u\3\7\n\d\0\n\t\m\b\f\q\c\i\p\m\m\q\n\t\i\q\t\u\7\2\k\e\4\t\2\4\x\s\5\0\0\s\l\9\5\3\2\y\2\i\n\3\8\k\9\9\q\d\r\f\0\b\y\z\w\m\1\v\i\y\l\t\r\x\x\w\o\h\f\i\u\t\u\3\1\9\f\d\f\i\7\o\z\1\x\o\4\a\m\z\y\1\q\d\q\h\k\8\5\9\v\8\5\o\y\7\o\g\v\p\2\9\7\4\i\s\s\h\5\f\7\n\2\9\y\6\2\k\n\e\s\4\s\2\c\e\r\s\v\1\a\o\k\n\y\0\o\r\1\b\c\2\d\5\h\7\o\2\x\h\8\o\6\k\t\n\c\7\l\x\7\7\3\m\t\g\p\v\n\3\h\9\f\7\0\0\7\8\j\u\1\l\w\j\m\o\s\s\l\8\b\z\8\7\z\i\p\l\e\7\q\b\p\n\3\j\x\r\u\9\j\s\8\y\6\3\j\q\j\o\3\w\2\c\b\1\a\s\h\v\7\5\t\f\i\m\q\q\9\w\6\c\3 ]] 00:33:31.844 15:27:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:31.844 15:27:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:33:31.844 [2024-07-23 15:27:27.254671] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:31.844 [2024-07-23 15:27:27.254907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126381 ] 00:33:32.103 [2024-07-23 15:27:27.406618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.103 [2024-07-23 15:27:27.452466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.362  Copying: 512/512 [B] (average 500 kBps) 00:33:32.362 00:33:32.362 15:27:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ s8i81owbn956d4l70bikqxf6iwbj65j1ske5yh1q1l1mh0i647oilyvckldjzgjxf3na8a2li8cu7oxfqx6ze2dqufui0ubz1l7qaaxxltirpi4nm080udrdd37hppse9cyngyfrx9j2d6k2qovp1bxtjeqs0qayi0q0b196le383a3fw728fn52g64jwnvckxv6m0l2srtmv39l2hmc56lcpm8py0uh6fbsaguml1z8feec9mee7rvjhrcj3u37nd0ntmbfqcipmmqntiqtu72ke4t24xs500sl9532y2in38k99qdrf0byzwm1viyltrxxwohfiutu319fdfi7oz1xo4amzy1qdqhk859v85oy7ogvp2974issh5f7n29y62knes4s2cersv1aokny0or1bc2d5h7o2xh8o6ktnc7lx773mtgpvn3h9f70078ju1lwjmossl8bz87ziple7qbpn3jxru9js8y63jqjo3w2cb1ashv75tfimqq9w6c3 == \s\8\i\8\1\o\w\b\n\9\5\6\d\4\l\7\0\b\i\k\q\x\f\6\i\w\b\j\6\5\j\1\s\k\e\5\y\h\1\q\1\l\1\m\h\0\i\6\4\7\o\i\l\y\v\c\k\l\d\j\z\g\j\x\f\3\n\a\8\a\2\l\i\8\c\u\7\o\x\f\q\x\6\z\e\2\d\q\u\f\u\i\0\u\b\z\1\l\7\q\a\a\x\x\l\t\i\r\p\i\4\n\m\0\8\0\u\d\r\d\d\3\7\h\p\p\s\e\9\c\y\n\g\y\f\r\x\9\j\2\d\6\k\2\q\o\v\p\1\b\x\t\j\e\q\s\0\q\a\y\i\0\q\0\b\1\9\6\l\e\3\8\3\a\3\f\w\7\2\8\f\n\5\2\g\6\4\j\w\n\v\c\k\x\v\6\m\0\l\2\s\r\t\m\v\3\9\l\2\h\m\c\5\6\l\c\p\m\8\p\y\0\u\h\6\f\b\s\a\g\u\m\l\1\z\8\f\e\e\c\9\m\e\e\7\r\v\j\h\r\c\j\3\u\3\7\n\d\0\n\t\m\b\f\q\c\i\p\m\m\q\n\t\i\q\t\u\7\2\k\e\4\t\2\4\x\s\5\0\0\s\l\9\5\3\2\y\2\i\n\3\8\k\9\9\q\d\r\f\0\b\y\z\w\m\1\v\i\y\l\t\r\x\x\w\o\h\f\i\u\t\u\3\1\9\f\d\f\i\7\o\z\1\x\o\4\a\m\z\y\1\q\d\q\h\k\8\5\9\v\8\5\o\y\7\o\g\v\p\2\9\7\4\i\s\s\h\5\f\7\n\2\9\y\6\2\k\n\e\s\4\s\2\c\e\r\s\v\1\a\o\k\n\y\0\o\r\1\b\c\2\d\5\h\7\o\2\x\h\8\o\6\k\t\n\c\7\l\x\7\7\3\m\t\g\p\v\n\3\h\9\f\7\0\0\7\8\j\u\1\l\w\j\m\o\s\s\l\8\b\z\8\7\z\i\p\l\e\7\q\b\p\n\3\j\x\r\u\9\j\s\8\y\6\3\j\q\j\o\3\w\2\c\b\1\a\s\h\v\7\5\t\f\i\m\q\q\9\w\6\c\3 ]] 00:33:32.362 15:27:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:32.362 15:27:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:33:32.621 [2024-07-23 15:27:27.846776] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:32.621 [2024-07-23 15:27:27.846979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126394 ] 00:33:32.621 [2024-07-23 15:27:27.999258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.621 [2024-07-23 15:27:28.047751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.138  Copying: 512/512 [B] (average 166 kBps) 00:33:33.138 00:33:33.139 15:27:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ s8i81owbn956d4l70bikqxf6iwbj65j1ske5yh1q1l1mh0i647oilyvckldjzgjxf3na8a2li8cu7oxfqx6ze2dqufui0ubz1l7qaaxxltirpi4nm080udrdd37hppse9cyngyfrx9j2d6k2qovp1bxtjeqs0qayi0q0b196le383a3fw728fn52g64jwnvckxv6m0l2srtmv39l2hmc56lcpm8py0uh6fbsaguml1z8feec9mee7rvjhrcj3u37nd0ntmbfqcipmmqntiqtu72ke4t24xs500sl9532y2in38k99qdrf0byzwm1viyltrxxwohfiutu319fdfi7oz1xo4amzy1qdqhk859v85oy7ogvp2974issh5f7n29y62knes4s2cersv1aokny0or1bc2d5h7o2xh8o6ktnc7lx773mtgpvn3h9f70078ju1lwjmossl8bz87ziple7qbpn3jxru9js8y63jqjo3w2cb1ashv75tfimqq9w6c3 == \s\8\i\8\1\o\w\b\n\9\5\6\d\4\l\7\0\b\i\k\q\x\f\6\i\w\b\j\6\5\j\1\s\k\e\5\y\h\1\q\1\l\1\m\h\0\i\6\4\7\o\i\l\y\v\c\k\l\d\j\z\g\j\x\f\3\n\a\8\a\2\l\i\8\c\u\7\o\x\f\q\x\6\z\e\2\d\q\u\f\u\i\0\u\b\z\1\l\7\q\a\a\x\x\l\t\i\r\p\i\4\n\m\0\8\0\u\d\r\d\d\3\7\h\p\p\s\e\9\c\y\n\g\y\f\r\x\9\j\2\d\6\k\2\q\o\v\p\1\b\x\t\j\e\q\s\0\q\a\y\i\0\q\0\b\1\9\6\l\e\3\8\3\a\3\f\w\7\2\8\f\n\5\2\g\6\4\j\w\n\v\c\k\x\v\6\m\0\l\2\s\r\t\m\v\3\9\l\2\h\m\c\5\6\l\c\p\m\8\p\y\0\u\h\6\f\b\s\a\g\u\m\l\1\z\8\f\e\e\c\9\m\e\e\7\r\v\j\h\r\c\j\3\u\3\7\n\d\0\n\t\m\b\f\q\c\i\p\m\m\q\n\t\i\q\t\u\7\2\k\e\4\t\2\4\x\s\5\0\0\s\l\9\5\3\2\y\2\i\n\3\8\k\9\9\q\d\r\f\0\b\y\z\w\m\1\v\i\y\l\t\r\x\x\w\o\h\f\i\u\t\u\3\1\9\f\d\f\i\7\o\z\1\x\o\4\a\m\z\y\1\q\d\q\h\k\8\5\9\v\8\5\o\y\7\o\g\v\p\2\9\7\4\i\s\s\h\5\f\7\n\2\9\y\6\2\k\n\e\s\4\s\2\c\e\r\s\v\1\a\o\k\n\y\0\o\r\1\b\c\2\d\5\h\7\o\2\x\h\8\o\6\k\t\n\c\7\l\x\7\7\3\m\t\g\p\v\n\3\h\9\f\7\0\0\7\8\j\u\1\l\w\j\m\o\s\s\l\8\b\z\8\7\z\i\p\l\e\7\q\b\p\n\3\j\x\r\u\9\j\s\8\y\6\3\j\q\j\o\3\w\2\c\b\1\a\s\h\v\7\5\t\f\i\m\q\q\9\w\6\c\3 ]] 00:33:33.139 15:27:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:33.139 15:27:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:33:33.139 [2024-07-23 15:27:28.427815] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:33.139 [2024-07-23 15:27:28.428005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126398 ] 00:33:33.397 [2024-07-23 15:27:28.579663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.398 [2024-07-23 15:27:28.624409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.656  Copying: 512/512 [B] (average 100 kBps) 00:33:33.656 00:33:33.656 15:27:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ s8i81owbn956d4l70bikqxf6iwbj65j1ske5yh1q1l1mh0i647oilyvckldjzgjxf3na8a2li8cu7oxfqx6ze2dqufui0ubz1l7qaaxxltirpi4nm080udrdd37hppse9cyngyfrx9j2d6k2qovp1bxtjeqs0qayi0q0b196le383a3fw728fn52g64jwnvckxv6m0l2srtmv39l2hmc56lcpm8py0uh6fbsaguml1z8feec9mee7rvjhrcj3u37nd0ntmbfqcipmmqntiqtu72ke4t24xs500sl9532y2in38k99qdrf0byzwm1viyltrxxwohfiutu319fdfi7oz1xo4amzy1qdqhk859v85oy7ogvp2974issh5f7n29y62knes4s2cersv1aokny0or1bc2d5h7o2xh8o6ktnc7lx773mtgpvn3h9f70078ju1lwjmossl8bz87ziple7qbpn3jxru9js8y63jqjo3w2cb1ashv75tfimqq9w6c3 == \s\8\i\8\1\o\w\b\n\9\5\6\d\4\l\7\0\b\i\k\q\x\f\6\i\w\b\j\6\5\j\1\s\k\e\5\y\h\1\q\1\l\1\m\h\0\i\6\4\7\o\i\l\y\v\c\k\l\d\j\z\g\j\x\f\3\n\a\8\a\2\l\i\8\c\u\7\o\x\f\q\x\6\z\e\2\d\q\u\f\u\i\0\u\b\z\1\l\7\q\a\a\x\x\l\t\i\r\p\i\4\n\m\0\8\0\u\d\r\d\d\3\7\h\p\p\s\e\9\c\y\n\g\y\f\r\x\9\j\2\d\6\k\2\q\o\v\p\1\b\x\t\j\e\q\s\0\q\a\y\i\0\q\0\b\1\9\6\l\e\3\8\3\a\3\f\w\7\2\8\f\n\5\2\g\6\4\j\w\n\v\c\k\x\v\6\m\0\l\2\s\r\t\m\v\3\9\l\2\h\m\c\5\6\l\c\p\m\8\p\y\0\u\h\6\f\b\s\a\g\u\m\l\1\z\8\f\e\e\c\9\m\e\e\7\r\v\j\h\r\c\j\3\u\3\7\n\d\0\n\t\m\b\f\q\c\i\p\m\m\q\n\t\i\q\t\u\7\2\k\e\4\t\2\4\x\s\5\0\0\s\l\9\5\3\2\y\2\i\n\3\8\k\9\9\q\d\r\f\0\b\y\z\w\m\1\v\i\y\l\t\r\x\x\w\o\h\f\i\u\t\u\3\1\9\f\d\f\i\7\o\z\1\x\o\4\a\m\z\y\1\q\d\q\h\k\8\5\9\v\8\5\o\y\7\o\g\v\p\2\9\7\4\i\s\s\h\5\f\7\n\2\9\y\6\2\k\n\e\s\4\s\2\c\e\r\s\v\1\a\o\k\n\y\0\o\r\1\b\c\2\d\5\h\7\o\2\x\h\8\o\6\k\t\n\c\7\l\x\7\7\3\m\t\g\p\v\n\3\h\9\f\7\0\0\7\8\j\u\1\l\w\j\m\o\s\s\l\8\b\z\8\7\z\i\p\l\e\7\q\b\p\n\3\j\x\r\u\9\j\s\8\y\6\3\j\q\j\o\3\w\2\c\b\1\a\s\h\v\7\5\t\f\i\m\q\q\9\w\6\c\3 ]] 00:33:33.656 00:33:33.656 real 0m4.634s 00:33:33.656 user 0m2.157s 00:33:33.656 sys 0m1.508s 00:33:33.656 15:27:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:33.656 15:27:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:33:33.656 ************************************ 00:33:33.656 END TEST dd_flags_misc 00:33:33.656 ************************************ 00:33:33.656 15:27:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:33:33.656 15:27:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:33:33.656 15:27:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:33:33.656 * Second test run, disabling liburing, forcing AIO 00:33:33.656 15:27:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:33:33.656 15:27:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:33:33.656 15:27:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:33.656 15:27:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:33.656 15:27:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:33:33.656 
************************************ 00:33:33.656 START TEST dd_flag_append_forced_aio 00:33:33.656 ************************************ 00:33:33.656 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:33:33.656 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:33:33.656 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:33:33.656 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:33:33.656 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:33:33.656 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:33:33.656 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=rikntixiw1gm17eihhloo53h60kzeknb 00:33:33.656 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:33:33.656 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:33:33.656 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:33:33.656 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=aopiw1urgkuy5z91ob7qwp1jkgdes5ug 00:33:33.656 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s rikntixiw1gm17eihhloo53h60kzeknb 00:33:33.656 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s aopiw1urgkuy5z91ob7qwp1jkgdes5ug 00:33:33.656 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:33:33.656 [2024-07-23 15:27:29.078647] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
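From the "Second test run" banner above onward, the posix suite reruns with DD_APP+=("--aio"), so every spdk_dd call forces the POSIX AIO backend instead of liburing. The append case seeds dd.dump0 and dd.dump1 with two 32-character strings (rikn.../aopi... above), copies dump0 onto dump1 with --oflag=append, and expects the destination to end up as the dump1 string immediately followed by the dump0 string, which is the concatenation checked just below. A self-contained sketch of the same behaviour, with GNU dd standing in for spdk_dd:

    dump0=$(head -c 32 /dev/urandom | base64 -w0 | head -c 32)
    dump1=$(head -c 32 /dev/urandom | base64 -w0 | head -c 32)
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc status=none
    [[ $(< dd.dump1) == "${dump1}${dump0}" ]] && echo "append preserved the existing destination bytes"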
00:33:33.656 [2024-07-23 15:27:29.078860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126430 ] 00:33:33.915 [2024-07-23 15:27:29.231583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.915 [2024-07-23 15:27:29.277188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.174  Copying: 32/32 [B] (average 31 kBps) 00:33:34.174 00:33:34.174 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ aopiw1urgkuy5z91ob7qwp1jkgdes5ugrikntixiw1gm17eihhloo53h60kzeknb == \a\o\p\i\w\1\u\r\g\k\u\y\5\z\9\1\o\b\7\q\w\p\1\j\k\g\d\e\s\5\u\g\r\i\k\n\t\i\x\i\w\1\g\m\1\7\e\i\h\h\l\o\o\5\3\h\6\0\k\z\e\k\n\b ]] 00:33:34.174 00:33:34.174 real 0m0.600s 00:33:34.174 user 0m0.290s 00:33:34.174 sys 0m0.184s 00:33:34.174 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:34.174 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:33:34.174 ************************************ 00:33:34.174 END TEST dd_flag_append_forced_aio 00:33:34.174 ************************************ 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:33:34.433 ************************************ 00:33:34.433 START TEST dd_flag_directory_forced_aio 00:33:34.433 ************************************ 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:34.433 15:27:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:34.433 [2024-07-23 15:27:29.728779] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:34.433 [2024-07-23 15:27:29.728992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126459 ] 00:33:34.693 [2024-07-23 15:27:29.879401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.693 [2024-07-23 15:27:29.927554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.693 [2024-07-23 15:27:29.993188] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:33:34.693 [2024-07-23 15:27:29.993261] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:33:34.693 [2024-07-23 15:27:29.993284] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:34.693 [2024-07-23 15:27:30.100385] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
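The directory case is a negative test: dd.dump0 is a regular file, so opening it with --iflag=directory (and, in the second half, --oflag=directory) has to fail with "Not a directory", and the NOT wrapper from autotest_common.sh inverts the result; the es=236, es=108, es=1 lines are its bookkeeping for exit codes above 128. A simplified sketch of that pattern (not the literal helper, which also case-maps specific codes):

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    NOT() {                                    # succeed only when the wrapped command fails
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # fold 128+N style codes back down, as the trace above shows
        (( es != 0 ))
    }
    NOT "$SPDK_DD" --aio --if=dd.dump0 --iflag=directory --of=dd.dump0   # must fail: dump0 is not a directory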
00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:34.951 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:33:34.951 [2024-07-23 15:27:30.287515] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:34.951 [2024-07-23 15:27:30.287727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126468 ] 00:33:35.210 [2024-07-23 15:27:30.439332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.210 [2024-07-23 15:27:30.484906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:35.210 [2024-07-23 15:27:30.550932] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:33:35.210 [2024-07-23 15:27:30.551019] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:33:35.210 [2024-07-23 15:27:30.551043] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:35.468 [2024-07-23 15:27:30.658367] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:35.468 00:33:35.468 real 0m1.128s 00:33:35.468 user 0m0.549s 00:33:35.468 sys 0m0.378s 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:35.468 ************************************ 00:33:35.468 END TEST 
dd_flag_directory_forced_aio 00:33:35.468 ************************************ 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:33:35.468 ************************************ 00:33:35.468 START TEST dd_flag_nofollow_forced_aio 00:33:35.468 ************************************ 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:35.468 15:27:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:35.726 [2024-07-23 15:27:30.925140] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:35.726 [2024-07-23 15:27:30.925356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126499 ] 00:33:35.726 [2024-07-23 15:27:31.076110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.726 [2024-07-23 15:27:31.121057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:35.984 [2024-07-23 15:27:31.187007] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:33:35.984 [2024-07-23 15:27:31.187077] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:33:35.984 [2024-07-23 15:27:31.187116] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:35.984 [2024-07-23 15:27:31.294348] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:36.243 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:33:36.243 [2024-07-23 15:27:31.496718] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:36.244 [2024-07-23 15:27:31.496928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126504 ] 00:33:36.244 [2024-07-23 15:27:31.645102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.503 [2024-07-23 15:27:31.694243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.503 [2024-07-23 15:27:31.760024] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:33:36.503 [2024-07-23 15:27:31.760102] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:33:36.503 [2024-07-23 15:27:31.760127] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:36.503 [2024-07-23 15:27:31.867619] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:33:36.762 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:33:36.762 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:36.762 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:33:36.762 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:33:36.762 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:33:36.762 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:36.762 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:33:36.762 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:33:36.762 15:27:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:33:36.762 15:27:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:36.762 [2024-07-23 15:27:32.074089] Starting SPDK 
v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:36.762 [2024-07-23 15:27:32.074499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126518 ] 00:33:37.021 [2024-07-23 15:27:32.227076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.021 [2024-07-23 15:27:32.272121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.279  Copying: 512/512 [B] (average 500 kBps) 00:33:37.279 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ rak0i1a78d4m2tw0ov6tcjp3l2nvjd5ljhxa8baqk58sbwpupct0or7w49nvpvmxd9jwja0xm12hz2xdwjtm19ia466g7rpf40etjyl7umkpfj5ed8aml1hxcqn5hb987nvy4riqfe0zp0nmn7yiyojon0pxzif9u0nqcz7adbnxv5sf9hyvxg2b5tfwy39kpb71g3s9yatgotldmi0i2zsxbhnoe266f03aq2wut4ptjcsan52d5q30qhyzvmbv7ikv9dnfwofwkdgvi5cczjsmtfufhqbc28ixfexr5a03la9rsy86zkyqbxckijha2o586un0n4glon3zooof93sfq5c1ja6p5gbi39ggumu9q3m95jc9yehkhnm2bov1qdhkr2a1j77qsz906xo3sywb3t6lrengd54crh5ms4b2dsij2c58js9z67qkroq2cah0l0sl6c69r3ddhbij2ealuesd7vnlgckedk6r551q5fq0ufp6bwkwzvmo7uco == \r\a\k\0\i\1\a\7\8\d\4\m\2\t\w\0\o\v\6\t\c\j\p\3\l\2\n\v\j\d\5\l\j\h\x\a\8\b\a\q\k\5\8\s\b\w\p\u\p\c\t\0\o\r\7\w\4\9\n\v\p\v\m\x\d\9\j\w\j\a\0\x\m\1\2\h\z\2\x\d\w\j\t\m\1\9\i\a\4\6\6\g\7\r\p\f\4\0\e\t\j\y\l\7\u\m\k\p\f\j\5\e\d\8\a\m\l\1\h\x\c\q\n\5\h\b\9\8\7\n\v\y\4\r\i\q\f\e\0\z\p\0\n\m\n\7\y\i\y\o\j\o\n\0\p\x\z\i\f\9\u\0\n\q\c\z\7\a\d\b\n\x\v\5\s\f\9\h\y\v\x\g\2\b\5\t\f\w\y\3\9\k\p\b\7\1\g\3\s\9\y\a\t\g\o\t\l\d\m\i\0\i\2\z\s\x\b\h\n\o\e\2\6\6\f\0\3\a\q\2\w\u\t\4\p\t\j\c\s\a\n\5\2\d\5\q\3\0\q\h\y\z\v\m\b\v\7\i\k\v\9\d\n\f\w\o\f\w\k\d\g\v\i\5\c\c\z\j\s\m\t\f\u\f\h\q\b\c\2\8\i\x\f\e\x\r\5\a\0\3\l\a\9\r\s\y\8\6\z\k\y\q\b\x\c\k\i\j\h\a\2\o\5\8\6\u\n\0\n\4\g\l\o\n\3\z\o\o\o\f\9\3\s\f\q\5\c\1\j\a\6\p\5\g\b\i\3\9\g\g\u\m\u\9\q\3\m\9\5\j\c\9\y\e\h\k\h\n\m\2\b\o\v\1\q\d\h\k\r\2\a\1\j\7\7\q\s\z\9\0\6\x\o\3\s\y\w\b\3\t\6\l\r\e\n\g\d\5\4\c\r\h\5\m\s\4\b\2\d\s\i\j\2\c\5\8\j\s\9\z\6\7\q\k\r\o\q\2\c\a\h\0\l\0\s\l\6\c\6\9\r\3\d\d\h\b\i\j\2\e\a\l\u\e\s\d\7\v\n\l\g\c\k\e\d\k\6\r\5\5\1\q\5\f\q\0\u\f\p\6\b\w\k\w\z\v\m\o\7\u\c\o ]] 00:33:37.280 ************************************ 00:33:37.280 END TEST dd_flag_nofollow_forced_aio 00:33:37.280 ************************************ 00:33:37.280 00:33:37.280 real 0m1.739s 00:33:37.280 user 0m0.826s 00:33:37.280 sys 0m0.593s 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:33:37.280 ************************************ 00:33:37.280 START TEST dd_flag_noatime_forced_aio 00:33:37.280 ************************************ 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:33:37.280 
15:27:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721748452 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721748452 00:33:37.280 15:27:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:33:38.654 15:27:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:38.654 [2024-07-23 15:27:33.742695] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:38.654 [2024-07-23 15:27:33.742896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126560 ] 00:33:38.654 [2024-07-23 15:27:33.900022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.654 [2024-07-23 15:27:33.953960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.912  Copying: 512/512 [B] (average 500 kBps) 00:33:38.912 00:33:38.912 15:27:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:38.912 15:27:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721748452 )) 00:33:38.912 15:27:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:38.912 15:27:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721748452 )) 00:33:38.912 15:27:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:39.171 [2024-07-23 15:27:34.371054] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
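The noatime case records the access time of dd.dump0 with stat --printf=%X, sleeps one second, copies it with --iflag=noatime, and then (in the entries that follow) asserts that the atime did not move; a second copy without the flag is expected to push it forward. A condensed sketch of those two checks:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1
    "$SPDK_DD" --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_before ))    # noatime: access time unchanged
    "$SPDK_DD" --aio --if=dd.dump0 --of=dd.dump1
    (( atime_before < $(stat --printf=%X dd.dump0) ))     # without the flag the atime should advance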
00:33:39.171 [2024-07-23 15:27:34.371237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126571 ] 00:33:39.171 [2024-07-23 15:27:34.524162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.171 [2024-07-23 15:27:34.569348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.688  Copying: 512/512 [B] (average 500 kBps) 00:33:39.688 00:33:39.688 15:27:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:39.688 15:27:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721748454 )) 00:33:39.688 00:33:39.688 real 0m2.248s 00:33:39.688 user 0m0.570s 00:33:39.688 sys 0m0.441s 00:33:39.688 15:27:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:39.688 15:27:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:33:39.688 ************************************ 00:33:39.688 END TEST dd_flag_noatime_forced_aio 00:33:39.688 ************************************ 00:33:39.688 15:27:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:33:39.688 15:27:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:33:39.688 15:27:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:39.689 15:27:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:39.689 15:27:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:33:39.689 ************************************ 00:33:39.689 START TEST dd_flags_misc_forced_aio 00:33:39.689 ************************************ 00:33:39.689 15:27:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:33:39.689 15:27:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:33:39.689 15:27:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:33:39.689 15:27:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:33:39.689 15:27:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:33:39.689 15:27:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:33:39.689 15:27:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:33:39.689 15:27:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:33:39.689 15:27:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:39.689 15:27:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:33:39.689 [2024-07-23 15:27:35.033031] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:39.689 [2024-07-23 15:27:35.033242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126600 ] 00:33:39.947 [2024-07-23 15:27:35.184754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.947 [2024-07-23 15:27:35.233112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.206  Copying: 512/512 [B] (average 500 kBps) 00:33:40.206 00:33:40.206 15:27:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ mzp5oin8ln0g1fn5e2yr4jgymwgmk3lxdrnz23d48iorortdwuwit7xv9ct1evdk6kpkj1ycmothoal9xfmsmfldmr51fzth673h40j6zl0q33jezqvmhl3bpxlpvvv2ogmwnwgi84vp3891qu4lzgmpr4rbccp14l4ms6k53co6n141v7zxzqrjj28vq5fcq10xeeok774u4kchvt231nnnjy4t89c3drmljxcmlx0l3e1fi98l8eg2wyep1bicldkhj6osa2kj19qv1hhpww12qi03zdb00apg5t3ciaeu1zo35prezuw9jtgsoqgh84lsrv6tcfww4xfi1rq3e3emsye3tdkfoicwxv5vkghe23l92qw2c5bmy42bz6f2iho7bn4omh017fcyvgpfrmh26ff5ehovb3w6wbf2u1b9ljshsopdfr19muou6abtbdsllny55ugn18d6u85z0eay5i4lm5iykntt2w1qec92k13jt56x31155y7j2d4a == \m\z\p\5\o\i\n\8\l\n\0\g\1\f\n\5\e\2\y\r\4\j\g\y\m\w\g\m\k\3\l\x\d\r\n\z\2\3\d\4\8\i\o\r\o\r\t\d\w\u\w\i\t\7\x\v\9\c\t\1\e\v\d\k\6\k\p\k\j\1\y\c\m\o\t\h\o\a\l\9\x\f\m\s\m\f\l\d\m\r\5\1\f\z\t\h\6\7\3\h\4\0\j\6\z\l\0\q\3\3\j\e\z\q\v\m\h\l\3\b\p\x\l\p\v\v\v\2\o\g\m\w\n\w\g\i\8\4\v\p\3\8\9\1\q\u\4\l\z\g\m\p\r\4\r\b\c\c\p\1\4\l\4\m\s\6\k\5\3\c\o\6\n\1\4\1\v\7\z\x\z\q\r\j\j\2\8\v\q\5\f\c\q\1\0\x\e\e\o\k\7\7\4\u\4\k\c\h\v\t\2\3\1\n\n\n\j\y\4\t\8\9\c\3\d\r\m\l\j\x\c\m\l\x\0\l\3\e\1\f\i\9\8\l\8\e\g\2\w\y\e\p\1\b\i\c\l\d\k\h\j\6\o\s\a\2\k\j\1\9\q\v\1\h\h\p\w\w\1\2\q\i\0\3\z\d\b\0\0\a\p\g\5\t\3\c\i\a\e\u\1\z\o\3\5\p\r\e\z\u\w\9\j\t\g\s\o\q\g\h\8\4\l\s\r\v\6\t\c\f\w\w\4\x\f\i\1\r\q\3\e\3\e\m\s\y\e\3\t\d\k\f\o\i\c\w\x\v\5\v\k\g\h\e\2\3\l\9\2\q\w\2\c\5\b\m\y\4\2\b\z\6\f\2\i\h\o\7\b\n\4\o\m\h\0\1\7\f\c\y\v\g\p\f\r\m\h\2\6\f\f\5\e\h\o\v\b\3\w\6\w\b\f\2\u\1\b\9\l\j\s\h\s\o\p\d\f\r\1\9\m\u\o\u\6\a\b\t\b\d\s\l\l\n\y\5\5\u\g\n\1\8\d\6\u\8\5\z\0\e\a\y\5\i\4\l\m\5\i\y\k\n\t\t\2\w\1\q\e\c\9\2\k\1\3\j\t\5\6\x\3\1\1\5\5\y\7\j\2\d\4\a ]] 00:33:40.206 15:27:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:40.206 15:27:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:33:40.206 [2024-07-23 15:27:35.612136] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:40.206 [2024-07-23 15:27:35.612331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126607 ] 00:33:40.465 [2024-07-23 15:27:35.764405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.465 [2024-07-23 15:27:35.809282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.724  Copying: 512/512 [B] (average 500 kBps) 00:33:40.724 00:33:40.724 15:27:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ mzp5oin8ln0g1fn5e2yr4jgymwgmk3lxdrnz23d48iorortdwuwit7xv9ct1evdk6kpkj1ycmothoal9xfmsmfldmr51fzth673h40j6zl0q33jezqvmhl3bpxlpvvv2ogmwnwgi84vp3891qu4lzgmpr4rbccp14l4ms6k53co6n141v7zxzqrjj28vq5fcq10xeeok774u4kchvt231nnnjy4t89c3drmljxcmlx0l3e1fi98l8eg2wyep1bicldkhj6osa2kj19qv1hhpww12qi03zdb00apg5t3ciaeu1zo35prezuw9jtgsoqgh84lsrv6tcfww4xfi1rq3e3emsye3tdkfoicwxv5vkghe23l92qw2c5bmy42bz6f2iho7bn4omh017fcyvgpfrmh26ff5ehovb3w6wbf2u1b9ljshsopdfr19muou6abtbdsllny55ugn18d6u85z0eay5i4lm5iykntt2w1qec92k13jt56x31155y7j2d4a == \m\z\p\5\o\i\n\8\l\n\0\g\1\f\n\5\e\2\y\r\4\j\g\y\m\w\g\m\k\3\l\x\d\r\n\z\2\3\d\4\8\i\o\r\o\r\t\d\w\u\w\i\t\7\x\v\9\c\t\1\e\v\d\k\6\k\p\k\j\1\y\c\m\o\t\h\o\a\l\9\x\f\m\s\m\f\l\d\m\r\5\1\f\z\t\h\6\7\3\h\4\0\j\6\z\l\0\q\3\3\j\e\z\q\v\m\h\l\3\b\p\x\l\p\v\v\v\2\o\g\m\w\n\w\g\i\8\4\v\p\3\8\9\1\q\u\4\l\z\g\m\p\r\4\r\b\c\c\p\1\4\l\4\m\s\6\k\5\3\c\o\6\n\1\4\1\v\7\z\x\z\q\r\j\j\2\8\v\q\5\f\c\q\1\0\x\e\e\o\k\7\7\4\u\4\k\c\h\v\t\2\3\1\n\n\n\j\y\4\t\8\9\c\3\d\r\m\l\j\x\c\m\l\x\0\l\3\e\1\f\i\9\8\l\8\e\g\2\w\y\e\p\1\b\i\c\l\d\k\h\j\6\o\s\a\2\k\j\1\9\q\v\1\h\h\p\w\w\1\2\q\i\0\3\z\d\b\0\0\a\p\g\5\t\3\c\i\a\e\u\1\z\o\3\5\p\r\e\z\u\w\9\j\t\g\s\o\q\g\h\8\4\l\s\r\v\6\t\c\f\w\w\4\x\f\i\1\r\q\3\e\3\e\m\s\y\e\3\t\d\k\f\o\i\c\w\x\v\5\v\k\g\h\e\2\3\l\9\2\q\w\2\c\5\b\m\y\4\2\b\z\6\f\2\i\h\o\7\b\n\4\o\m\h\0\1\7\f\c\y\v\g\p\f\r\m\h\2\6\f\f\5\e\h\o\v\b\3\w\6\w\b\f\2\u\1\b\9\l\j\s\h\s\o\p\d\f\r\1\9\m\u\o\u\6\a\b\t\b\d\s\l\l\n\y\5\5\u\g\n\1\8\d\6\u\8\5\z\0\e\a\y\5\i\4\l\m\5\i\y\k\n\t\t\2\w\1\q\e\c\9\2\k\1\3\j\t\5\6\x\3\1\1\5\5\y\7\j\2\d\4\a ]] 00:33:40.724 15:27:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:40.724 15:27:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:33:40.983 [2024-07-23 15:27:36.194508] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:40.983 [2024-07-23 15:27:36.194688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126617 ] 00:33:40.983 [2024-07-23 15:27:36.347685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.983 [2024-07-23 15:27:36.392932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.501  Copying: 512/512 [B] (average 62 kBps) 00:33:41.501 00:33:41.501 15:27:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ mzp5oin8ln0g1fn5e2yr4jgymwgmk3lxdrnz23d48iorortdwuwit7xv9ct1evdk6kpkj1ycmothoal9xfmsmfldmr51fzth673h40j6zl0q33jezqvmhl3bpxlpvvv2ogmwnwgi84vp3891qu4lzgmpr4rbccp14l4ms6k53co6n141v7zxzqrjj28vq5fcq10xeeok774u4kchvt231nnnjy4t89c3drmljxcmlx0l3e1fi98l8eg2wyep1bicldkhj6osa2kj19qv1hhpww12qi03zdb00apg5t3ciaeu1zo35prezuw9jtgsoqgh84lsrv6tcfww4xfi1rq3e3emsye3tdkfoicwxv5vkghe23l92qw2c5bmy42bz6f2iho7bn4omh017fcyvgpfrmh26ff5ehovb3w6wbf2u1b9ljshsopdfr19muou6abtbdsllny55ugn18d6u85z0eay5i4lm5iykntt2w1qec92k13jt56x31155y7j2d4a == \m\z\p\5\o\i\n\8\l\n\0\g\1\f\n\5\e\2\y\r\4\j\g\y\m\w\g\m\k\3\l\x\d\r\n\z\2\3\d\4\8\i\o\r\o\r\t\d\w\u\w\i\t\7\x\v\9\c\t\1\e\v\d\k\6\k\p\k\j\1\y\c\m\o\t\h\o\a\l\9\x\f\m\s\m\f\l\d\m\r\5\1\f\z\t\h\6\7\3\h\4\0\j\6\z\l\0\q\3\3\j\e\z\q\v\m\h\l\3\b\p\x\l\p\v\v\v\2\o\g\m\w\n\w\g\i\8\4\v\p\3\8\9\1\q\u\4\l\z\g\m\p\r\4\r\b\c\c\p\1\4\l\4\m\s\6\k\5\3\c\o\6\n\1\4\1\v\7\z\x\z\q\r\j\j\2\8\v\q\5\f\c\q\1\0\x\e\e\o\k\7\7\4\u\4\k\c\h\v\t\2\3\1\n\n\n\j\y\4\t\8\9\c\3\d\r\m\l\j\x\c\m\l\x\0\l\3\e\1\f\i\9\8\l\8\e\g\2\w\y\e\p\1\b\i\c\l\d\k\h\j\6\o\s\a\2\k\j\1\9\q\v\1\h\h\p\w\w\1\2\q\i\0\3\z\d\b\0\0\a\p\g\5\t\3\c\i\a\e\u\1\z\o\3\5\p\r\e\z\u\w\9\j\t\g\s\o\q\g\h\8\4\l\s\r\v\6\t\c\f\w\w\4\x\f\i\1\r\q\3\e\3\e\m\s\y\e\3\t\d\k\f\o\i\c\w\x\v\5\v\k\g\h\e\2\3\l\9\2\q\w\2\c\5\b\m\y\4\2\b\z\6\f\2\i\h\o\7\b\n\4\o\m\h\0\1\7\f\c\y\v\g\p\f\r\m\h\2\6\f\f\5\e\h\o\v\b\3\w\6\w\b\f\2\u\1\b\9\l\j\s\h\s\o\p\d\f\r\1\9\m\u\o\u\6\a\b\t\b\d\s\l\l\n\y\5\5\u\g\n\1\8\d\6\u\8\5\z\0\e\a\y\5\i\4\l\m\5\i\y\k\n\t\t\2\w\1\q\e\c\9\2\k\1\3\j\t\5\6\x\3\1\1\5\5\y\7\j\2\d\4\a ]] 00:33:41.501 15:27:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:41.501 15:27:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:33:41.501 [2024-07-23 15:27:36.778656] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:41.501 [2024-07-23 15:27:36.778854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126630 ] 00:33:41.501 [2024-07-23 15:27:36.932189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.759 [2024-07-23 15:27:36.980957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.018  Copying: 512/512 [B] (average 166 kBps) 00:33:42.018 00:33:42.018 15:27:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ mzp5oin8ln0g1fn5e2yr4jgymwgmk3lxdrnz23d48iorortdwuwit7xv9ct1evdk6kpkj1ycmothoal9xfmsmfldmr51fzth673h40j6zl0q33jezqvmhl3bpxlpvvv2ogmwnwgi84vp3891qu4lzgmpr4rbccp14l4ms6k53co6n141v7zxzqrjj28vq5fcq10xeeok774u4kchvt231nnnjy4t89c3drmljxcmlx0l3e1fi98l8eg2wyep1bicldkhj6osa2kj19qv1hhpww12qi03zdb00apg5t3ciaeu1zo35prezuw9jtgsoqgh84lsrv6tcfww4xfi1rq3e3emsye3tdkfoicwxv5vkghe23l92qw2c5bmy42bz6f2iho7bn4omh017fcyvgpfrmh26ff5ehovb3w6wbf2u1b9ljshsopdfr19muou6abtbdsllny55ugn18d6u85z0eay5i4lm5iykntt2w1qec92k13jt56x31155y7j2d4a == \m\z\p\5\o\i\n\8\l\n\0\g\1\f\n\5\e\2\y\r\4\j\g\y\m\w\g\m\k\3\l\x\d\r\n\z\2\3\d\4\8\i\o\r\o\r\t\d\w\u\w\i\t\7\x\v\9\c\t\1\e\v\d\k\6\k\p\k\j\1\y\c\m\o\t\h\o\a\l\9\x\f\m\s\m\f\l\d\m\r\5\1\f\z\t\h\6\7\3\h\4\0\j\6\z\l\0\q\3\3\j\e\z\q\v\m\h\l\3\b\p\x\l\p\v\v\v\2\o\g\m\w\n\w\g\i\8\4\v\p\3\8\9\1\q\u\4\l\z\g\m\p\r\4\r\b\c\c\p\1\4\l\4\m\s\6\k\5\3\c\o\6\n\1\4\1\v\7\z\x\z\q\r\j\j\2\8\v\q\5\f\c\q\1\0\x\e\e\o\k\7\7\4\u\4\k\c\h\v\t\2\3\1\n\n\n\j\y\4\t\8\9\c\3\d\r\m\l\j\x\c\m\l\x\0\l\3\e\1\f\i\9\8\l\8\e\g\2\w\y\e\p\1\b\i\c\l\d\k\h\j\6\o\s\a\2\k\j\1\9\q\v\1\h\h\p\w\w\1\2\q\i\0\3\z\d\b\0\0\a\p\g\5\t\3\c\i\a\e\u\1\z\o\3\5\p\r\e\z\u\w\9\j\t\g\s\o\q\g\h\8\4\l\s\r\v\6\t\c\f\w\w\4\x\f\i\1\r\q\3\e\3\e\m\s\y\e\3\t\d\k\f\o\i\c\w\x\v\5\v\k\g\h\e\2\3\l\9\2\q\w\2\c\5\b\m\y\4\2\b\z\6\f\2\i\h\o\7\b\n\4\o\m\h\0\1\7\f\c\y\v\g\p\f\r\m\h\2\6\f\f\5\e\h\o\v\b\3\w\6\w\b\f\2\u\1\b\9\l\j\s\h\s\o\p\d\f\r\1\9\m\u\o\u\6\a\b\t\b\d\s\l\l\n\y\5\5\u\g\n\1\8\d\6\u\8\5\z\0\e\a\y\5\i\4\l\m\5\i\y\k\n\t\t\2\w\1\q\e\c\9\2\k\1\3\j\t\5\6\x\3\1\1\5\5\y\7\j\2\d\4\a ]] 00:33:42.018 15:27:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:33:42.018 15:27:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:33:42.018 15:27:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:33:42.018 15:27:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:33:42.018 15:27:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:42.018 15:27:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:33:42.018 [2024-07-23 15:27:37.374756] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:42.018 [2024-07-23 15:27:37.375024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126634 ] 00:33:42.277 [2024-07-23 15:27:37.527590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.277 [2024-07-23 15:27:37.572690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.536  Copying: 512/512 [B] (average 500 kBps) 00:33:42.536 00:33:42.536 15:27:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 752ivfjqp22gbk9kly1v09vuo54ms9tgvkqdo1n8xlvsh8jzsiokio4gz4lw6wmfosh5r3xueg78tv0welr60bidcjiccs5pna1dt3jwvxvmrg37fkcc2ypeusm863wxzfry4dl39g76hsxx8atj18uajjbuc0pi7fej7vmtknolluids003tvjwbsrx61n0x69950sd3tzc00xmtc0gh1myz3gex2aa2xbsa0kp22hrerdlk8sbz6zbes8wmcr9f8y50szgwbqlyw9duhvw3wfwli0ikxob9j2dtoqcn673q8rdd07hbh6h0qhsvuazgujs4suihfhjknbnde6bdxvpi09o6ci84ko0axirf6mt73aj5wm42r8twvdkbnrsc5o852k6q4noorjpggwe8fc0c953xqq8vlfavr99hyf3kk3y2qcxlyrn62ppkskogzchw3adxy490evqlz3ejpcfo1511gz78pq1fqeof54q0d2f9j5mtxk55uw124vj == \7\5\2\i\v\f\j\q\p\2\2\g\b\k\9\k\l\y\1\v\0\9\v\u\o\5\4\m\s\9\t\g\v\k\q\d\o\1\n\8\x\l\v\s\h\8\j\z\s\i\o\k\i\o\4\g\z\4\l\w\6\w\m\f\o\s\h\5\r\3\x\u\e\g\7\8\t\v\0\w\e\l\r\6\0\b\i\d\c\j\i\c\c\s\5\p\n\a\1\d\t\3\j\w\v\x\v\m\r\g\3\7\f\k\c\c\2\y\p\e\u\s\m\8\6\3\w\x\z\f\r\y\4\d\l\3\9\g\7\6\h\s\x\x\8\a\t\j\1\8\u\a\j\j\b\u\c\0\p\i\7\f\e\j\7\v\m\t\k\n\o\l\l\u\i\d\s\0\0\3\t\v\j\w\b\s\r\x\6\1\n\0\x\6\9\9\5\0\s\d\3\t\z\c\0\0\x\m\t\c\0\g\h\1\m\y\z\3\g\e\x\2\a\a\2\x\b\s\a\0\k\p\2\2\h\r\e\r\d\l\k\8\s\b\z\6\z\b\e\s\8\w\m\c\r\9\f\8\y\5\0\s\z\g\w\b\q\l\y\w\9\d\u\h\v\w\3\w\f\w\l\i\0\i\k\x\o\b\9\j\2\d\t\o\q\c\n\6\7\3\q\8\r\d\d\0\7\h\b\h\6\h\0\q\h\s\v\u\a\z\g\u\j\s\4\s\u\i\h\f\h\j\k\n\b\n\d\e\6\b\d\x\v\p\i\0\9\o\6\c\i\8\4\k\o\0\a\x\i\r\f\6\m\t\7\3\a\j\5\w\m\4\2\r\8\t\w\v\d\k\b\n\r\s\c\5\o\8\5\2\k\6\q\4\n\o\o\r\j\p\g\g\w\e\8\f\c\0\c\9\5\3\x\q\q\8\v\l\f\a\v\r\9\9\h\y\f\3\k\k\3\y\2\q\c\x\l\y\r\n\6\2\p\p\k\s\k\o\g\z\c\h\w\3\a\d\x\y\4\9\0\e\v\q\l\z\3\e\j\p\c\f\o\1\5\1\1\g\z\7\8\p\q\1\f\q\e\o\f\5\4\q\0\d\2\f\9\j\5\m\t\x\k\5\5\u\w\1\2\4\v\j ]] 00:33:42.536 15:27:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:42.536 15:27:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:33:42.536 [2024-07-23 15:27:37.949973] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:42.536 [2024-07-23 15:27:37.950148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126643 ] 00:33:42.795 [2024-07-23 15:27:38.103206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.795 [2024-07-23 15:27:38.150825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.054  Copying: 512/512 [B] (average 500 kBps) 00:33:43.054 00:33:43.054 15:27:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 752ivfjqp22gbk9kly1v09vuo54ms9tgvkqdo1n8xlvsh8jzsiokio4gz4lw6wmfosh5r3xueg78tv0welr60bidcjiccs5pna1dt3jwvxvmrg37fkcc2ypeusm863wxzfry4dl39g76hsxx8atj18uajjbuc0pi7fej7vmtknolluids003tvjwbsrx61n0x69950sd3tzc00xmtc0gh1myz3gex2aa2xbsa0kp22hrerdlk8sbz6zbes8wmcr9f8y50szgwbqlyw9duhvw3wfwli0ikxob9j2dtoqcn673q8rdd07hbh6h0qhsvuazgujs4suihfhjknbnde6bdxvpi09o6ci84ko0axirf6mt73aj5wm42r8twvdkbnrsc5o852k6q4noorjpggwe8fc0c953xqq8vlfavr99hyf3kk3y2qcxlyrn62ppkskogzchw3adxy490evqlz3ejpcfo1511gz78pq1fqeof54q0d2f9j5mtxk55uw124vj == \7\5\2\i\v\f\j\q\p\2\2\g\b\k\9\k\l\y\1\v\0\9\v\u\o\5\4\m\s\9\t\g\v\k\q\d\o\1\n\8\x\l\v\s\h\8\j\z\s\i\o\k\i\o\4\g\z\4\l\w\6\w\m\f\o\s\h\5\r\3\x\u\e\g\7\8\t\v\0\w\e\l\r\6\0\b\i\d\c\j\i\c\c\s\5\p\n\a\1\d\t\3\j\w\v\x\v\m\r\g\3\7\f\k\c\c\2\y\p\e\u\s\m\8\6\3\w\x\z\f\r\y\4\d\l\3\9\g\7\6\h\s\x\x\8\a\t\j\1\8\u\a\j\j\b\u\c\0\p\i\7\f\e\j\7\v\m\t\k\n\o\l\l\u\i\d\s\0\0\3\t\v\j\w\b\s\r\x\6\1\n\0\x\6\9\9\5\0\s\d\3\t\z\c\0\0\x\m\t\c\0\g\h\1\m\y\z\3\g\e\x\2\a\a\2\x\b\s\a\0\k\p\2\2\h\r\e\r\d\l\k\8\s\b\z\6\z\b\e\s\8\w\m\c\r\9\f\8\y\5\0\s\z\g\w\b\q\l\y\w\9\d\u\h\v\w\3\w\f\w\l\i\0\i\k\x\o\b\9\j\2\d\t\o\q\c\n\6\7\3\q\8\r\d\d\0\7\h\b\h\6\h\0\q\h\s\v\u\a\z\g\u\j\s\4\s\u\i\h\f\h\j\k\n\b\n\d\e\6\b\d\x\v\p\i\0\9\o\6\c\i\8\4\k\o\0\a\x\i\r\f\6\m\t\7\3\a\j\5\w\m\4\2\r\8\t\w\v\d\k\b\n\r\s\c\5\o\8\5\2\k\6\q\4\n\o\o\r\j\p\g\g\w\e\8\f\c\0\c\9\5\3\x\q\q\8\v\l\f\a\v\r\9\9\h\y\f\3\k\k\3\y\2\q\c\x\l\y\r\n\6\2\p\p\k\s\k\o\g\z\c\h\w\3\a\d\x\y\4\9\0\e\v\q\l\z\3\e\j\p\c\f\o\1\5\1\1\g\z\7\8\p\q\1\f\q\e\o\f\5\4\q\0\d\2\f\9\j\5\m\t\x\k\5\5\u\w\1\2\4\v\j ]] 00:33:43.054 15:27:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:43.054 15:27:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:33:43.312 [2024-07-23 15:27:38.538604] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:43.312 [2024-07-23 15:27:38.538808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126651 ] 00:33:43.312 [2024-07-23 15:27:38.691969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.312 [2024-07-23 15:27:38.740860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.830  Copying: 512/512 [B] (average 166 kBps) 00:33:43.830 00:33:43.830 15:27:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 752ivfjqp22gbk9kly1v09vuo54ms9tgvkqdo1n8xlvsh8jzsiokio4gz4lw6wmfosh5r3xueg78tv0welr60bidcjiccs5pna1dt3jwvxvmrg37fkcc2ypeusm863wxzfry4dl39g76hsxx8atj18uajjbuc0pi7fej7vmtknolluids003tvjwbsrx61n0x69950sd3tzc00xmtc0gh1myz3gex2aa2xbsa0kp22hrerdlk8sbz6zbes8wmcr9f8y50szgwbqlyw9duhvw3wfwli0ikxob9j2dtoqcn673q8rdd07hbh6h0qhsvuazgujs4suihfhjknbnde6bdxvpi09o6ci84ko0axirf6mt73aj5wm42r8twvdkbnrsc5o852k6q4noorjpggwe8fc0c953xqq8vlfavr99hyf3kk3y2qcxlyrn62ppkskogzchw3adxy490evqlz3ejpcfo1511gz78pq1fqeof54q0d2f9j5mtxk55uw124vj == \7\5\2\i\v\f\j\q\p\2\2\g\b\k\9\k\l\y\1\v\0\9\v\u\o\5\4\m\s\9\t\g\v\k\q\d\o\1\n\8\x\l\v\s\h\8\j\z\s\i\o\k\i\o\4\g\z\4\l\w\6\w\m\f\o\s\h\5\r\3\x\u\e\g\7\8\t\v\0\w\e\l\r\6\0\b\i\d\c\j\i\c\c\s\5\p\n\a\1\d\t\3\j\w\v\x\v\m\r\g\3\7\f\k\c\c\2\y\p\e\u\s\m\8\6\3\w\x\z\f\r\y\4\d\l\3\9\g\7\6\h\s\x\x\8\a\t\j\1\8\u\a\j\j\b\u\c\0\p\i\7\f\e\j\7\v\m\t\k\n\o\l\l\u\i\d\s\0\0\3\t\v\j\w\b\s\r\x\6\1\n\0\x\6\9\9\5\0\s\d\3\t\z\c\0\0\x\m\t\c\0\g\h\1\m\y\z\3\g\e\x\2\a\a\2\x\b\s\a\0\k\p\2\2\h\r\e\r\d\l\k\8\s\b\z\6\z\b\e\s\8\w\m\c\r\9\f\8\y\5\0\s\z\g\w\b\q\l\y\w\9\d\u\h\v\w\3\w\f\w\l\i\0\i\k\x\o\b\9\j\2\d\t\o\q\c\n\6\7\3\q\8\r\d\d\0\7\h\b\h\6\h\0\q\h\s\v\u\a\z\g\u\j\s\4\s\u\i\h\f\h\j\k\n\b\n\d\e\6\b\d\x\v\p\i\0\9\o\6\c\i\8\4\k\o\0\a\x\i\r\f\6\m\t\7\3\a\j\5\w\m\4\2\r\8\t\w\v\d\k\b\n\r\s\c\5\o\8\5\2\k\6\q\4\n\o\o\r\j\p\g\g\w\e\8\f\c\0\c\9\5\3\x\q\q\8\v\l\f\a\v\r\9\9\h\y\f\3\k\k\3\y\2\q\c\x\l\y\r\n\6\2\p\p\k\s\k\o\g\z\c\h\w\3\a\d\x\y\4\9\0\e\v\q\l\z\3\e\j\p\c\f\o\1\5\1\1\g\z\7\8\p\q\1\f\q\e\o\f\5\4\q\0\d\2\f\9\j\5\m\t\x\k\5\5\u\w\1\2\4\v\j ]] 00:33:43.830 15:27:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:33:43.830 15:27:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:33:43.830 [2024-07-23 15:27:39.131190] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:43.830 [2024-07-23 15:27:39.131382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126660 ] 00:33:44.089 [2024-07-23 15:27:39.284349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.089 [2024-07-23 15:27:39.331002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.348  Copying: 512/512 [B] (average 125 kBps) 00:33:44.348 00:33:44.348 ************************************ 00:33:44.348 END TEST dd_flags_misc_forced_aio 00:33:44.348 ************************************ 00:33:44.348 15:27:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 752ivfjqp22gbk9kly1v09vuo54ms9tgvkqdo1n8xlvsh8jzsiokio4gz4lw6wmfosh5r3xueg78tv0welr60bidcjiccs5pna1dt3jwvxvmrg37fkcc2ypeusm863wxzfry4dl39g76hsxx8atj18uajjbuc0pi7fej7vmtknolluids003tvjwbsrx61n0x69950sd3tzc00xmtc0gh1myz3gex2aa2xbsa0kp22hrerdlk8sbz6zbes8wmcr9f8y50szgwbqlyw9duhvw3wfwli0ikxob9j2dtoqcn673q8rdd07hbh6h0qhsvuazgujs4suihfhjknbnde6bdxvpi09o6ci84ko0axirf6mt73aj5wm42r8twvdkbnrsc5o852k6q4noorjpggwe8fc0c953xqq8vlfavr99hyf3kk3y2qcxlyrn62ppkskogzchw3adxy490evqlz3ejpcfo1511gz78pq1fqeof54q0d2f9j5mtxk55uw124vj == \7\5\2\i\v\f\j\q\p\2\2\g\b\k\9\k\l\y\1\v\0\9\v\u\o\5\4\m\s\9\t\g\v\k\q\d\o\1\n\8\x\l\v\s\h\8\j\z\s\i\o\k\i\o\4\g\z\4\l\w\6\w\m\f\o\s\h\5\r\3\x\u\e\g\7\8\t\v\0\w\e\l\r\6\0\b\i\d\c\j\i\c\c\s\5\p\n\a\1\d\t\3\j\w\v\x\v\m\r\g\3\7\f\k\c\c\2\y\p\e\u\s\m\8\6\3\w\x\z\f\r\y\4\d\l\3\9\g\7\6\h\s\x\x\8\a\t\j\1\8\u\a\j\j\b\u\c\0\p\i\7\f\e\j\7\v\m\t\k\n\o\l\l\u\i\d\s\0\0\3\t\v\j\w\b\s\r\x\6\1\n\0\x\6\9\9\5\0\s\d\3\t\z\c\0\0\x\m\t\c\0\g\h\1\m\y\z\3\g\e\x\2\a\a\2\x\b\s\a\0\k\p\2\2\h\r\e\r\d\l\k\8\s\b\z\6\z\b\e\s\8\w\m\c\r\9\f\8\y\5\0\s\z\g\w\b\q\l\y\w\9\d\u\h\v\w\3\w\f\w\l\i\0\i\k\x\o\b\9\j\2\d\t\o\q\c\n\6\7\3\q\8\r\d\d\0\7\h\b\h\6\h\0\q\h\s\v\u\a\z\g\u\j\s\4\s\u\i\h\f\h\j\k\n\b\n\d\e\6\b\d\x\v\p\i\0\9\o\6\c\i\8\4\k\o\0\a\x\i\r\f\6\m\t\7\3\a\j\5\w\m\4\2\r\8\t\w\v\d\k\b\n\r\s\c\5\o\8\5\2\k\6\q\4\n\o\o\r\j\p\g\g\w\e\8\f\c\0\c\9\5\3\x\q\q\8\v\l\f\a\v\r\9\9\h\y\f\3\k\k\3\y\2\q\c\x\l\y\r\n\6\2\p\p\k\s\k\o\g\z\c\h\w\3\a\d\x\y\4\9\0\e\v\q\l\z\3\e\j\p\c\f\o\1\5\1\1\g\z\7\8\p\q\1\f\q\e\o\f\5\4\q\0\d\2\f\9\j\5\m\t\x\k\5\5\u\w\1\2\4\v\j ]] 00:33:44.348 00:33:44.348 real 0m4.698s 00:33:44.348 user 0m2.208s 00:33:44.348 sys 0m1.533s 00:33:44.348 15:27:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:44.348 15:27:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:33:44.348 15:27:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:33:44.348 15:27:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:33:44.348 15:27:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:33:44.348 15:27:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:33:44.348 ************************************ 00:33:44.348 END TEST spdk_dd_posix 00:33:44.348 ************************************ 00:33:44.348 00:33:44.348 real 0m21.474s 00:33:44.348 user 0m9.067s 00:33:44.348 sys 0m6.736s 00:33:44.348 15:27:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:44.348 
15:27:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:33:44.348 15:27:39 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:33:44.348 15:27:39 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:33:44.348 15:27:39 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:44.348 15:27:39 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:44.348 15:27:39 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:33:44.348 ************************************ 00:33:44.348 START TEST spdk_dd_malloc 00:33:44.348 ************************************ 00:33:44.348 15:27:39 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:33:44.607 * Looking for test storage... 00:33:44.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # 
PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # export PATH 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:33:44.607 ************************************ 00:33:44.607 START TEST dd_malloc_copy 00:33:44.607 ************************************ 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:33:44.607 15:27:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:33:44.608 15:27:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:33:44.608 15:27:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A 
method_bdev_malloc_create_0 00:33:44.608 15:27:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:33:44.608 15:27:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:33:44.608 15:27:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:33:44.608 15:27:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:33:44.608 15:27:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:33:44.608 15:27:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:33:44.608 { 00:33:44.608 "subsystems": [ 00:33:44.608 { 00:33:44.608 "subsystem": "bdev", 00:33:44.608 "config": [ 00:33:44.608 { 00:33:44.608 "params": { 00:33:44.608 "block_size": 512, 00:33:44.608 "num_blocks": 1048576, 00:33:44.608 "name": "malloc0" 00:33:44.608 }, 00:33:44.608 "method": "bdev_malloc_create" 00:33:44.608 }, 00:33:44.608 { 00:33:44.608 "params": { 00:33:44.608 "block_size": 512, 00:33:44.608 "num_blocks": 1048576, 00:33:44.608 "name": "malloc1" 00:33:44.608 }, 00:33:44.608 "method": "bdev_malloc_create" 00:33:44.608 }, 00:33:44.608 { 00:33:44.608 "method": "bdev_wait_for_examine" 00:33:44.608 } 00:33:44.608 ] 00:33:44.608 } 00:33:44.608 ] 00:33:44.608 } 00:33:44.608 [2024-07-23 15:27:39.945329] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:44.608 [2024-07-23 15:27:39.945548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126735 ] 00:33:44.866 [2024-07-23 15:27:40.095966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.866 [2024-07-23 15:27:40.141604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.005  Copying: 211/512 [MB] (211 MBps) Copying: 425/512 [MB] (213 MBps) Copying: 512/512 [MB] (average 212 MBps) 00:33:48.005 00:33:48.005 15:27:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:33:48.005 15:27:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:33:48.005 15:27:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:33:48.005 15:27:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:33:48.264 { 00:33:48.264 "subsystems": [ 00:33:48.264 { 00:33:48.264 "subsystem": "bdev", 00:33:48.264 "config": [ 00:33:48.264 { 00:33:48.264 "params": { 00:33:48.264 "block_size": 512, 00:33:48.264 "num_blocks": 1048576, 00:33:48.264 "name": "malloc0" 00:33:48.264 }, 00:33:48.264 "method": "bdev_malloc_create" 00:33:48.264 }, 00:33:48.264 { 00:33:48.264 "params": { 00:33:48.264 "block_size": 512, 00:33:48.264 "num_blocks": 1048576, 00:33:48.264 "name": "malloc1" 00:33:48.264 }, 00:33:48.264 "method": "bdev_malloc_create" 00:33:48.264 }, 00:33:48.264 { 00:33:48.264 "method": "bdev_wait_for_examine" 00:33:48.264 } 00:33:48.264 ] 00:33:48.264 } 00:33:48.264 ] 00:33:48.264 } 00:33:48.264 [2024-07-23 15:27:43.485093] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:48.264 [2024-07-23 15:27:43.485280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126778 ] 00:33:48.264 [2024-07-23 15:27:43.636114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.264 [2024-07-23 15:27:43.681007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.783  Copying: 211/512 [MB] (211 MBps) Copying: 422/512 [MB] (211 MBps) Copying: 512/512 [MB] (average 211 MBps) 00:33:51.783 00:33:51.783 00:33:51.783 real 0m7.105s 00:33:51.783 user 0m6.007s 00:33:51.783 sys 0m0.916s 00:33:51.783 15:27:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:51.783 15:27:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:33:51.783 ************************************ 00:33:51.783 END TEST dd_malloc_copy 00:33:51.783 ************************************ 00:33:51.783 15:27:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:33:51.783 00:33:51.783 real 0m7.261s 00:33:51.783 user 0m6.066s 00:33:51.783 sys 0m1.021s 00:33:51.783 15:27:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:51.783 ************************************ 00:33:51.783 END TEST spdk_dd_malloc 00:33:51.783 ************************************ 00:33:51.783 15:27:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:33:51.783 15:27:47 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:33:51.783 15:27:47 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:33:51.783 15:27:47 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:51.783 15:27:47 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:51.783 15:27:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:33:51.783 ************************************ 00:33:51.783 START TEST spdk_dd_bdev_to_bdev 00:33:51.783 ************************************ 00:33:51.783 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:33:51.783 * Looking for test storage... 
00:33:51.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:33:51.783 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:51.783 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.783 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.783 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.783 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:51.783 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:51.783 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:51.783 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:51.783 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # export PATH 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:33:51.784 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:33:52.043 [2024-07-23 15:27:47.253158] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:52.043 [2024-07-23 15:27:47.253359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126881 ] 00:33:52.043 [2024-07-23 15:27:47.404843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.043 [2024-07-23 15:27:47.452449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.562  Copying: 256/256 [MB] (average 1391 MBps) 00:33:52.562 00:33:52.562 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:52.562 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:52.562 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:33:52.562 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:33:52.562 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:33:52.562 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:33:52.562 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:52.562 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:33:52.562 ************************************ 00:33:52.562 START TEST dd_inflate_file 00:33:52.562 ************************************ 00:33:52.562 15:27:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:33:52.821 [2024-07-23 15:27:48.029104] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:52.821 [2024-07-23 15:27:48.029320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126888 ] 00:33:52.821 [2024-07-23 15:27:48.180634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.821 [2024-07-23 15:27:48.229157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.339  Copying: 64/64 [MB] (average 1306 MBps) 00:33:53.339 00:33:53.339 00:33:53.339 real 0m0.629s 00:33:53.339 user 0m0.269s 00:33:53.339 sys 0m0.246s 00:33:53.339 15:27:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:53.339 15:27:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:33:53.339 ************************************ 00:33:53.339 END TEST dd_inflate_file 00:33:53.339 ************************************ 00:33:53.339 15:27:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:33:53.339 15:27:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:33:53.339 15:27:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:33:53.339 15:27:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:33:53.339 15:27:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:33:53.339 15:27:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:33:53.339 15:27:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:33:53.339 15:27:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:53.339 15:27:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:33:53.339 15:27:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:33:53.339 ************************************ 00:33:53.339 START TEST dd_copy_to_out_bdev 00:33:53.339 ************************************ 00:33:53.339 15:27:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:33:53.339 { 00:33:53.339 "subsystems": [ 00:33:53.339 { 00:33:53.339 "subsystem": "bdev", 00:33:53.339 "config": [ 00:33:53.339 { 00:33:53.339 "params": { 00:33:53.339 "block_size": 4096, 00:33:53.339 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:33:53.339 "name": "aio1" 00:33:53.339 }, 00:33:53.339 "method": "bdev_aio_create" 00:33:53.339 }, 00:33:53.339 { 00:33:53.339 "params": { 00:33:53.339 "trtype": "pcie", 00:33:53.340 "traddr": "0000:00:10.0", 00:33:53.340 "name": "Nvme0" 00:33:53.340 }, 00:33:53.340 "method": "bdev_nvme_attach_controller" 00:33:53.340 }, 00:33:53.340 { 00:33:53.340 "method": "bdev_wait_for_examine" 00:33:53.340 } 00:33:53.340 ] 00:33:53.340 } 00:33:53.340 ] 00:33:53.340 } 00:33:53.340 [2024-07-23 15:27:48.731865] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:53.340 [2024-07-23 15:27:48.732054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126925 ] 00:33:53.598 [2024-07-23 15:27:48.881537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.599 [2024-07-23 15:27:48.927168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.974  Copying: 64/64 [MB] (average 73 MBps) 00:33:54.974 00:33:54.974 00:33:54.974 real 0m1.602s 00:33:54.974 user 0m1.226s 00:33:54.974 sys 0m0.268s 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:33:54.974 ************************************ 00:33:54.974 END TEST dd_copy_to_out_bdev 00:33:54.974 ************************************ 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:33:54.974 ************************************ 00:33:54.974 START TEST dd_offset_magic 00:33:54.974 ************************************ 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:33:54.974 15:27:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:33:54.974 { 00:33:54.974 "subsystems": [ 00:33:54.974 { 00:33:54.974 "subsystem": "bdev", 00:33:54.974 "config": [ 00:33:54.974 { 00:33:54.974 "params": { 00:33:54.974 "block_size": 4096, 00:33:54.974 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:33:54.974 "name": "aio1" 00:33:54.974 }, 00:33:54.974 "method": "bdev_aio_create" 00:33:54.974 }, 00:33:54.974 { 00:33:54.974 "params": { 00:33:54.974 "trtype": "pcie", 00:33:54.974 "traddr": "0000:00:10.0", 00:33:54.974 "name": "Nvme0" 00:33:54.974 }, 00:33:54.974 "method": "bdev_nvme_attach_controller" 00:33:54.974 }, 00:33:54.974 { 
00:33:54.974 "method": "bdev_wait_for_examine" 00:33:54.974 } 00:33:54.974 ] 00:33:54.974 } 00:33:54.974 ] 00:33:54.974 } 00:33:54.974 [2024-07-23 15:27:50.402658] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:33:54.975 [2024-07-23 15:27:50.403086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126965 ] 00:33:55.232 [2024-07-23 15:27:50.555953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.232 [2024-07-23 15:27:50.601061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.366  Copying: 65/65 [MB] (average 143 MBps) 00:33:56.366 00:33:56.366 15:27:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:33:56.366 15:27:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:33:56.366 15:27:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:33:56.366 15:27:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:33:56.366 { 00:33:56.366 "subsystems": [ 00:33:56.366 { 00:33:56.366 "subsystem": "bdev", 00:33:56.366 "config": [ 00:33:56.366 { 00:33:56.366 "params": { 00:33:56.366 "block_size": 4096, 00:33:56.366 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:33:56.366 "name": "aio1" 00:33:56.366 }, 00:33:56.366 "method": "bdev_aio_create" 00:33:56.366 }, 00:33:56.366 { 00:33:56.366 "params": { 00:33:56.366 "trtype": "pcie", 00:33:56.366 "traddr": "0000:00:10.0", 00:33:56.366 "name": "Nvme0" 00:33:56.366 }, 00:33:56.366 "method": "bdev_nvme_attach_controller" 00:33:56.366 }, 00:33:56.366 { 00:33:56.366 "method": "bdev_wait_for_examine" 00:33:56.366 } 00:33:56.366 ] 00:33:56.366 } 00:33:56.366 ] 00:33:56.366 } 00:33:56.366 [2024-07-23 15:27:51.587169] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:56.366 [2024-07-23 15:27:51.587358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126986 ] 00:33:56.366 [2024-07-23 15:27:51.742252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.366 [2024-07-23 15:27:51.798955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.884  Copying: 1024/1024 [kB] (average 1000 MBps) 00:33:56.884 00:33:56.884 15:27:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:33:56.884 15:27:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:33:56.884 15:27:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:33:56.884 15:27:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:33:56.884 15:27:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:33:56.884 15:27:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:33:56.884 15:27:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:33:56.884 { 00:33:56.884 "subsystems": [ 00:33:56.884 { 00:33:56.884 "subsystem": "bdev", 00:33:56.884 "config": [ 00:33:56.884 { 00:33:56.884 "params": { 00:33:56.884 "block_size": 4096, 00:33:56.884 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:33:56.884 "name": "aio1" 00:33:56.884 }, 00:33:56.884 "method": "bdev_aio_create" 00:33:56.884 }, 00:33:56.884 { 00:33:56.884 "params": { 00:33:56.884 "trtype": "pcie", 00:33:56.884 "traddr": "0000:00:10.0", 00:33:56.884 "name": "Nvme0" 00:33:56.884 }, 00:33:56.884 "method": "bdev_nvme_attach_controller" 00:33:56.884 }, 00:33:56.884 { 00:33:56.884 "method": "bdev_wait_for_examine" 00:33:56.884 } 00:33:56.884 ] 00:33:56.884 } 00:33:56.884 ] 00:33:56.884 } 00:33:57.143 [2024-07-23 15:27:52.359044] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:57.143 [2024-07-23 15:27:52.359423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127006 ] 00:33:57.143 [2024-07-23 15:27:52.512288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.143 [2024-07-23 15:27:52.557355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.970  Copying: 65/65 [MB] (average 182 MBps) 00:33:57.970 00:33:57.970 15:27:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:33:57.970 15:27:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:33:57.970 15:27:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:33:57.970 15:27:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:33:57.970 { 00:33:57.970 "subsystems": [ 00:33:57.970 { 00:33:57.970 "subsystem": "bdev", 00:33:57.970 "config": [ 00:33:57.970 { 00:33:57.970 "params": { 00:33:57.970 "block_size": 4096, 00:33:57.970 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:33:57.970 "name": "aio1" 00:33:57.970 }, 00:33:57.970 "method": "bdev_aio_create" 00:33:57.970 }, 00:33:57.970 { 00:33:57.970 "params": { 00:33:57.970 "trtype": "pcie", 00:33:57.970 "traddr": "0000:00:10.0", 00:33:57.970 "name": "Nvme0" 00:33:57.970 }, 00:33:57.970 "method": "bdev_nvme_attach_controller" 00:33:57.970 }, 00:33:57.970 { 00:33:57.970 "method": "bdev_wait_for_examine" 00:33:57.970 } 00:33:57.970 ] 00:33:57.970 } 00:33:57.970 ] 00:33:57.970 } 00:33:57.970 [2024-07-23 15:27:53.403246] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:57.970 [2024-07-23 15:27:53.403642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127023 ] 00:33:58.228 [2024-07-23 15:27:53.561999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.228 [2024-07-23 15:27:53.610814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:58.747  Copying: 1024/1024 [kB] (average 1000 MBps) 00:33:58.747 00:33:58.747 ************************************ 00:33:58.747 END TEST dd_offset_magic 00:33:58.747 ************************************ 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:33:58.747 00:33:58.747 real 0m3.720s 00:33:58.747 user 0m1.603s 00:33:58.747 sys 0m1.074s 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:33:58.747 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:33:58.747 { 00:33:58.747 "subsystems": [ 00:33:58.747 { 00:33:58.747 "subsystem": "bdev", 00:33:58.747 "config": [ 00:33:58.747 { 00:33:58.747 "params": { 00:33:58.747 "block_size": 4096, 00:33:58.747 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:33:58.747 "name": "aio1" 00:33:58.747 }, 00:33:58.747 "method": "bdev_aio_create" 00:33:58.747 }, 00:33:58.747 { 00:33:58.747 "params": { 00:33:58.747 "trtype": "pcie", 00:33:58.747 "traddr": "0000:00:10.0", 00:33:58.747 "name": "Nvme0" 00:33:58.747 }, 00:33:58.747 "method": "bdev_nvme_attach_controller" 00:33:58.747 }, 00:33:58.747 { 00:33:58.747 "method": "bdev_wait_for_examine" 00:33:58.747 } 00:33:58.747 ] 00:33:58.747 } 00:33:58.747 ] 00:33:58.747 } 00:33:58.747 [2024-07-23 15:27:54.165542] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:58.747 [2024-07-23 15:27:54.165815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127053 ] 00:33:59.013 [2024-07-23 15:27:54.307232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.014 [2024-07-23 15:27:54.352948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.530  Copying: 5120/5120 [kB] (average 1250 MBps) 00:33:59.530 00:33:59.530 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:33:59.530 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:33:59.530 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:33:59.530 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:33:59.530 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:33:59.530 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:33:59.530 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:33:59.530 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:33:59.530 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:33:59.530 15:27:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:33:59.530 { 00:33:59.530 "subsystems": [ 00:33:59.530 { 00:33:59.530 "subsystem": "bdev", 00:33:59.530 "config": [ 00:33:59.530 { 00:33:59.530 "params": { 00:33:59.530 "block_size": 4096, 00:33:59.530 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:33:59.530 "name": "aio1" 00:33:59.530 }, 00:33:59.530 "method": "bdev_aio_create" 00:33:59.530 }, 00:33:59.530 { 00:33:59.531 "params": { 00:33:59.531 "trtype": "pcie", 00:33:59.531 "traddr": "0000:00:10.0", 00:33:59.531 "name": "Nvme0" 00:33:59.531 }, 00:33:59.531 "method": "bdev_nvme_attach_controller" 00:33:59.531 }, 00:33:59.531 { 00:33:59.531 "method": "bdev_wait_for_examine" 00:33:59.531 } 00:33:59.531 ] 00:33:59.531 } 00:33:59.531 ] 00:33:59.531 } 00:33:59.531 [2024-07-23 15:27:54.843439] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:33:59.531 [2024-07-23 15:27:54.843651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127070 ] 00:33:59.789 [2024-07-23 15:27:54.995227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.789 [2024-07-23 15:27:55.044391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:00.307  Copying: 5120/5120 [kB] (average 227 MBps) 00:34:00.307 00:34:00.307 15:27:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:34:00.307 ************************************ 00:34:00.307 END TEST spdk_dd_bdev_to_bdev 00:34:00.307 ************************************ 00:34:00.307 00:34:00.307 real 0m8.483s 00:34:00.307 user 0m4.222s 00:34:00.307 sys 0m2.665s 00:34:00.307 15:27:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:00.307 15:27:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:34:00.307 15:27:55 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:34:00.307 15:27:55 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:34:00.307 15:27:55 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:34:00.307 15:27:55 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:00.307 15:27:55 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:00.307 15:27:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:34:00.307 ************************************ 00:34:00.307 START TEST spdk_dd_sparse 00:34:00.307 ************************************ 00:34:00.307 15:27:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:34:00.307 * Looking for test storage... 
00:34:00.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:34:00.307 15:27:55 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:00.307 15:27:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:00.307 15:27:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:00.307 15:27:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:00.307 15:27:55 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:00.307 15:27:55 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:00.308 15:27:55 
spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # export PATH 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:34:00.308 1+0 records in 00:34:00.308 1+0 records out 00:34:00.308 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00791926 s, 530 MB/s 00:34:00.308 15:27:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:34:00.567 1+0 records in 00:34:00.567 1+0 records out 00:34:00.567 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00659254 s, 636 MB/s 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:34:00.567 1+0 records in 00:34:00.567 1+0 records out 00:34:00.567 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0111174 s, 377 MB/s 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:34:00.567 ************************************ 00:34:00.567 START TEST dd_sparse_file_to_file 00:34:00.567 ************************************ 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' 
['name']='dd_aio' ['block_size']='4096') 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:34:00.567 15:27:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:34:00.567 { 00:34:00.567 "subsystems": [ 00:34:00.567 { 00:34:00.567 "subsystem": "bdev", 00:34:00.567 "config": [ 00:34:00.567 { 00:34:00.567 "params": { 00:34:00.567 "block_size": 4096, 00:34:00.567 "filename": "dd_sparse_aio_disk", 00:34:00.567 "name": "dd_aio" 00:34:00.567 }, 00:34:00.567 "method": "bdev_aio_create" 00:34:00.567 }, 00:34:00.567 { 00:34:00.567 "params": { 00:34:00.567 "lvs_name": "dd_lvstore", 00:34:00.567 "bdev_name": "dd_aio" 00:34:00.567 }, 00:34:00.567 "method": "bdev_lvol_create_lvstore" 00:34:00.567 }, 00:34:00.567 { 00:34:00.567 "method": "bdev_wait_for_examine" 00:34:00.567 } 00:34:00.567 ] 00:34:00.567 } 00:34:00.567 ] 00:34:00.567 } 00:34:00.567 [2024-07-23 15:27:55.843439] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:34:00.567 [2024-07-23 15:27:55.843654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127138 ] 00:34:00.567 [2024-07-23 15:27:55.995301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.826 [2024-07-23 15:27:56.042304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.085  Copying: 12/36 [MB] (average 1090 MBps) 00:34:01.085 00:34:01.085 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:34:01.085 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:34:01.085 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:34:01.085 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:34:01.085 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:34:01.085 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:34:01.085 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:34:01.085 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:34:01.085 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:34:01.085 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:34:01.085 00:34:01.085 real 0m0.707s 00:34:01.085 user 0m0.318s 00:34:01.085 sys 0m0.264s 00:34:01.085 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:01.085 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:34:01.085 ************************************ 00:34:01.085 END TEST dd_sparse_file_to_file 00:34:01.085 ************************************ 00:34:01.344 15:27:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:34:01.344 15:27:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:34:01.344 15:27:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:01.344 15:27:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:01.344 15:27:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:34:01.344 ************************************ 00:34:01.344 START TEST dd_sparse_file_to_bdev 00:34:01.344 ************************************ 00:34:01.344 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:34:01.344 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:34:01.344 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:34:01.344 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:34:01.344 15:27:56 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:34:01.344 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:34:01.344 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:34:01.344 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:34:01.344 15:27:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:34:01.344 { 00:34:01.344 "subsystems": [ 00:34:01.344 { 00:34:01.344 "subsystem": "bdev", 00:34:01.344 "config": [ 00:34:01.344 { 00:34:01.344 "params": { 00:34:01.344 "block_size": 4096, 00:34:01.344 "filename": "dd_sparse_aio_disk", 00:34:01.344 "name": "dd_aio" 00:34:01.344 }, 00:34:01.344 "method": "bdev_aio_create" 00:34:01.344 }, 00:34:01.344 { 00:34:01.344 "params": { 00:34:01.344 "lvs_name": "dd_lvstore", 00:34:01.344 "lvol_name": "dd_lvol", 00:34:01.344 "size_in_mib": 36, 00:34:01.344 "thin_provision": true 00:34:01.344 }, 00:34:01.344 "method": "bdev_lvol_create" 00:34:01.344 }, 00:34:01.344 { 00:34:01.344 "method": "bdev_wait_for_examine" 00:34:01.344 } 00:34:01.344 ] 00:34:01.344 } 00:34:01.344 ] 00:34:01.344 } 00:34:01.344 [2024-07-23 15:27:56.605907] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:01.344 [2024-07-23 15:27:56.606080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127184 ] 00:34:01.344 [2024-07-23 15:27:56.755646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.603 [2024-07-23 15:27:56.801439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.861  Copying: 12/36 [MB] (average 122 MBps) 00:34:01.861 00:34:01.861 00:34:01.861 real 0m0.730s 00:34:01.861 user 0m0.412s 00:34:01.861 sys 0m0.217s 00:34:01.861 ************************************ 00:34:01.861 END TEST dd_sparse_file_to_bdev 00:34:01.861 ************************************ 00:34:01.861 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:01.861 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:34:02.121 15:27:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:34:02.121 15:27:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:34:02.121 15:27:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:02.121 15:27:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:02.121 15:27:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:34:02.121 ************************************ 00:34:02.121 START TEST dd_sparse_bdev_to_file 00:34:02.121 ************************************ 00:34:02.121 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:34:02.121 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:34:02.121 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 
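Note on the dd_sparse_* tests in this stretch of the log: each case builds a throwaway JSON bdev config (the { "subsystems": ... } blocks dumped above and below, handed to spdk_dd through --json /dev/fd/62), copies with --sparse so holes in the input are skipped, and then compares stat --printf=%s (apparent size) against stat --printf=%b (allocated 512-byte blocks) to confirm the holes survived the copy. The following is a minimal hand-written sketch of that round trip, not part of the captured run; it reuses the suite's paths, sizes, and bdev names, while the config file name sparse.json and the working directory are assumptions.

    # sketch only, not taken from the log above
    cd /home/vagrant/spdk_repo/spdk/test/dd              # assumed working directory for the suite's relative paths
    truncate dd_sparse_aio_disk --size 104857600          # 100 MiB backing file for the aio bdev
    dd if=/dev/zero of=file_zero1 bs=4M count=1           # 4 MiB of data at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4    # 4 MiB at offset 16 MiB, leaving a hole before it
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8    # 4 MiB at offset 32 MiB, leaving another hole
    # same config shape as the gen_conf dumps above; sparse.json is a stand-in name,
    # the suite streams this JSON over /dev/fd/62 instead of writing a file
    echo '{ "subsystems": [ { "subsystem": "bdev", "config": [
      { "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" },
        "method": "bdev_aio_create" },
      { "params": { "lvs_name": "dd_lvstore", "bdev_name": "dd_aio" },
        "method": "bdev_lvol_create_lvstore" },
      { "method": "bdev_wait_for_examine" } ] } ] }' > sparse.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json sparse.json
    stat --printf=%s file_zero1    # 37748736, apparent size is 36 MiB
    stat --printf=%s file_zero2    # 37748736, apparent sizes must match
    stat --printf=%b file_zero1    # 24576 512-byte blocks, only 12 MiB actually allocated
    stat --printf=%b file_zero2    # 24576, so the holes were preserved by --sparse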
00:34:02.121 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:34:02.121 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:34:02.121 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:34:02.121 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:34:02.121 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:34:02.121 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:34:02.121 { 00:34:02.121 "subsystems": [ 00:34:02.121 { 00:34:02.121 "subsystem": "bdev", 00:34:02.121 "config": [ 00:34:02.121 { 00:34:02.121 "params": { 00:34:02.121 "block_size": 4096, 00:34:02.121 "filename": "dd_sparse_aio_disk", 00:34:02.121 "name": "dd_aio" 00:34:02.121 }, 00:34:02.121 "method": "bdev_aio_create" 00:34:02.121 }, 00:34:02.121 { 00:34:02.121 "method": "bdev_wait_for_examine" 00:34:02.121 } 00:34:02.121 ] 00:34:02.121 } 00:34:02.121 ] 00:34:02.121 } 00:34:02.121 [2024-07-23 15:27:57.377111] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:02.121 [2024-07-23 15:27:57.377259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127217 ] 00:34:02.121 [2024-07-23 15:27:57.515580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.379 [2024-07-23 15:27:57.564865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.637  Copying: 12/36 [MB] (average 1090 MBps) 00:34:02.637 00:34:02.637 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:34:02.637 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:34:02.637 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:34:02.637 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:34:02.637 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:34:02.637 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:34:02.637 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:34:02.637 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:34:02.637 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:34:02.637 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:34:02.637 00:34:02.637 real 0m0.645s 00:34:02.637 user 0m0.310s 00:34:02.637 sys 0m0.226s 00:34:02.637 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:02.637 15:27:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:34:02.637 
************************************ 00:34:02.637 END TEST dd_sparse_bdev_to_file 00:34:02.637 ************************************ 00:34:02.637 15:27:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:34:02.637 15:27:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:34:02.637 15:27:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:34:02.637 15:27:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:34:02.637 15:27:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:34:02.637 15:27:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:34:02.637 00:34:02.637 real 0m2.439s 00:34:02.637 user 0m1.149s 00:34:02.637 sys 0m0.960s 00:34:02.637 15:27:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:02.637 15:27:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:34:02.637 ************************************ 00:34:02.637 END TEST spdk_dd_sparse 00:34:02.637 ************************************ 00:34:02.895 15:27:58 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:34:02.895 15:27:58 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:34:02.895 15:27:58 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:02.895 15:27:58 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:02.895 15:27:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:34:02.895 ************************************ 00:34:02.895 START TEST spdk_dd_negative 00:34:02.895 ************************************ 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:34:02.895 * Looking for test storage... 
00:34:02.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 
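Note on the spdk_dd_negative cases that start here: each dd_invalid_* / dd_double_* / dd_no_* / dd_wrong_* test wraps spdk_dd in the NOT helper from autotest_common.sh and passes only if the binary exits non-zero with the argument-validation error recorded further down (spdk_dd.c:1480-1511, plus spdk_dd.c:1184 for the oversized block size). Below is a hand-written recap of the failing invocations, not part of the captured run; the scratch files are the suite's dd.dump0/dd.dump1, and running from test/dd with relative paths is an assumption.

    # sketch only, every call below is expected to fail with the quoted error
    cd /home/vagrant/spdk_repo/spdk/test/dd              # assumed working directory
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    touch dd.dump0 dd.dump1                              # the suite creates these before the negative cases

    $DD --ii= --ob=                           # unrecognized option '--ii=', prints the usage text
    $DD --if=dd.dump0 --ib= --ob=             # You may specify either --if or --ib, but not both.
    $DD --if=dd.dump0 --of=dd.dump1 --ob=     # You may specify either --of or --ob, but not both.
    $DD --ob=                                 # You must specify either --if or --ib
    $DD --if=dd.dump0                         # You must specify either --of or --ob
    $DD --if=dd.dump0 --of=dd.dump1 --bs=0    # Invalid --bs value
    $DD --if=dd.dump0 --of=dd.dump1 --bs=99999999999999
                                              # Cannot allocate memory - try smaller block size value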
00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # export PATH 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:34:02.895 ************************************ 00:34:02.895 START TEST dd_invalid_arguments 00:34:02.895 ************************************ 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:34:02.895 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:34:02.896 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:02.896 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:02.896 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:02.896 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:02.896 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:02.896 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:02.896 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:02.896 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:34:02.896 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:34:02.896 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:34:02.896 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:34:02.896 00:34:02.896 CPU options: 00:34:02.896 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:34:02.896 (like [0,1,10]) 00:34:02.896 --lcores lcore to CPU mapping list. The list is in the format: 00:34:02.896 [<,lcores[@CPUs]>...] 00:34:02.896 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:34:02.896 Within the group, '-' is used for range separator, 00:34:02.896 ',' is used for single number separator. 00:34:02.896 '( )' can be omitted for single element group, 00:34:02.896 '@' can be omitted if cpus and lcores have the same value 00:34:02.896 --disable-cpumask-locks Disable CPU core lock files. 00:34:02.896 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:34:02.896 pollers in the app support interrupt mode) 00:34:02.896 -p, --main-core main (primary) core for DPDK 00:34:02.896 00:34:02.896 Configuration options: 00:34:02.896 -c, --config, --json JSON config file 00:34:02.896 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:34:02.896 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:34:02.896 --wait-for-rpc wait for RPCs to initialize subsystems 00:34:02.896 --rpcs-allowed comma-separated list of permitted RPCS 00:34:02.896 --json-ignore-init-errors don't exit on invalid config entry 00:34:02.896 00:34:02.896 Memory options: 00:34:02.896 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:34:02.896 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:34:02.896 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:34:02.896 -R, --huge-unlink unlink huge files after initialization 00:34:02.896 -n, --mem-channels number of memory channels used for DPDK 00:34:02.896 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:34:02.896 --msg-mempool-size global message memory pool size in count (default: 262143) 00:34:02.896 --no-huge run without using hugepages 00:34:02.896 -i, --shm-id shared memory ID (optional) 00:34:02.896 -g, --single-file-segments force creating just one hugetlbfs file 00:34:02.896 00:34:02.896 PCI options: 00:34:02.896 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:34:02.896 -B, --pci-blocked pci addr to block (can be used more than once) 00:34:02.896 -u, --no-pci disable PCI access 00:34:02.896 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:34:02.896 00:34:02.896 Log options: 00:34:02.896 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:34:02.896 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:34:02.896 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:34:02.896 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:34:02.896 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:34:02.896 iscsi_init, json_util, keyring, log_rpc, 
lvol, lvol_rpc, notify_rpc, 00:34:02.896 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:34:02.896 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:34:02.896 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:34:02.896 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:34:02.896 virtio_vfio_user, vmd) 00:34:02.896 --silence-noticelog disable notice level logging to stderr 00:34:02.896 00:34:02.896 Trace options: 00:34:02.896 --num-trace-entries number of trace entries for each core, must be power of 2, 00:34:02.896 setting 0 to disable trace (default 32768) 00:34:02.896 Tracepoints vary in size and can use more than one trace entry. 00:34:02.896 -e, --tpoint-group [:] 00:34:02.896 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:34:02.896 [2024-07-23 15:27:58.302211] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:34:03.155 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:34:03.155 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:34:03.155 a tracepoint group. First tpoint inside a group can be enabled by 00:34:03.155 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:34:03.155 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:34:03.155 in /include/spdk_internal/trace_defs.h 00:34:03.155 00:34:03.155 Other options: 00:34:03.155 -h, --help show this usage 00:34:03.155 -v, --version print SPDK version 00:34:03.155 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:34:03.155 --env-context Opaque context for use of the env implementation 00:34:03.155 00:34:03.155 Application specific: 00:34:03.155 [--------- DD Options ---------] 00:34:03.155 --if Input file. Must specify either --if or --ib. 00:34:03.155 --ib Input bdev. Must specifier either --if or --ib 00:34:03.155 --of Output file. Must specify either --of or --ob. 00:34:03.155 --ob Output bdev. Must specify either --of or --ob. 00:34:03.155 --iflag Input file flags. 00:34:03.155 --oflag Output file flags. 00:34:03.155 --bs I/O unit size (default: 4096) 00:34:03.155 --qd Queue depth (default: 2) 00:34:03.155 --count I/O unit count. The number of I/O units to copy. (default: all) 00:34:03.155 --skip Skip this many I/O units at start of input. (default: 0) 00:34:03.155 --seek Skip this many I/O units at start of output. (default: 0) 00:34:03.155 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:34:03.155 --sparse Enable hole skipping in input target 00:34:03.155 Available iflag and oflag values: 00:34:03.155 append - append mode 00:34:03.155 direct - use direct I/O for data 00:34:03.155 directory - fail unless a directory 00:34:03.155 dsync - use synchronized I/O for data 00:34:03.155 noatime - do not update access time 00:34:03.155 noctty - do not assign controlling terminal from file 00:34:03.155 nofollow - do not follow symlinks 00:34:03.155 nonblock - use non-blocking I/O 00:34:03.155 sync - use synchronized I/O for data and metadata 00:34:03.155 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:34:03.155 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:03.155 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:03.155 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:03.155 00:34:03.155 real 0m0.132s 00:34:03.155 user 0m0.063s 00:34:03.155 sys 0m0.070s 00:34:03.155 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:03.155 15:27:58 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:34:03.155 ************************************ 00:34:03.155 END TEST dd_invalid_arguments 00:34:03.155 ************************************ 00:34:03.155 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:34:03.155 15:27:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:34:03.155 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:03.155 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:03.155 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:34:03.155 ************************************ 00:34:03.155 START TEST dd_double_input 00:34:03.155 ************************************ 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:34:03.156 [2024-07-23 15:27:58.492282] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:03.156 00:34:03.156 real 0m0.136s 00:34:03.156 user 0m0.056s 00:34:03.156 sys 0m0.080s 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:03.156 15:27:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:34:03.156 ************************************ 00:34:03.156 END TEST dd_double_input 00:34:03.156 ************************************ 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:34:03.415 ************************************ 00:34:03.415 START TEST dd_double_output 00:34:03.415 ************************************ 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:34:03.415 [2024-07-23 15:27:58.687436] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:03.415 00:34:03.415 real 0m0.129s 00:34:03.415 user 0m0.070s 00:34:03.415 sys 0m0.060s 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:03.415 ************************************ 00:34:03.415 END TEST dd_double_output 00:34:03.415 ************************************ 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:03.415 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:03.416 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:34:03.416 ************************************ 00:34:03.416 START TEST dd_no_input 00:34:03.416 ************************************ 00:34:03.416 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:34:03.416 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:34:03.416 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:34:03.416 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:34:03.416 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.416 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.416 15:27:58 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.416 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.416 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.416 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.416 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.416 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:34:03.416 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:34:03.675 [2024-07-23 15:27:58.875093] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:03.675 00:34:03.675 real 0m0.131s 00:34:03.675 user 0m0.071s 00:34:03.675 sys 0m0.061s 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:34:03.675 ************************************ 00:34:03.675 END TEST dd_no_input 00:34:03.675 ************************************ 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:34:03.675 ************************************ 00:34:03.675 START TEST dd_no_output 00:34:03.675 ************************************ 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.675 15:27:58 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.675 15:27:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.675 15:27:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.675 15:27:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.675 15:27:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:34:03.675 15:27:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:34:03.675 [2024-07-23 15:27:59.061169] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:03.934 00:34:03.934 real 0m0.131s 00:34:03.934 user 0m0.064s 00:34:03.934 sys 0m0.068s 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:34:03.934 ************************************ 00:34:03.934 END TEST dd_no_output 00:34:03.934 ************************************ 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:34:03.934 ************************************ 00:34:03.934 START TEST dd_wrong_blocksize 00:34:03.934 ************************************ 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # 
local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:34:03.934 [2024-07-23 15:27:59.252093] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:03.934 00:34:03.934 real 0m0.135s 00:34:03.934 user 0m0.072s 00:34:03.934 sys 0m0.064s 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:03.934 15:27:59 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:34:03.934 ************************************ 00:34:03.934 END TEST dd_wrong_blocksize 00:34:03.934 ************************************ 00:34:04.193 15:27:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:34:04.193 15:27:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:34:04.193 15:27:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:04.193 15:27:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:04.193 15:27:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:34:04.193 ************************************ 00:34:04.193 START TEST dd_smaller_blocksize 00:34:04.193 ************************************ 00:34:04.193 15:27:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:34:04.194 15:27:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:34:04.194 15:27:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:34:04.194 
15:27:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:34:04.194 15:27:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:04.194 15:27:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:04.194 15:27:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:04.194 15:27:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:04.194 15:27:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:04.194 15:27:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:04.194 15:27:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:04.194 15:27:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:34:04.194 15:27:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:34:04.194 [2024-07-23 15:27:59.451429] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
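The trace above is the harness checking, before every spdk_dd call, that the argument it is about to run is an executable file. A minimal stand-alone sketch of that check, pieced together only from the type/test commands visible in the trace (a simplification, not the actual common/autotest_common.sh implementation):

    valid_exec_arg() {
        local arg=$1
        # Accept shell builtins and functions as-is; files must resolve to an executable path.
        case "$(type -t "$arg")" in
            builtin | function) ;;
            file) arg=$(type -P "$arg") && [[ -x $arg ]] ;;
            *) return 1 ;;
        esac
    }

    valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd && echo "spdk_dd is runnable"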
00:34:04.194 [2024-07-23 15:27:59.451613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127446 ] 00:34:04.194 [2024-07-23 15:27:59.607059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.452 [2024-07-23 15:27:59.661994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.710 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:34:04.710 [2024-07-23 15:28:00.073477] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:34:04.710 [2024-07-23 15:28:00.073551] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:04.970 [2024-07-23 15:28:00.182494] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:04.970 00:34:04.970 real 0m0.919s 00:34:04.970 user 0m0.361s 00:34:04.970 sys 0m0.457s 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:04.970 ************************************ 00:34:04.970 END TEST dd_smaller_blocksize 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:34:04.970 ************************************ 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:34:04.970 ************************************ 00:34:04.970 START TEST dd_invalid_count 00:34:04.970 ************************************ 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:34:04.970 15:28:00 
spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:34:04.970 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:34:05.230 [2024-07-23 15:28:00.412125] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:05.230 00:34:05.230 real 0m0.108s 00:34:05.230 user 0m0.054s 00:34:05.230 sys 0m0.055s 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:34:05.230 ************************************ 00:34:05.230 END TEST dd_invalid_count 00:34:05.230 ************************************ 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:34:05.230 ************************************ 00:34:05.230 START TEST dd_invalid_oflag 00:34:05.230 ************************************ 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- 
common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:34:05.230 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:34:05.230 [2024-07-23 15:28:00.599020] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:05.489 00:34:05.489 real 0m0.137s 00:34:05.489 user 0m0.063s 00:34:05.489 sys 0m0.074s 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:34:05.489 ************************************ 00:34:05.489 END TEST dd_invalid_oflag 00:34:05.489 ************************************ 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:34:05.489 ************************************ 00:34:05.489 START TEST dd_invalid_iflag 00:34:05.489 ************************************ 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:34:05.489 15:28:00 
spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:34:05.489 [2024-07-23 15:28:00.797992] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:05.489 00:34:05.489 real 0m0.131s 00:34:05.489 user 0m0.070s 00:34:05.489 sys 0m0.062s 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:05.489 15:28:00 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:34:05.489 ************************************ 00:34:05.489 END TEST dd_invalid_iflag 00:34:05.489 ************************************ 00:34:05.490 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:34:05.490 15:28:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:34:05.490 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:05.490 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:05.490 15:28:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:34:05.749 ************************************ 00:34:05.749 START TEST dd_unknown_flag 00:34:05.749 ************************************ 00:34:05.749 15:28:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:34:05.749 15:28:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 
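Every negative case in this suite passes spdk_dd exactly one invalid option and requires a non-zero exit; the error strings ("Invalid --bs value", "--oflags may be used only with --of", and so on) are printed by spdk_dd.c as shown above. One case can be reproduced by hand with the same binary (illustrative only):

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    # --oflag without --of is rejected, so the if-branch is taken:
    if ! "$SPDK_DD" --ib= --ob= --oflag=0; then
        echo "rejected as expected"
    fi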
00:34:05.749 15:28:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:34:05.749 15:28:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:34:05.749 15:28:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:05.749 15:28:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:05.749 15:28:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:05.749 15:28:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:05.749 15:28:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:05.749 15:28:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:05.749 15:28:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:05.749 15:28:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:34:05.749 15:28:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:34:05.749 [2024-07-23 15:28:01.002049] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:34:05.749 [2024-07-23 15:28:01.002247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127542 ] 00:34:05.749 [2024-07-23 15:28:01.152447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.008 [2024-07-23 15:28:01.200364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.008 [2024-07-23 15:28:01.265920] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:34:06.008 [2024-07-23 15:28:01.265998] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:06.008  Copying: 0/0 [B] (average 0 Bps)[2024-07-23 15:28:01.266208] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:34:06.008 [2024-07-23 15:28:01.373346] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:34:06.266 00:34:06.266 00:34:06.266 15:28:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:34:06.266 15:28:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:06.266 15:28:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:34:06.266 15:28:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:34:06.266 15:28:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:34:06.266 15:28:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:06.266 00:34:06.266 real 0m0.582s 00:34:06.266 user 0m0.261s 00:34:06.266 sys 0m0.201s 00:34:06.266 15:28:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:06.266 15:28:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:34:06.266 ************************************ 00:34:06.266 END TEST dd_unknown_flag 00:34:06.266 ************************************ 00:34:06.266 15:28:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:34:06.266 15:28:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:34:06.266 15:28:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:06.266 15:28:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:06.266 15:28:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:34:06.266 ************************************ 00:34:06.267 START TEST dd_invalid_json 00:34:06.267 ************************************ 00:34:06.267 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:34:06.267 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:34:06.267 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:34:06.267 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:34:06.267 
15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:34:06.267 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:06.267 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:06.267 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:06.267 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:06.267 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:06.267 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:06.267 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:06.267 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:34:06.267 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:34:06.267 [2024-07-23 15:28:01.656343] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:06.267 [2024-07-23 15:28:01.656565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127575 ] 00:34:06.525 [2024-07-23 15:28:01.808630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.525 [2024-07-23 15:28:01.854109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.525 [2024-07-23 15:28:01.854213] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:34:06.525 [2024-07-23 15:28:01.854233] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:06.525 [2024-07-23 15:28:01.854260] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:06.525 [2024-07-23 15:28:01.854337] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:34:06.784 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:34:06.784 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:06.784 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:34:06.784 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:34:06.784 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:34:06.784 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:06.784 00:34:06.784 real 0m0.395s 00:34:06.784 user 0m0.173s 00:34:06.784 sys 0m0.124s 00:34:06.784 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:06.784 15:28:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- 
common/autotest_common.sh@10 -- # set +x 00:34:06.784 ************************************ 00:34:06.784 END TEST dd_invalid_json 00:34:06.784 ************************************ 00:34:06.784 15:28:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:34:06.784 00:34:06.784 real 0m3.914s 00:34:06.784 user 0m1.633s 00:34:06.784 sys 0m1.983s 00:34:06.784 15:28:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:06.784 ************************************ 00:34:06.784 END TEST spdk_dd_negative 00:34:06.784 15:28:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:34:06.784 ************************************ 00:34:06.784 15:28:02 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:34:06.784 00:34:06.784 real 1m3.685s 00:34:06.784 user 0m33.261s 00:34:06.784 sys 0m20.334s 00:34:06.784 15:28:02 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:06.784 15:28:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:34:06.784 ************************************ 00:34:06.784 END TEST spdk_dd 00:34:06.784 ************************************ 00:34:06.784 15:28:02 -- common/autotest_common.sh@1142 -- # return 0 00:34:06.784 15:28:02 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:34:06.784 15:28:02 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:34:06.784 15:28:02 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:06.784 15:28:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:06.784 15:28:02 -- common/autotest_common.sh@10 -- # set +x 00:34:06.784 ************************************ 00:34:06.784 START TEST blockdev_nvme 00:34:06.784 ************************************ 00:34:06.784 15:28:02 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:34:07.043 * Looking for test storage... 
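run_test is the wrapper that emits the START TEST / END TEST banners and the real/user/sys timings seen throughout this log. A heavily reduced sketch of that flow, based only on the banners and time output above (not the real common/autotest_common.sh code, which also handles xtrace control and argument checks):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        echo "************ END TEST $name ************"
    }

    run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme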
00:34:07.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:34:07.043 15:28:02 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=127652 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 127652 00:34:07.043 15:28:02 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 127652 ']' 00:34:07.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.043 15:28:02 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:34:07.043 15:28:02 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.043 15:28:02 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:07.043 15:28:02 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.043 15:28:02 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:07.043 15:28:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:07.043 [2024-07-23 15:28:02.340936] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:34:07.044 [2024-07-23 15:28:02.341146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127652 ] 00:34:07.303 [2024-07-23 15:28:02.498204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.303 [2024-07-23 15:28:02.556053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.871 15:28:03 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:07.871 15:28:03 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:34:07.871 15:28:03 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:34:07.871 15:28:03 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:34:07.871 15:28:03 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:34:07.871 15:28:03 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:34:07.871 15:28:03 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:07.871 15:28:03 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:34:07.871 15:28:03 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.871 15:28:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.130 15:28:03 blockdev_nvme -- 
common/autotest_common.sh@10 -- # set +x 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "cfd04289-1ef5-4fe6-9b45-21a825bcd05b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cfd04289-1ef5-4fe6-9b45-21a825bcd05b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:34:08.130 15:28:03 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 127652 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 127652 ']' 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 127652 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127652 00:34:08.130 killing process with pid 127652 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127652' 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 127652 00:34:08.130 15:28:03 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 127652 00:34:08.706 15:28:03 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:08.706 15:28:03 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 
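The JSON above is bdev_get_bdevs reporting the attached QEMU NVMe namespace, and blockdev.sh pipes it through jq to pick the first unclaimed bdev name (Nvme0n1) as hello_world_bdev. The same query can be issued against a running target with the RPC client used elsewhere in this log (default RPC socket assumed):

    # List unclaimed bdev names the way blockdev.sh filters them:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'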
00:34:08.706 15:28:03 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:34:08.706 15:28:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:08.706 15:28:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:08.706 ************************************ 00:34:08.706 START TEST bdev_hello_world 00:34:08.706 ************************************ 00:34:08.706 15:28:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:34:08.706 [2024-07-23 15:28:03.941088] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:08.706 [2024-07-23 15:28:03.942192] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127711 ] 00:34:08.706 [2024-07-23 15:28:04.094228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.979 [2024-07-23 15:28:04.142733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:08.979 [2024-07-23 15:28:04.339852] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:34:08.979 [2024-07-23 15:28:04.339907] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:34:08.979 [2024-07-23 15:28:04.339930] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:34:08.979 [2024-07-23 15:28:04.342182] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:34:08.979 [2024-07-23 15:28:04.342701] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:34:08.979 [2024-07-23 15:28:04.342735] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:34:08.979 [2024-07-23 15:28:04.342953] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
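hello_bdev opens the bdev named with -b, writes a string, reads it back, and stops, exactly as the NOTICE lines above report. The example can be run directly with the same configuration file and bdev name used by the test:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1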
00:34:08.979 00:34:08.979 [2024-07-23 15:28:04.342999] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:34:09.237 00:34:09.237 real 0m0.713s 00:34:09.237 user 0m0.398s 00:34:09.237 sys 0m0.214s 00:34:09.237 15:28:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:09.237 15:28:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:34:09.237 ************************************ 00:34:09.237 END TEST bdev_hello_world 00:34:09.237 ************************************ 00:34:09.237 15:28:04 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:34:09.237 15:28:04 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:34:09.237 15:28:04 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:09.237 15:28:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:09.237 15:28:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:09.237 ************************************ 00:34:09.237 START TEST bdev_bounds 00:34:09.237 ************************************ 00:34:09.237 15:28:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:34:09.237 Process bdevio pid: 127742 00:34:09.237 15:28:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=127742 00:34:09.237 15:28:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:34:09.237 15:28:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 127742' 00:34:09.237 15:28:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 127742 00:34:09.237 15:28:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 127742 ']' 00:34:09.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:09.237 15:28:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:09.237 15:28:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:09.238 15:28:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:09.238 15:28:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:09.238 15:28:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:09.238 15:28:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:09.496 [2024-07-23 15:28:04.702630] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:34:09.496 [2024-07-23 15:28:04.702776] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127742 ] 00:34:09.496 [2024-07-23 15:28:04.842165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:09.496 [2024-07-23 15:28:04.894969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.496 [2024-07-23 15:28:04.894906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:09.496 [2024-07-23 15:28:04.895093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:10.432 15:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:10.432 15:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:34:10.432 15:28:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:34:10.432 I/O targets: 00:34:10.432 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:34:10.432 00:34:10.432 00:34:10.432 CUnit - A unit testing framework for C - Version 2.1-3 00:34:10.432 http://cunit.sourceforge.net/ 00:34:10.432 00:34:10.432 00:34:10.432 Suite: bdevio tests on: Nvme0n1 00:34:10.432 Test: blockdev write read block ...passed 00:34:10.432 Test: blockdev write zeroes read block ...passed 00:34:10.432 Test: blockdev write zeroes read no split ...passed 00:34:10.432 Test: blockdev write zeroes read split ...passed 00:34:10.432 Test: blockdev write zeroes read split partial ...passed 00:34:10.432 Test: blockdev reset ...[2024-07-23 15:28:05.691929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:34:10.432 passed 00:34:10.432 Test: blockdev write read 8 blocks ...[2024-07-23 15:28:05.694373] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:10.432 passed 00:34:10.432 Test: blockdev write read size > 128k ...passed 00:34:10.432 Test: blockdev write read invalid size ...passed 00:34:10.432 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:10.432 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:10.432 Test: blockdev write read max offset ...passed 00:34:10.432 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:10.432 Test: blockdev writev readv 8 blocks ...passed 00:34:10.432 Test: blockdev writev readv 30 x 1block ...passed 00:34:10.432 Test: blockdev writev readv block ...passed 00:34:10.432 Test: blockdev writev readv size > 128k ...passed 00:34:10.432 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:10.432 Test: blockdev comparev and writev ...[2024-07-23 15:28:05.700611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x307c0d000 len:0x1000 00:34:10.432 [2024-07-23 15:28:05.700673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:10.432 passed 00:34:10.432 Test: blockdev nvme passthru rw ...passed 00:34:10.432 Test: blockdev nvme passthru vendor specific ...passed 00:34:10.432 Test: blockdev nvme admin passthru ...[2024-07-23 15:28:05.701545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:34:10.432 [2024-07-23 15:28:05.701641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:34:10.432 passed 00:34:10.432 Test: blockdev copy ...passed 00:34:10.432 00:34:10.432 Run Summary: Type Total Ran Passed Failed Inactive 00:34:10.432 suites 1 1 n/a 0 0 00:34:10.432 tests 23 23 23 0 0 00:34:10.432 asserts 152 152 152 0 n/a 00:34:10.432 00:34:10.432 Elapsed time = 0.062 seconds 00:34:10.432 0 00:34:10.432 15:28:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 127742 00:34:10.432 15:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 127742 ']' 00:34:10.432 15:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 127742 00:34:10.432 15:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:34:10.432 15:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:10.433 15:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127742 00:34:10.433 15:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:10.433 15:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:10.433 15:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127742' 00:34:10.433 killing process with pid 127742 00:34:10.433 15:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 127742 00:34:10.433 15:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 127742 00:34:10.691 15:28:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:34:10.691 00:34:10.691 real 0m1.333s 00:34:10.691 user 0m3.384s 00:34:10.691 sys 0m0.316s 00:34:10.691 15:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:10.691 15:28:05 blockdev_nvme.bdev_bounds -- 
common/autotest_common.sh@10 -- # set +x 00:34:10.691 ************************************ 00:34:10.691 END TEST bdev_bounds 00:34:10.691 ************************************ 00:34:10.691 15:28:06 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:34:10.691 15:28:06 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:34:10.691 15:28:06 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:34:10.691 15:28:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:10.691 15:28:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:10.691 ************************************ 00:34:10.691 START TEST bdev_nbd 00:34:10.691 ************************************ 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1') 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1') 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=127785 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 127785 /var/tmp/spdk-nbd.sock 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 127785 ']' 00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:34:10.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
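bdev_nbd exports Nvme0n1 as a kernel /dev/nbd0 device through the bdev_svc app started above, verifies it with dd, and tears it down again. Condensed from the RPC calls traced below (socket path and device names as in the log; assumes bdev_svc is already listening):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock

    "$RPC" -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0     # export the bdev as /dev/nbd0
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct                        # read one block back
    "$RPC" -s "$SOCK" nbd_get_disks                         # shows the nbd0 <-> Nvme0n1 mapping
    "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0               # detach the device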
00:34:10.691 15:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:10.692 15:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:34:10.692 15:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:10.692 15:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:10.692 [2024-07-23 15:28:06.109508] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:10.692 [2024-07-23 15:28:06.109650] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.950 [2024-07-23 15:28:06.242665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.950 [2024-07-23 15:28:06.289534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:34:11.885 15:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 
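waitfornbd, traced here, polls /proc/partitions until the new nbd device appears before any I/O is attempted, and then confirms the device with a small dd read (visible just below). A condensed sketch of the polling half, using the same grep check and retry bound seen in the trace; the sleep interval is an assumption:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && return 0
            sleep 0.1   # interval assumed; the trace only shows the retry counter
        done
        return 1
    }

    waitfornbd nbd0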
00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:11.885 1+0 records in 00:34:11.885 1+0 records out 00:34:11.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477784 s, 8.6 MB/s 00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:11.885 15:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:11.886 15:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:11.886 15:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:11.886 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:11.886 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:34:11.886 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:12.144 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:34:12.144 { 00:34:12.144 "nbd_device": "/dev/nbd0", 00:34:12.144 "bdev_name": "Nvme0n1" 00:34:12.144 } 00:34:12.144 ]' 00:34:12.144 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:34:12.144 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:34:12.144 { 00:34:12.144 "nbd_device": "/dev/nbd0", 00:34:12.144 "bdev_name": "Nvme0n1" 00:34:12.144 } 00:34:12.144 ]' 00:34:12.144 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:34:12.144 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:12.144 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.144 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:12.144 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:12.144 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:12.144 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:12.144 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:12.403 15:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:34:12.662 /dev/nbd0 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # 
local i 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:12.662 1+0 records in 00:34:12.662 1+0 records out 00:34:12.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508332 s, 8.1 MB/s 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.662 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:12.921 { 00:34:12.921 "nbd_device": "/dev/nbd0", 00:34:12.921 "bdev_name": "Nvme0n1" 00:34:12.921 } 00:34:12.921 ]' 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:12.921 { 00:34:12.921 "nbd_device": "/dev/nbd0", 00:34:12.921 "bdev_name": "Nvme0n1" 00:34:12.921 } 00:34:12.921 ]' 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local 
operation=write 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:34:12.921 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:34:12.921 256+0 records in 00:34:12.921 256+0 records out 00:34:12.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00852441 s, 123 MB/s 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:34:12.922 256+0 records in 00:34:12.922 256+0 records out 00:34:12.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0548323 s, 19.1 MB/s 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:12.922 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:13.180 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:13.180 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:13.180 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:13.180 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:13.180 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:13.180 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:13.180 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:13.180 15:28:08 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:13.180 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:13.180 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:13.180 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:13.438 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:13.438 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:34:13.439 15:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:34:13.697 malloc_lvol_verify 00:34:13.697 15:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:34:13.954 3dcdeb17-4acd-4aa3-bb13-7b84278d4835 00:34:13.954 15:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:34:14.212 ddecb822-ed40-4216-850c-b80760bff907 00:34:14.212 15:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:34:14.470 /dev/nbd0 00:34:14.470 15:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:34:14.470 mke2fs 1.47.0 (5-Feb-2023) 00:34:14.470 00:34:14.470 Filesystem too small for a journal 00:34:14.470 Discarding device blocks: 0/1024 done 00:34:14.470 Creating filesystem with 1024 4k blocks and 1024 inodes 00:34:14.470 00:34:14.470 Allocating group tables: 0/1 done 00:34:14.470 Writing inode tables: 0/1 done 00:34:14.470 Writing superblocks and filesystem accounting information: 0/1 done 00:34:14.470 00:34:14.470 15:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:34:14.470 15:28:09 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:14.470 15:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:14.470 15:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:14.470 15:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:14.470 15:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:14.471 15:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:14.471 15:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:14.729 15:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 127785 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 127785 ']' 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 127785 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127785 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:14.729 killing process with pid 127785 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127785' 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 127785 00:34:14.729 15:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 127785 00:34:14.988 15:28:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:34:14.988 00:34:14.988 real 0m4.249s 00:34:14.988 user 0m6.268s 00:34:14.988 sys 0m1.188s 00:34:14.988 ************************************ 00:34:14.988 END TEST bdev_nbd 00:34:14.988 ************************************ 00:34:14.988 15:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:14.988 15:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:14.988 15:28:10 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:34:14.988 15:28:10 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:34:14.988 15:28:10 blockdev_nvme -- 
bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:34:14.988 skipping fio tests on NVMe due to multi-ns failures. 00:34:14.988 15:28:10 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:34:14.988 15:28:10 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:14.988 15:28:10 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:14.988 15:28:10 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:34:14.988 15:28:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:14.988 15:28:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:14.988 ************************************ 00:34:14.988 START TEST bdev_verify 00:34:14.988 ************************************ 00:34:14.988 15:28:10 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:14.988 [2024-07-23 15:28:10.410599] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:14.988 [2024-07-23 15:28:10.410744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127950 ] 00:34:15.246 [2024-07-23 15:28:10.551812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:15.246 [2024-07-23 15:28:10.602785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.246 [2024-07-23 15:28:10.602899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.504 Running I/O for 5 seconds... 
00:34:20.773 00:34:20.773 Latency(us) 00:34:20.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:20.773 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:20.773 Verification LBA range: start 0x0 length 0xa0000 00:34:20.773 Nvme0n1 : 5.01 10336.22 40.38 0.00 0.00 12317.90 756.78 19348.72 00:34:20.773 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:20.773 Verification LBA range: start 0xa0000 length 0xa0000 00:34:20.773 Nvme0n1 : 5.01 10549.13 41.21 0.00 0.00 12065.86 862.11 19473.55 00:34:20.773 =================================================================================================================== 00:34:20.773 Total : 20885.34 81.58 0.00 0.00 12190.57 756.78 19473.55 00:34:20.773 00:34:20.773 real 0m5.812s 00:34:20.773 user 0m10.940s 00:34:20.773 sys 0m0.213s 00:34:20.773 15:28:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:20.773 ************************************ 00:34:20.773 END TEST bdev_verify 00:34:20.773 ************************************ 00:34:20.773 15:28:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:34:21.031 15:28:16 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:34:21.031 15:28:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:21.031 15:28:16 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:34:21.031 15:28:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:21.031 15:28:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:21.031 ************************************ 00:34:21.031 START TEST bdev_verify_big_io 00:34:21.031 ************************************ 00:34:21.031 15:28:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:21.031 [2024-07-23 15:28:16.282164] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:21.031 [2024-07-23 15:28:16.282317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128032 ] 00:34:21.031 [2024-07-23 15:28:16.426550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:21.288 [2024-07-23 15:28:16.484776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.288 [2024-07-23 15:28:16.484870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:21.288 Running I/O for 5 seconds... 
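The verify and big-I/O passes here (and the write_zeroes pass that follows) all reuse the same bdevperf harness, varying only the I/O size and workload. A standalone equivalent of the invocation traced above, with paths and flags copied from the trace:

# bdevperf read-back verification: -q queue depth, -o I/O size in bytes,
# -w workload, -t run time in seconds, -m reactor core mask; -C is passed
# through exactly as the test script does.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3

# The big-I/O pass only bumps the I/O size to -o 65536; the zero-fill pass
# drops -C/-m and runs -o 4096 -w write_zeroes -t 1.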
00:34:26.553 00:34:26.553 Latency(us) 00:34:26.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:26.553 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:26.553 Verification LBA range: start 0x0 length 0xa000 00:34:26.553 Nvme0n1 : 5.07 1010.59 63.16 0.00 0.00 124091.36 748.98 135815.56 00:34:26.553 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:26.553 Verification LBA range: start 0xa000 length 0xa000 00:34:26.553 Nvme0n1 : 5.06 968.48 60.53 0.00 0.00 129344.10 760.69 184749.10 00:34:26.553 =================================================================================================================== 00:34:26.553 Total : 1979.06 123.69 0.00 0.00 126660.60 748.98 184749.10 00:34:27.120 00:34:27.120 real 0m6.128s 00:34:27.120 user 0m11.578s 00:34:27.120 sys 0m0.207s 00:34:27.120 15:28:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:27.120 15:28:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:34:27.120 ************************************ 00:34:27.120 END TEST bdev_verify_big_io 00:34:27.120 ************************************ 00:34:27.120 15:28:22 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:34:27.120 15:28:22 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:27.120 15:28:22 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:34:27.120 15:28:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:27.120 15:28:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:27.120 ************************************ 00:34:27.120 START TEST bdev_write_zeroes 00:34:27.120 ************************************ 00:34:27.120 15:28:22 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:27.120 [2024-07-23 15:28:22.489870] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:27.120 [2024-07-23 15:28:22.490079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128109 ] 00:34:27.378 [2024-07-23 15:28:22.637853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.378 [2024-07-23 15:28:22.682039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.635 Running I/O for 1 seconds... 
00:34:28.566 00:34:28.566 Latency(us) 00:34:28.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:28.566 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:28.566 Nvme0n1 : 1.00 66238.90 258.75 0.00 0.00 1928.03 553.94 13918.60 00:34:28.566 =================================================================================================================== 00:34:28.566 Total : 66238.90 258.75 0.00 0.00 1928.03 553.94 13918.60 00:34:28.825 00:34:28.825 real 0m1.698s 00:34:28.825 user 0m1.404s 00:34:28.825 sys 0m0.194s 00:34:28.825 15:28:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:28.825 15:28:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:34:28.825 ************************************ 00:34:28.825 END TEST bdev_write_zeroes 00:34:28.825 ************************************ 00:34:28.825 15:28:24 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:34:28.825 15:28:24 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:28.825 15:28:24 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:34:28.825 15:28:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:28.825 15:28:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:28.825 ************************************ 00:34:28.825 START TEST bdev_json_nonenclosed 00:34:28.825 ************************************ 00:34:28.825 15:28:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:28.825 [2024-07-23 15:28:24.228990] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:28.825 [2024-07-23 15:28:24.229133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128150 ] 00:34:29.083 [2024-07-23 15:28:24.365216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.083 [2024-07-23 15:28:24.408783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.083 [2024-07-23 15:28:24.408894] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:34:29.083 [2024-07-23 15:28:24.408929] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:29.083 [2024-07-23 15:28:24.408942] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:29.083 00:34:29.083 real 0m0.340s 00:34:29.083 user 0m0.149s 00:34:29.083 sys 0m0.091s 00:34:29.083 15:28:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:34:29.083 15:28:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:29.083 15:28:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:34:29.083 ************************************ 00:34:29.083 END TEST bdev_json_nonenclosed 00:34:29.083 ************************************ 00:34:29.341 15:28:24 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:34:29.341 15:28:24 blockdev_nvme -- bdev/blockdev.sh@781 -- # true 00:34:29.341 15:28:24 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:29.341 15:28:24 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:34:29.341 15:28:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:29.341 15:28:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:29.341 ************************************ 00:34:29.341 START TEST bdev_json_nonarray 00:34:29.341 ************************************ 00:34:29.341 15:28:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:29.341 [2024-07-23 15:28:24.625444] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:29.341 [2024-07-23 15:28:24.625583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128171 ] 00:34:29.341 [2024-07-23 15:28:24.768319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.598 [2024-07-23 15:28:24.812649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.598 [2024-07-23 15:28:24.812767] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
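Both negative passes feed bdevperf a config that violates the expected shape and check that loading aborts (exit status 234, as the es=234 checks record). The accepted shape is a top-level object whose "subsystems" member is an array; a minimal well-formed example, written out with a heredoc (the /tmp path is illustrative; the bdev_nvme_attach_controller entry matches the attach call traced later in the GPT setup):

# Minimal valid bdevperf --json config: top-level object, "subsystems" array.
cat > /tmp/bdev.ok.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF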
00:34:29.598 [2024-07-23 15:28:24.812794] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:29.598 [2024-07-23 15:28:24.812817] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:29.598 00:34:29.598 real 0m0.353s 00:34:29.598 user 0m0.150s 00:34:29.598 sys 0m0.102s 00:34:29.598 15:28:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:34:29.598 15:28:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:29.598 15:28:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:34:29.598 ************************************ 00:34:29.598 END TEST bdev_json_nonarray 00:34:29.598 ************************************ 00:34:29.598 15:28:24 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:34:29.598 15:28:24 blockdev_nvme -- bdev/blockdev.sh@784 -- # true 00:34:29.598 15:28:24 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:34:29.598 15:28:24 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:34:29.598 15:28:24 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:34:29.598 15:28:24 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:34:29.598 15:28:24 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:34:29.598 15:28:24 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:34:29.598 15:28:24 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:29.598 15:28:24 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:34:29.598 15:28:24 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:34:29.598 15:28:24 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:34:29.598 15:28:24 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:34:29.598 00:34:29.598 real 0m22.848s 00:34:29.598 user 0m36.058s 00:34:29.598 sys 0m3.489s 00:34:29.598 15:28:24 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:29.598 15:28:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:29.598 ************************************ 00:34:29.598 END TEST blockdev_nvme 00:34:29.598 ************************************ 00:34:29.855 15:28:25 -- common/autotest_common.sh@1142 -- # return 0 00:34:29.855 15:28:25 -- spdk/autotest.sh@213 -- # uname -s 00:34:29.855 15:28:25 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:34:29.855 15:28:25 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:34:29.855 15:28:25 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:29.855 15:28:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:29.855 15:28:25 -- common/autotest_common.sh@10 -- # set +x 00:34:29.855 ************************************ 00:34:29.855 START TEST blockdev_nvme_gpt 00:34:29.855 ************************************ 00:34:29.855 15:28:25 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:34:29.855 * Looking for test storage... 
00:34:29.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=128236 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 128236 00:34:29.855 15:28:25 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 128236 ']' 00:34:29.855 15:28:25 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.855 15:28:25 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:29.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.855 15:28:25 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
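The waitforlisten helper invoked here simply polls the target's RPC socket until it answers. A rough standalone equivalent (the retry bound, the sleep interval and the use of rpc_get_methods as the liveness probe are assumptions; paths are from the trace):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk.sock

"$SPDK/build/bin/spdk_tgt" &                 # target initializes asynchronously
tgt_pid=$!

for i in $(seq 1 100); do
    # Any cheap RPC serves as a probe once the server is accepting connections.
    if "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done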
00:34:29.855 15:28:25 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:34:29.855 15:28:25 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:29.855 15:28:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:29.855 [2024-07-23 15:28:25.242569] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:29.855 [2024-07-23 15:28:25.242802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128236 ] 00:34:30.113 [2024-07-23 15:28:25.397110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.114 [2024-07-23 15:28:25.444493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.695 15:28:26 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:30.695 15:28:26 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:34:30.695 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:34:30.695 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:34:30.695 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:31.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:34:31.288 Waiting for block devices as requested 00:34:31.288 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:34:31.288 15:28:26 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:34:31.288 15:28:26 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:34:31.288 15:28:26 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:34:31.288 15:28:26 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:34:31.288 15:28:26 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:34:31.288 15:28:26 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:34:31.288 15:28:26 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:31.288 15:28:26 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1') 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:34:31.288 BYT; 00:34:31.288 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:34:31.288 BYT; 00:34:31.288 
/dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:34:31.288 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:34:31.547 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:34:31.548 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:34:31.548 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:34:31.548 15:28:26 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:34:31.548 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:34:31.548 15:28:26 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:34:32.484 The operation has completed successfully. 
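Condensed, the GPT preparation is: put a fresh GPT label on the blank namespace, carve two equal partitions, then retype them with SPDK's partition-type GUIDs (pulled out of module/bdev/gpt/gpt.h above) so the gpt vbdev module claims them and later exposes Nvme0n1p1/Nvme0n1p2. A standalone sketch using the same device, type GUIDs and unique partition GUIDs as the trace:

DEV=/dev/nvme0n1
SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b        # SPDK_GPT_PART_TYPE_GUID
SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c    # SPDK_GPT_PART_TYPE_GUID_OLD

# Fresh GPT label with two equal partitions.
parted -s "$DEV" mklabel gpt \
    mkpart SPDK_TEST_first 0% 50% \
    mkpart SPDK_TEST_second 50% 100%

# Retype the partitions and pin their unique GUIDs, as the script does next.
sgdisk -t 1:"$SPDK_GPT_GUID"     -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$DEV"
sgdisk -t 2:"$SPDK_GPT_OLD_GUID" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$DEV"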
00:34:32.484 15:28:27 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:34:33.861 The operation has completed successfully. 00:34:33.861 15:28:28 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:34.120 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:34:34.120 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:34:34.688 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:34:34.688 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.688 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:34.688 [] 00:34:34.688 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.688 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:34:34.688 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:34:34.688 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:34:34.688 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:34.947 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:34:34.947 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.947 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.948 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.948 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:34:34.948 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.948 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.948 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.948 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:34:34.948 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:34.948 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:34:34.948 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.948 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:34:34.948 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:34:34.948 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:34:35.207 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:34:35.207 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1p1 00:34:35.207 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:34:35.207 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 128236 00:34:35.207 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 128236 ']' 00:34:35.207 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 128236 00:34:35.207 15:28:30 blockdev_nvme_gpt -- 
common/autotest_common.sh@953 -- # uname 00:34:35.207 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:35.207 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128236 00:34:35.207 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:35.207 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:35.207 killing process with pid 128236 00:34:35.207 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128236' 00:34:35.207 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 128236 00:34:35.207 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 128236 00:34:35.466 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:35.466 15:28:30 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:34:35.466 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:34:35.466 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:35.467 15:28:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:35.467 ************************************ 00:34:35.467 START TEST bdev_hello_world 00:34:35.467 ************************************ 00:34:35.467 15:28:30 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:34:35.467 [2024-07-23 15:28:30.891009] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:35.467 [2024-07-23 15:28:30.891200] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128618 ] 00:34:35.725 [2024-07-23 15:28:31.043815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.725 [2024-07-23 15:28:31.089765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.984 [2024-07-23 15:28:31.285104] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:34:35.984 [2024-07-23 15:28:31.285166] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:34:35.984 [2024-07-23 15:28:31.285187] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:34:35.984 [2024-07-23 15:28:31.287469] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:34:35.984 [2024-07-23 15:28:31.288065] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:34:35.984 [2024-07-23 15:28:31.288102] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:34:35.984 [2024-07-23 15:28:31.288333] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:34:35.984 00:34:35.984 [2024-07-23 15:28:31.288363] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:34:36.243 00:34:36.243 real 0m0.702s 00:34:36.243 user 0m0.404s 00:34:36.243 sys 0m0.198s 00:34:36.243 15:28:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:36.243 15:28:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:34:36.243 ************************************ 00:34:36.243 END TEST bdev_hello_world 00:34:36.243 ************************************ 00:34:36.243 15:28:31 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:34:36.243 15:28:31 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:34:36.243 15:28:31 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:36.243 15:28:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:36.243 15:28:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:36.243 ************************************ 00:34:36.243 START TEST bdev_bounds 00:34:36.243 ************************************ 00:34:36.243 15:28:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:34:36.243 15:28:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=128645 00:34:36.243 15:28:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:34:36.243 Process bdevio pid: 128645 00:34:36.243 15:28:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 128645' 00:34:36.243 15:28:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 128645 00:34:36.243 15:28:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 128645 ']' 00:34:36.243 15:28:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.243 15:28:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:36.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.243 15:28:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:36.243 15:28:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:36.243 15:28:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:36.243 15:28:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:36.243 [2024-07-23 15:28:31.652280] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:34:36.243 [2024-07-23 15:28:31.652478] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128645 ] 00:34:36.503 [2024-07-23 15:28:31.804808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:36.503 [2024-07-23 15:28:31.852742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.503 [2024-07-23 15:28:31.852821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:36.503 [2024-07-23 15:28:31.852830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:37.441 15:28:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:37.441 15:28:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:34:37.441 15:28:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:34:37.441 I/O targets: 00:34:37.441 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:34:37.441 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:34:37.441 00:34:37.441 00:34:37.441 CUnit - A unit testing framework for C - Version 2.1-3 00:34:37.441 http://cunit.sourceforge.net/ 00:34:37.441 00:34:37.441 00:34:37.441 Suite: bdevio tests on: Nvme0n1p2 00:34:37.441 Test: blockdev write read block ...passed 00:34:37.441 Test: blockdev write zeroes read block ...passed 00:34:37.441 Test: blockdev write zeroes read no split ...passed 00:34:37.441 Test: blockdev write zeroes read split ...passed 00:34:37.441 Test: blockdev write zeroes read split partial ...passed 00:34:37.441 Test: blockdev reset ...[2024-07-23 15:28:32.710434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:34:37.441 passed 00:34:37.441 Test: blockdev write read 8 blocks ...[2024-07-23 15:28:32.713151] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:37.441 passed 00:34:37.441 Test: blockdev write read size > 128k ...passed 00:34:37.441 Test: blockdev write read invalid size ...passed 00:34:37.441 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:37.441 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:37.441 Test: blockdev write read max offset ...passed 00:34:37.441 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:37.441 Test: blockdev writev readv 8 blocks ...passed 00:34:37.441 Test: blockdev writev readv 30 x 1block ...passed 00:34:37.441 Test: blockdev writev readv block ...passed 00:34:37.441 Test: blockdev writev readv size > 128k ...passed 00:34:37.441 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:37.441 Test: blockdev comparev and writev ...[2024-07-23 15:28:32.720921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x33180d000 len:0x1000 00:34:37.441 [2024-07-23 15:28:32.720994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:37.441 passed 00:34:37.441 Test: blockdev nvme passthru rw ...passed 00:34:37.441 Test: blockdev nvme passthru vendor specific ...passed 00:34:37.441 Test: blockdev nvme admin passthru ...passed 00:34:37.441 Test: blockdev copy ...passed 00:34:37.441 Suite: bdevio tests on: Nvme0n1p1 00:34:37.441 Test: blockdev write read block ...passed 00:34:37.441 Test: blockdev write zeroes read block ...passed 00:34:37.441 Test: blockdev write zeroes read no split ...passed 00:34:37.441 Test: blockdev write zeroes read split ...passed 00:34:37.441 Test: blockdev write zeroes read split partial ...passed 00:34:37.441 Test: blockdev reset ...[2024-07-23 15:28:32.737966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:34:37.441 passed 00:34:37.441 Test: blockdev write read 8 blocks ...[2024-07-23 15:28:32.740521] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:37.441 passed 00:34:37.441 Test: blockdev write read size > 128k ...passed 00:34:37.441 Test: blockdev write read invalid size ...passed 00:34:37.441 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:37.441 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:37.441 Test: blockdev write read max offset ...passed 00:34:37.441 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:37.441 Test: blockdev writev readv 8 blocks ...passed 00:34:37.441 Test: blockdev writev readv 30 x 1block ...passed 00:34:37.441 Test: blockdev writev readv block ...passed 00:34:37.441 Test: blockdev writev readv size > 128k ...passed 00:34:37.441 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:37.441 Test: blockdev comparev and writev ...[2024-07-23 15:28:32.748316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x331809000 len:0x1000 00:34:37.441 [2024-07-23 15:28:32.748384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:37.441 passed 00:34:37.441 Test: blockdev nvme passthru rw ...passed 00:34:37.441 Test: blockdev nvme passthru vendor specific ...passed 00:34:37.441 Test: blockdev nvme admin passthru ...passed 00:34:37.441 Test: blockdev copy ...passed 00:34:37.441 00:34:37.441 Run Summary: Type Total Ran Passed Failed Inactive 00:34:37.441 suites 2 2 n/a 0 0 00:34:37.441 tests 46 46 46 0 0 00:34:37.441 asserts 284 284 284 0 n/a 00:34:37.441 00:34:37.441 Elapsed time = 0.128 seconds 00:34:37.441 0 00:34:37.441 15:28:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 128645 00:34:37.441 15:28:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 128645 ']' 00:34:37.441 15:28:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 128645 00:34:37.441 15:28:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:34:37.441 15:28:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:37.441 15:28:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128645 00:34:37.441 15:28:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:37.441 15:28:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:37.441 killing process with pid 128645 00:34:37.441 15:28:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128645' 00:34:37.441 15:28:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 128645 00:34:37.441 15:28:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 128645 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:34:37.700 00:34:37.700 real 0m1.428s 00:34:37.700 user 0m3.651s 00:34:37.700 sys 0m0.361s 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:37.700 ************************************ 00:34:37.700 END TEST bdev_bounds 00:34:37.700 ************************************ 00:34:37.700 15:28:33 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:34:37.700 
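The bdev_bounds test that just finished starts the bdevio app against the same JSON config and then drives its CUnit suites over RPC; the COMPARE FAILURE completions logged during the comparev-and-writev tests accompany tests that are marked passed, so they are expected negative-path output rather than failures (the run summary shows all 46 tests across both suites passing). A sketch of the same sequence, assuming the paths used in this run:

  # start bdevio waiting for an RPC trigger, then fire the test suites and clean up
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  kill %1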
15:28:33 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:34:37.700 15:28:33 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:34:37.700 15:28:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:37.700 15:28:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:37.700 ************************************ 00:34:37.700 START TEST bdev_nbd 00:34:37.700 ************************************ 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=128688 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 128688 /var/tmp/spdk-nbd.sock 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 128688 ']' 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-nbd.sock...' 00:34:37.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:37.700 15:28:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:37.959 [2024-07-23 15:28:33.135067] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:37.959 [2024-07-23 15:28:33.135210] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:37.959 [2024-07-23 15:28:33.276348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.959 [2024-07-23 15:28:33.323265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # 
break 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:38.893 1+0 records in 00:34:38.893 1+0 records out 00:34:38.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676921 s, 6.1 MB/s 00:34:38.893 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:39.153 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:39.153 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:39.153 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:39.153 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:39.153 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:39.153 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:34:39.153 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:34:39.153 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:39.412 1+0 records in 00:34:39.412 1+0 records out 00:34:39.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502975 s, 8.1 MB/s 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 
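The nbd_function_test above exports each GPT partition as a kernel NBD device through the bdev_svc app's RPC socket, waits for the kernel to register the device, and probes it with a single direct-I/O read. The same flow by hand, assuming bdev_svc was started with -r /var/tmp/spdk-nbd.sock and the bdev.json used in this run:

  # expose a bdev as /dev/nbd0, wait for the kernel node, probe it, then tear it down
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_start_disk Nvme0n1p1 /dev/nbd0
  until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
      bs=4096 count=1 iflag=direct
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_stop_disk /dev/nbd0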
00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:34:39.412 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:39.671 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:34:39.671 { 00:34:39.671 "nbd_device": "/dev/nbd0", 00:34:39.671 "bdev_name": "Nvme0n1p1" 00:34:39.671 }, 00:34:39.671 { 00:34:39.671 "nbd_device": "/dev/nbd1", 00:34:39.671 "bdev_name": "Nvme0n1p2" 00:34:39.671 } 00:34:39.671 ]' 00:34:39.671 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:34:39.671 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:34:39.671 { 00:34:39.671 "nbd_device": "/dev/nbd0", 00:34:39.671 "bdev_name": "Nvme0n1p1" 00:34:39.671 }, 00:34:39.671 { 00:34:39.671 "nbd_device": "/dev/nbd1", 00:34:39.671 "bdev_name": "Nvme0n1p2" 00:34:39.671 } 00:34:39.671 ]' 00:34:39.671 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:34:39.671 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:34:39.671 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:39.671 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:39.671 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:39.671 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:39.671 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:39.671 15:28:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:39.671 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:39.671 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:39.671 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:39.672 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:39.672 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:39.672 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:39.672 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:39.672 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:39.672 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:39.930 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:40.226 15:28:35 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:40.226 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:34:40.227 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:40.227 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:40.227 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:34:40.494 /dev/nbd0 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:40.494 1+0 records in 00:34:40.494 1+0 records out 00:34:40.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468453 s, 8.7 MB/s 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:40.494 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:34:40.753 /dev/nbd1 00:34:40.753 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:40.753 15:28:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:40.753 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:34:40.753 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:34:40.753 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:40.753 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:40.753 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:34:40.753 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:34:40.753 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:40.753 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:40.753 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:34:40.753 1+0 records in 00:34:40.753 1+0 records out 00:34:40.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585027 s, 7.0 MB/s 00:34:40.753 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:40.753 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:34:40.753 15:28:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:40.753 15:28:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:40.753 15:28:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:34:40.753 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:40.753 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:40.753 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:40.753 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:40.753 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:41.012 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:41.012 { 00:34:41.012 "nbd_device": "/dev/nbd0", 00:34:41.012 "bdev_name": "Nvme0n1p1" 00:34:41.012 }, 00:34:41.012 { 00:34:41.012 "nbd_device": "/dev/nbd1", 00:34:41.012 "bdev_name": "Nvme0n1p2" 00:34:41.012 } 00:34:41.012 ]' 00:34:41.012 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:41.012 { 00:34:41.012 "nbd_device": "/dev/nbd0", 00:34:41.012 "bdev_name": "Nvme0n1p1" 00:34:41.012 }, 00:34:41.012 { 00:34:41.013 "nbd_device": "/dev/nbd1", 00:34:41.013 "bdev_name": "Nvme0n1p2" 00:34:41.013 } 00:34:41.013 ]' 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:34:41.013 /dev/nbd1' 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:34:41.013 /dev/nbd1' 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:34:41.013 256+0 records in 00:34:41.013 256+0 records out 00:34:41.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00776488 s, 135 MB/s 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:34:41.013 256+0 records in 00:34:41.013 256+0 records out 00:34:41.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.083312 s, 12.6 MB/s 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:41.013 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:34:41.272 256+0 records in 00:34:41.272 256+0 records out 00:34:41.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0853873 s, 12.3 MB/s 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:41.272 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:41.531 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:41.531 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:41.531 15:28:36 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:41.531 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:41.531 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:41.531 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:41.531 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:41.531 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:41.531 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:41.531 15:28:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:41.790 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:41.790 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:41.790 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:41.790 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:41.790 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:41.790 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:41.790 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:41.790 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:41.790 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:41.790 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:41.790 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:41.790 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:41.790 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:41.790 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local 
nbd_list 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:34:42.049 malloc_lvol_verify 00:34:42.049 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:34:42.308 bac3b5f3-94c8-496b-ad6b-53d8ab566adc 00:34:42.308 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:34:42.569 21ef3370-fd68-45f3-9eaf-20e952b67217 00:34:42.569 15:28:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:34:42.828 /dev/nbd0 00:34:42.828 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:34:42.828 mke2fs 1.47.0 (5-Feb-2023) 00:34:42.828 00:34:42.828 Filesystem too small for a journal 00:34:42.828 Discarding device blocks: 0/1024 done 00:34:42.828 Creating filesystem with 1024 4k blocks and 1024 inodes 00:34:42.828 00:34:42.828 Allocating group tables: 0/1 done 00:34:42.828 Writing inode tables: 0/1 done 00:34:42.828 Writing superblocks and filesystem accounting information: 0/1 done 00:34:42.828 00:34:42.828 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:34:42.828 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:42.828 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:42.828 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:42.828 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:42.828 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:42.828 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:42.828 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 128688 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@948 -- # '[' -z 128688 ']' 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 128688 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128688 00:34:43.088 killing process with pid 128688 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128688' 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 128688 00:34:43.088 15:28:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 128688 00:34:43.348 ************************************ 00:34:43.348 END TEST bdev_nbd 00:34:43.348 ************************************ 00:34:43.348 15:28:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:34:43.348 00:34:43.348 real 0m5.509s 00:34:43.348 user 0m7.856s 00:34:43.348 sys 0m1.949s 00:34:43.348 15:28:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:43.348 15:28:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:43.348 15:28:38 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:34:43.348 skipping fio tests on NVMe due to multi-ns failures. 00:34:43.348 15:28:38 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:34:43.348 15:28:38 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:34:43.348 15:28:38 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:34:43.348 15:28:38 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:34:43.348 15:28:38 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:43.348 15:28:38 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:43.348 15:28:38 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:34:43.348 15:28:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:43.348 15:28:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:43.348 ************************************ 00:34:43.348 START TEST bdev_verify 00:34:43.348 ************************************ 00:34:43.348 15:28:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:43.348 [2024-07-23 15:28:38.695069] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
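bdev_verify drives both GPT partitions with the bdevperf example using a data-verifying workload. In the invocation above, -q 128 is the queue depth, -o 4096 is the I/O size in bytes, -w verify writes a pattern and reads it back for comparison, -t 5 limits the run to five seconds, and -m 0x3 runs reactors on cores 0 and 1 (matching the two reactor start messages that follow); -C is passed through exactly as the test script supplies it. A standalone sketch with the same paths:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3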
00:34:43.348 [2024-07-23 15:28:38.695223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128920 ] 00:34:43.607 [2024-07-23 15:28:38.836170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:43.607 [2024-07-23 15:28:38.881432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.607 [2024-07-23 15:28:38.881535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.866 Running I/O for 5 seconds... 00:34:49.135 00:34:49.135 Latency(us) 00:34:49.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.135 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:49.135 Verification LBA range: start 0x0 length 0x4ff80 00:34:49.135 Nvme0n1p1 : 5.02 4817.61 18.82 0.00 0.00 26494.97 4119.41 24716.43 00:34:49.135 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:49.135 Verification LBA range: start 0x4ff80 length 0x4ff80 00:34:49.135 Nvme0n1p1 : 5.02 4818.33 18.82 0.00 0.00 26474.79 4181.82 28960.67 00:34:49.135 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:49.135 Verification LBA range: start 0x0 length 0x4ff7f 00:34:49.135 Nvme0n1p2 : 5.02 4816.02 18.81 0.00 0.00 26451.74 2683.86 22968.81 00:34:49.135 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:49.135 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:34:49.135 Nvme0n1p2 : 5.02 4817.07 18.82 0.00 0.00 26425.32 3464.05 22094.99 00:34:49.135 =================================================================================================================== 00:34:49.135 Total : 19269.02 75.27 0.00 0.00 26461.71 2683.86 28960.67 00:34:49.395 00:34:49.395 real 0m5.943s 00:34:49.395 user 0m11.204s 00:34:49.395 sys 0m0.216s 00:34:49.395 15:28:44 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:49.395 15:28:44 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:34:49.395 ************************************ 00:34:49.395 END TEST bdev_verify 00:34:49.395 ************************************ 00:34:49.395 15:28:44 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:34:49.395 15:28:44 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:49.395 15:28:44 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:34:49.395 15:28:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:49.395 15:28:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:49.395 ************************************ 00:34:49.395 START TEST bdev_verify_big_io 00:34:49.395 ************************************ 00:34:49.395 15:28:44 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:49.395 [2024-07-23 15:28:44.718054] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:34:49.395 [2024-07-23 15:28:44.718249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128992 ] 00:34:49.654 [2024-07-23 15:28:44.869559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:49.654 [2024-07-23 15:28:44.914714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:49.654 [2024-07-23 15:28:44.914781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:49.913 Running I/O for 5 seconds... 00:34:55.197 00:34:55.197 Latency(us) 00:34:55.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.197 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:55.197 Verification LBA range: start 0x0 length 0x4ff8 00:34:55.197 Nvme0n1p1 : 5.16 495.79 30.99 0.00 0.00 253782.45 36700.16 263641.97 00:34:55.197 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:55.197 Verification LBA range: start 0x4ff8 length 0x4ff8 00:34:55.197 Nvme0n1p1 : 5.18 469.53 29.35 0.00 0.00 267620.17 3885.35 279620.27 00:34:55.197 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:55.197 Verification LBA range: start 0x0 length 0x4ff7 00:34:55.197 Nvme0n1p2 : 5.17 503.61 31.48 0.00 0.00 243384.41 1256.11 205720.62 00:34:55.197 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:55.197 Verification LBA range: start 0x4ff7 length 0x4ff7 00:34:55.197 Nvme0n1p2 : 5.18 456.03 28.50 0.00 0.00 267587.40 2637.04 242670.45 00:34:55.197 =================================================================================================================== 00:34:55.197 Total : 1924.96 120.31 0.00 0.00 257717.84 1256.11 279620.27 00:34:55.764 00:34:55.764 real 0m6.272s 00:34:55.764 user 0m11.796s 00:34:55.764 sys 0m0.256s 00:34:55.764 15:28:50 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:55.764 15:28:50 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:34:55.764 ************************************ 00:34:55.764 END TEST bdev_verify_big_io 00:34:55.764 ************************************ 00:34:55.764 15:28:50 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:34:55.764 15:28:50 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:55.764 15:28:50 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:34:55.764 15:28:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:55.764 15:28:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:55.764 ************************************ 00:34:55.764 START TEST bdev_write_zeroes 00:34:55.764 ************************************ 00:34:55.764 15:28:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:55.764 [2024-07-23 15:28:51.035430] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:34:55.764 [2024-07-23 15:28:51.035578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129079 ] 00:34:55.764 [2024-07-23 15:28:51.176221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.022 [2024-07-23 15:28:51.222513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.022 Running I/O for 1 seconds... 00:34:57.396 00:34:57.396 Latency(us) 00:34:57.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.396 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:57.396 Nvme0n1p1 : 1.01 25940.43 101.33 0.00 0.00 4923.81 2995.93 13169.62 00:34:57.396 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:57.396 Nvme0n1p2 : 1.01 25895.28 101.15 0.00 0.00 4924.61 3229.99 16227.96 00:34:57.396 =================================================================================================================== 00:34:57.396 Total : 51835.71 202.48 0.00 0.00 4924.21 2995.93 16227.96 00:34:57.396 00:34:57.396 real 0m1.710s 00:34:57.396 user 0m1.422s 00:34:57.396 sys 0m0.188s 00:34:57.396 ************************************ 00:34:57.396 END TEST bdev_write_zeroes 00:34:57.396 ************************************ 00:34:57.396 15:28:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:57.396 15:28:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:34:57.396 15:28:52 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:34:57.396 15:28:52 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:57.396 15:28:52 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:34:57.396 15:28:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:57.396 15:28:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:57.396 ************************************ 00:34:57.396 START TEST bdev_json_nonenclosed 00:34:57.396 ************************************ 00:34:57.396 15:28:52 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:57.396 [2024-07-23 15:28:52.828067] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:57.396 [2024-07-23 15:28:52.828261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129111 ] 00:34:57.655 [2024-07-23 15:28:52.981068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.655 [2024-07-23 15:28:53.025055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.655 [2024-07-23 15:28:53.025164] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:34:57.655 [2024-07-23 15:28:53.025204] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:57.655 [2024-07-23 15:28:53.025230] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:57.914 00:34:57.914 real 0m0.390s 00:34:57.914 user 0m0.161s 00:34:57.914 sys 0m0.129s 00:34:57.914 15:28:53 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:34:57.914 15:28:53 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:57.914 ************************************ 00:34:57.914 END TEST bdev_json_nonenclosed 00:34:57.914 ************************************ 00:34:57.914 15:28:53 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:34:57.914 15:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:34:57.914 15:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # true 00:34:57.914 15:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:57.914 15:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:34:57.914 15:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:57.914 15:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:57.914 ************************************ 00:34:57.914 START TEST bdev_json_nonarray 00:34:57.914 ************************************ 00:34:57.914 15:28:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:57.914 [2024-07-23 15:28:53.281339] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:34:57.914 [2024-07-23 15:28:53.281541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129137 ] 00:34:58.172 [2024-07-23 15:28:53.436695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.172 [2024-07-23 15:28:53.484402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.172 [2024-07-23 15:28:53.484502] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
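For context on the two negative cases above: bdevperf's --json argument expects the standard SPDK configuration document, a top-level object whose "subsystems" member is an array of subsystem objects. A minimal sketch of a well-formed input follows; it assumes nothing about the actual nonenclosed.json / nonarray.json fixtures beyond the error text they trigger.

# Illustrative only; the real fixtures live under test/bdev/ in the repo checked out above.
printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > /tmp/bdev_ok.json
# Dropping the outer {} reproduces "not enclosed in {}"; supplying "subsystems" as an
# object instead of an array reproduces "'subsystems' should be an array".

Both malformed inputs are expected to make bdevperf exit non-zero, which is why the harness records es=234 and follows each run with "true".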
00:34:58.172 [2024-07-23 15:28:53.484536] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:58.172 [2024-07-23 15:28:53.484550] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:58.172 00:34:58.172 real 0m0.400s 00:34:58.172 user 0m0.160s 00:34:58.172 sys 0m0.139s 00:34:58.172 ************************************ 00:34:58.172 END TEST bdev_json_nonarray 00:34:58.172 ************************************ 00:34:58.172 15:28:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:34:58.172 15:28:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:58.172 15:28:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:34:58.430 15:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:34:58.430 15:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # true 00:34:58.430 15:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:34:58.430 15:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:34:58.430 15:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:34:58.430 15:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:58.430 15:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:58.430 15:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:34:58.430 ************************************ 00:34:58.430 START TEST bdev_gpt_uuid 00:34:58.430 ************************************ 00:34:58.430 15:28:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:34:58.430 15:28:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:34:58.430 15:28:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:34:58.430 15:28:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:34:58.430 15:28:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=129161 00:34:58.430 15:28:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:58.430 15:28:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 129161 00:34:58.430 15:28:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 129161 ']' 00:34:58.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:58.430 15:28:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:58.430 15:28:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:58.430 15:28:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:58.430 15:28:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:58.430 15:28:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:34:58.430 [2024-07-23 15:28:53.728428] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:34:58.430 [2024-07-23 15:28:53.728566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129161 ] 00:34:58.688 [2024-07-23 15:28:53.868713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.688 [2024-07-23 15:28:53.916555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:34:58.947 Some configs were skipped because the RPC state that can call them passed over. 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:34:58.947 { 00:34:58.947 "name": "Nvme0n1p1", 00:34:58.947 "aliases": [ 00:34:58.947 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:34:58.947 ], 00:34:58.947 "product_name": "GPT Disk", 00:34:58.947 "block_size": 4096, 00:34:58.947 "num_blocks": 655104, 00:34:58.947 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:34:58.947 "assigned_rate_limits": { 00:34:58.947 "rw_ios_per_sec": 0, 00:34:58.947 "rw_mbytes_per_sec": 0, 00:34:58.947 "r_mbytes_per_sec": 0, 00:34:58.947 "w_mbytes_per_sec": 0 00:34:58.947 }, 00:34:58.947 "claimed": false, 00:34:58.947 "zoned": false, 00:34:58.947 "supported_io_types": { 00:34:58.947 "read": true, 00:34:58.947 "write": true, 00:34:58.947 "unmap": true, 00:34:58.947 "flush": true, 00:34:58.947 "reset": true, 00:34:58.947 "nvme_admin": false, 00:34:58.947 "nvme_io": false, 00:34:58.947 "nvme_io_md": false, 00:34:58.947 "write_zeroes": true, 00:34:58.947 "zcopy": false, 00:34:58.947 "get_zone_info": false, 00:34:58.947 "zone_management": false, 00:34:58.947 "zone_append": false, 00:34:58.947 "compare": true, 00:34:58.947 "compare_and_write": false, 00:34:58.947 "abort": true, 00:34:58.947 "seek_hole": false, 00:34:58.947 "seek_data": false, 00:34:58.947 "copy": true, 00:34:58.947 "nvme_iov_md": false 00:34:58.947 }, 00:34:58.947 "driver_specific": { 
00:34:58.947 "gpt": { 00:34:58.947 "base_bdev": "Nvme0n1", 00:34:58.947 "offset_blocks": 256, 00:34:58.947 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:34:58.947 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:34:58.947 "partition_name": "SPDK_TEST_first" 00:34:58.947 } 00:34:58.947 } 00:34:58.947 } 00:34:58.947 ]' 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:34:58.947 { 00:34:58.947 "name": "Nvme0n1p2", 00:34:58.947 "aliases": [ 00:34:58.947 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:34:58.947 ], 00:34:58.947 "product_name": "GPT Disk", 00:34:58.947 "block_size": 4096, 00:34:58.947 "num_blocks": 655103, 00:34:58.947 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:34:58.947 "assigned_rate_limits": { 00:34:58.947 "rw_ios_per_sec": 0, 00:34:58.947 "rw_mbytes_per_sec": 0, 00:34:58.947 "r_mbytes_per_sec": 0, 00:34:58.947 "w_mbytes_per_sec": 0 00:34:58.947 }, 00:34:58.947 "claimed": false, 00:34:58.947 "zoned": false, 00:34:58.947 "supported_io_types": { 00:34:58.947 "read": true, 00:34:58.947 "write": true, 00:34:58.947 "unmap": true, 00:34:58.947 "flush": true, 00:34:58.947 "reset": true, 00:34:58.947 "nvme_admin": false, 00:34:58.947 "nvme_io": false, 00:34:58.947 "nvme_io_md": false, 00:34:58.947 "write_zeroes": true, 00:34:58.947 "zcopy": false, 00:34:58.947 "get_zone_info": false, 00:34:58.947 "zone_management": false, 00:34:58.947 "zone_append": false, 00:34:58.947 "compare": true, 00:34:58.947 "compare_and_write": false, 00:34:58.947 "abort": true, 00:34:58.947 "seek_hole": false, 00:34:58.947 "seek_data": false, 00:34:58.947 "copy": true, 00:34:58.947 "nvme_iov_md": false 00:34:58.947 }, 00:34:58.947 "driver_specific": { 00:34:58.947 "gpt": { 00:34:58.947 "base_bdev": "Nvme0n1", 00:34:58.947 "offset_blocks": 655360, 00:34:58.947 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:34:58.947 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:34:58.947 "partition_name": "SPDK_TEST_second" 00:34:58.947 } 00:34:58.947 } 00:34:58.947 } 00:34:58.947 ]' 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:34:58.947 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:34:59.205 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:34:59.205 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:34:59.205 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:34:59.205 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 129161 00:34:59.205 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 129161 ']' 00:34:59.206 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 129161 00:34:59.206 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:34:59.206 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:59.206 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 129161 00:34:59.206 killing process with pid 129161 00:34:59.206 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:59.206 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:59.206 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 129161' 00:34:59.206 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 129161 00:34:59.206 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 129161 00:34:59.464 00:34:59.464 real 0m1.152s 00:34:59.464 user 0m1.056s 00:34:59.464 sys 0m0.406s 00:34:59.464 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:59.464 15:28:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:34:59.464 ************************************ 00:34:59.464 END TEST bdev_gpt_uuid 00:34:59.464 ************************************ 00:34:59.464 15:28:54 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:34:59.464 15:28:54 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:34:59.464 15:28:54 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:34:59.464 15:28:54 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:34:59.464 15:28:54 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:34:59.464 15:28:54 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:59.464 15:28:54 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:34:59.464 15:28:54 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:34:59.464 15:28:54 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:34:59.464 15:28:54 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:00.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:35:00.031 Waiting for block devices as requested 
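The gpt_uuid checks above fetch each partition bdev by its GPT unique partition GUID and compare the alias against driver_specific.gpt.unique_partition_guid. Outside the test harness, roughly the same verification can be done against a running spdk_tgt with the RPC client; a minimal sketch, assuming the default /var/tmp/spdk.sock socket and the first partition GUID from this run:

# Sketch only; mirrors the jq filters used by bdev_gpt_uuid above.
GUID=6f89f330-603b-4116-ac73-2ca8eae53030
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$GUID" \
  | jq -r '.[0].aliases[0], .[0].driver_specific.gpt.unique_partition_guid'
# Both printed lines should equal $GUID for a healthy GPT partition bdev.

The rebind and wipefs lines that follow are the cleanup path: setup.sh reset hands the device back to the kernel nvme driver, and the GPT signatures written for the test are erased.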
00:35:00.031 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:00.031 15:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:35:00.031 15:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:35:00.290 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:35:00.290 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:35:00.290 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:35:00.290 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:35:00.290 15:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:35:00.290 00:35:00.290 real 0m30.664s 00:35:00.290 user 0m44.089s 00:35:00.290 sys 0m6.863s 00:35:00.290 15:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:00.290 15:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:35:00.290 ************************************ 00:35:00.290 END TEST blockdev_nvme_gpt 00:35:00.290 ************************************ 00:35:00.548 15:28:55 -- common/autotest_common.sh@1142 -- # return 0 00:35:00.548 15:28:55 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:35:00.548 15:28:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:00.548 15:28:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:00.548 15:28:55 -- common/autotest_common.sh@10 -- # set +x 00:35:00.548 ************************************ 00:35:00.548 START TEST nvme 00:35:00.548 ************************************ 00:35:00.548 15:28:55 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:35:00.548 * Looking for test storage... 00:35:00.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:35:00.548 15:28:55 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:01.115 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:35:01.115 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:35:02.052 15:28:57 nvme -- nvme/nvme.sh@79 -- # uname 00:35:02.052 15:28:57 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:35:02.052 15:28:57 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:35:02.052 15:28:57 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:35:02.052 15:28:57 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:35:02.052 15:28:57 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:35:02.052 15:28:57 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:35:02.052 15:28:57 nvme -- common/autotest_common.sh@1069 -- # stubpid=129510 00:35:02.052 Waiting for stub to ready for secondary processes... 00:35:02.052 15:28:57 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:35:02.052 15:28:57 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:35:02.052 15:28:57 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:35:02.052 15:28:57 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/129510 ]] 00:35:02.052 15:28:57 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:35:02.052 [2024-07-23 15:28:57.308216] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:35:02.052 [2024-07-23 15:28:57.308441] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:35:03.013 15:28:58 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:35:03.013 15:28:58 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/129510 ]] 00:35:03.013 15:28:58 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:35:03.013 [2024-07-23 15:28:58.333294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:03.013 [2024-07-23 15:28:58.363379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:03.013 [2024-07-23 15:28:58.363453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.013 [2024-07-23 15:28:58.363568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:03.013 [2024-07-23 15:28:58.370192] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:35:03.013 [2024-07-23 15:28:58.370237] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:35:03.013 [2024-07-23 15:28:58.383282] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:35:03.013 [2024-07-23 15:28:58.383507] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:35:03.951 done. 00:35:03.951 15:28:59 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:35:03.951 15:28:59 nvme -- common/autotest_common.sh@1076 -- # echo done. 00:35:03.951 15:28:59 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:35:03.951 15:28:59 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:35:03.951 15:28:59 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:03.951 15:28:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:03.951 ************************************ 00:35:03.951 START TEST nvme_reset 00:35:03.951 ************************************ 00:35:03.951 15:28:59 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:35:04.210 Initializing NVMe Controllers 00:35:04.210 Skipping QEMU NVMe SSD at 0000:00:10.0 00:35:04.210 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:35:04.210 ************************************ 00:35:04.210 END TEST nvme_reset 00:35:04.210 ************************************ 00:35:04.210 00:35:04.210 real 0m0.282s 00:35:04.210 user 0m0.084s 00:35:04.210 sys 0m0.153s 00:35:04.210 15:28:59 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:04.210 15:28:59 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:35:04.210 15:28:59 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:04.210 15:28:59 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:35:04.210 15:28:59 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:04.210 15:28:59 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:04.210 15:28:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:04.210 ************************************ 00:35:04.210 START TEST nvme_identify 00:35:04.210 ************************************ 00:35:04.210 
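The identify test below first builds its list of controller addresses from gen_nvme.sh, which prints a JSON bdev config with one attach entry per local NVMe controller; the traddr fields are exactly the PCI BDFs to probe. A stand-alone sketch of that enumeration step, using the same jq filter that get_nvme_bdfs applies further down:

# Sketch of the BDF discovery used by get_nvme_bdfs; rootdir matches the repo path above.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
printf 'found controller: %s\n' "${bdfs[@]}"   # this run reports only 0000:00:10.0

Each address is then passed to spdk_nvme_identify, whose controller and namespace dump makes up the bulk of the output below.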
15:28:59 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:35:04.210 15:28:59 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:35:04.210 15:28:59 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:35:04.210 15:28:59 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:35:04.210 15:28:59 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:35:04.210 15:28:59 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:35:04.210 15:28:59 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:35:04.210 15:28:59 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:04.210 15:28:59 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:04.210 15:28:59 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:35:04.469 15:28:59 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:35:04.469 15:28:59 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:35:04.469 15:28:59 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:35:04.469 [2024-07-23 15:28:59.878127] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 129545 terminated unexpected 00:35:04.469 ===================================================== 00:35:04.469 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:04.469 ===================================================== 00:35:04.469 Controller Capabilities/Features 00:35:04.469 ================================ 00:35:04.469 Vendor ID: 1b36 00:35:04.469 Subsystem Vendor ID: 1af4 00:35:04.469 Serial Number: 12340 00:35:04.469 Model Number: QEMU NVMe Ctrl 00:35:04.469 Firmware Version: 8.0.0 00:35:04.469 Recommended Arb Burst: 6 00:35:04.469 IEEE OUI Identifier: 00 54 52 00:35:04.469 Multi-path I/O 00:35:04.469 May have multiple subsystem ports: No 00:35:04.469 May have multiple controllers: No 00:35:04.469 Associated with SR-IOV VF: No 00:35:04.469 Max Data Transfer Size: 524288 00:35:04.469 Max Number of Namespaces: 256 00:35:04.469 Max Number of I/O Queues: 64 00:35:04.469 NVMe Specification Version (VS): 1.4 00:35:04.469 NVMe Specification Version (Identify): 1.4 00:35:04.469 Maximum Queue Entries: 2048 00:35:04.469 Contiguous Queues Required: Yes 00:35:04.469 Arbitration Mechanisms Supported 00:35:04.469 Weighted Round Robin: Not Supported 00:35:04.469 Vendor Specific: Not Supported 00:35:04.469 Reset Timeout: 7500 ms 00:35:04.469 Doorbell Stride: 4 bytes 00:35:04.469 NVM Subsystem Reset: Not Supported 00:35:04.469 Command Sets Supported 00:35:04.469 NVM Command Set: Supported 00:35:04.469 Boot Partition: Not Supported 00:35:04.469 Memory Page Size Minimum: 4096 bytes 00:35:04.469 Memory Page Size Maximum: 65536 bytes 00:35:04.469 Persistent Memory Region: Not Supported 00:35:04.469 Optional Asynchronous Events Supported 00:35:04.469 Namespace Attribute Notices: Supported 00:35:04.469 Firmware Activation Notices: Not Supported 00:35:04.469 ANA Change Notices: Not Supported 00:35:04.469 PLE Aggregate Log Change Notices: Not Supported 00:35:04.469 LBA Status Info Alert Notices: Not Supported 00:35:04.469 EGE Aggregate Log Change Notices: Not Supported 00:35:04.469 Normal NVM Subsystem Shutdown event: Not Supported 00:35:04.469 Zone Descriptor Change Notices: Not Supported 00:35:04.469 
Discovery Log Change Notices: Not Supported 00:35:04.469 Controller Attributes 00:35:04.469 128-bit Host Identifier: Not Supported 00:35:04.469 Non-Operational Permissive Mode: Not Supported 00:35:04.469 NVM Sets: Not Supported 00:35:04.469 Read Recovery Levels: Not Supported 00:35:04.469 Endurance Groups: Not Supported 00:35:04.469 Predictable Latency Mode: Not Supported 00:35:04.469 Traffic Based Keep ALive: Not Supported 00:35:04.469 Namespace Granularity: Not Supported 00:35:04.469 SQ Associations: Not Supported 00:35:04.469 UUID List: Not Supported 00:35:04.469 Multi-Domain Subsystem: Not Supported 00:35:04.469 Fixed Capacity Management: Not Supported 00:35:04.469 Variable Capacity Management: Not Supported 00:35:04.469 Delete Endurance Group: Not Supported 00:35:04.469 Delete NVM Set: Not Supported 00:35:04.469 Extended LBA Formats Supported: Supported 00:35:04.469 Flexible Data Placement Supported: Not Supported 00:35:04.469 00:35:04.469 Controller Memory Buffer Support 00:35:04.469 ================================ 00:35:04.469 Supported: No 00:35:04.469 00:35:04.469 Persistent Memory Region Support 00:35:04.469 ================================ 00:35:04.469 Supported: No 00:35:04.469 00:35:04.469 Admin Command Set Attributes 00:35:04.469 ============================ 00:35:04.469 Security Send/Receive: Not Supported 00:35:04.469 Format NVM: Supported 00:35:04.469 Firmware Activate/Download: Not Supported 00:35:04.469 Namespace Management: Supported 00:35:04.469 Device Self-Test: Not Supported 00:35:04.469 Directives: Supported 00:35:04.469 NVMe-MI: Not Supported 00:35:04.469 Virtualization Management: Not Supported 00:35:04.469 Doorbell Buffer Config: Supported 00:35:04.469 Get LBA Status Capability: Not Supported 00:35:04.469 Command & Feature Lockdown Capability: Not Supported 00:35:04.469 Abort Command Limit: 4 00:35:04.469 Async Event Request Limit: 4 00:35:04.469 Number of Firmware Slots: N/A 00:35:04.469 Firmware Slot 1 Read-Only: N/A 00:35:04.469 Firmware Activation Without Reset: N/A 00:35:04.469 Multiple Update Detection Support: N/A 00:35:04.469 Firmware Update Granularity: No Information Provided 00:35:04.469 Per-Namespace SMART Log: Yes 00:35:04.469 Asymmetric Namespace Access Log Page: Not Supported 00:35:04.469 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:35:04.469 Command Effects Log Page: Supported 00:35:04.469 Get Log Page Extended Data: Supported 00:35:04.469 Telemetry Log Pages: Not Supported 00:35:04.469 Persistent Event Log Pages: Not Supported 00:35:04.469 Supported Log Pages Log Page: May Support 00:35:04.469 Commands Supported & Effects Log Page: Not Supported 00:35:04.469 Feature Identifiers & Effects Log Page:May Support 00:35:04.469 NVMe-MI Commands & Effects Log Page: May Support 00:35:04.470 Data Area 4 for Telemetry Log: Not Supported 00:35:04.470 Error Log Page Entries Supported: 1 00:35:04.470 Keep Alive: Not Supported 00:35:04.470 00:35:04.470 NVM Command Set Attributes 00:35:04.470 ========================== 00:35:04.470 Submission Queue Entry Size 00:35:04.470 Max: 64 00:35:04.470 Min: 64 00:35:04.470 Completion Queue Entry Size 00:35:04.470 Max: 16 00:35:04.470 Min: 16 00:35:04.470 Number of Namespaces: 256 00:35:04.470 Compare Command: Supported 00:35:04.470 Write Uncorrectable Command: Not Supported 00:35:04.470 Dataset Management Command: Supported 00:35:04.470 Write Zeroes Command: Supported 00:35:04.470 Set Features Save Field: Supported 00:35:04.470 Reservations: Not Supported 00:35:04.470 Timestamp: Supported 00:35:04.470 Copy: Supported 
00:35:04.470 Volatile Write Cache: Present 00:35:04.470 Atomic Write Unit (Normal): 1 00:35:04.470 Atomic Write Unit (PFail): 1 00:35:04.470 Atomic Compare & Write Unit: 1 00:35:04.470 Fused Compare & Write: Not Supported 00:35:04.470 Scatter-Gather List 00:35:04.470 SGL Command Set: Supported 00:35:04.470 SGL Keyed: Not Supported 00:35:04.470 SGL Bit Bucket Descriptor: Not Supported 00:35:04.470 SGL Metadata Pointer: Not Supported 00:35:04.470 Oversized SGL: Not Supported 00:35:04.470 SGL Metadata Address: Not Supported 00:35:04.470 SGL Offset: Not Supported 00:35:04.470 Transport SGL Data Block: Not Supported 00:35:04.470 Replay Protected Memory Block: Not Supported 00:35:04.470 00:35:04.470 Firmware Slot Information 00:35:04.470 ========================= 00:35:04.470 Active slot: 1 00:35:04.470 Slot 1 Firmware Revision: 1.0 00:35:04.470 00:35:04.470 00:35:04.470 Commands Supported and Effects 00:35:04.470 ============================== 00:35:04.470 Admin Commands 00:35:04.470 -------------- 00:35:04.470 Delete I/O Submission Queue (00h): Supported 00:35:04.470 Create I/O Submission Queue (01h): Supported 00:35:04.470 Get Log Page (02h): Supported 00:35:04.470 Delete I/O Completion Queue (04h): Supported 00:35:04.470 Create I/O Completion Queue (05h): Supported 00:35:04.470 Identify (06h): Supported 00:35:04.470 Abort (08h): Supported 00:35:04.470 Set Features (09h): Supported 00:35:04.470 Get Features (0Ah): Supported 00:35:04.470 Asynchronous Event Request (0Ch): Supported 00:35:04.470 Namespace Attachment (15h): Supported NS-Inventory-Change 00:35:04.470 Directive Send (19h): Supported 00:35:04.470 Directive Receive (1Ah): Supported 00:35:04.470 Virtualization Management (1Ch): Supported 00:35:04.470 Doorbell Buffer Config (7Ch): Supported 00:35:04.470 Format NVM (80h): Supported LBA-Change 00:35:04.470 I/O Commands 00:35:04.470 ------------ 00:35:04.470 Flush (00h): Supported LBA-Change 00:35:04.470 Write (01h): Supported LBA-Change 00:35:04.470 Read (02h): Supported 00:35:04.470 Compare (05h): Supported 00:35:04.470 Write Zeroes (08h): Supported LBA-Change 00:35:04.470 Dataset Management (09h): Supported LBA-Change 00:35:04.470 Unknown (0Ch): Supported 00:35:04.470 Unknown (12h): Supported 00:35:04.470 Copy (19h): Supported LBA-Change 00:35:04.470 Unknown (1Dh): Supported LBA-Change 00:35:04.470 00:35:04.470 Error Log 00:35:04.470 ========= 00:35:04.470 00:35:04.470 Arbitration 00:35:04.470 =========== 00:35:04.470 Arbitration Burst: no limit 00:35:04.470 00:35:04.470 Power Management 00:35:04.470 ================ 00:35:04.470 Number of Power States: 1 00:35:04.470 Current Power State: Power State #0 00:35:04.470 Power State #0: 00:35:04.470 Max Power: 25.00 W 00:35:04.470 Non-Operational State: Operational 00:35:04.470 Entry Latency: 16 microseconds 00:35:04.470 Exit Latency: 4 microseconds 00:35:04.470 Relative Read Throughput: 0 00:35:04.470 Relative Read Latency: 0 00:35:04.470 Relative Write Throughput: 0 00:35:04.470 Relative Write Latency: 0 00:35:04.728 Idle Power: Not Reported 00:35:04.728 Active Power: Not Reported 00:35:04.728 Non-Operational Permissive Mode: Not Supported 00:35:04.728 00:35:04.728 Health Information 00:35:04.728 ================== 00:35:04.728 Critical Warnings: 00:35:04.728 Available Spare Space: OK 00:35:04.728 Temperature: OK 00:35:04.728 Device Reliability: OK 00:35:04.728 Read Only: No 00:35:04.728 Volatile Memory Backup: OK 00:35:04.728 Current Temperature: 323 Kelvin (50 Celsius) 00:35:04.728 Temperature Threshold: 343 Kelvin (70 Celsius) 
00:35:04.728 Available Spare: 0% 00:35:04.728 Available Spare Threshold: 0% 00:35:04.728 Life Percentage Used: 0% 00:35:04.728 Data Units Read: 4694 00:35:04.728 Data Units Written: 4342 00:35:04.728 Host Read Commands: 226023 00:35:04.728 Host Write Commands: 238787 00:35:04.728 Controller Busy Time: 0 minutes 00:35:04.728 Power Cycles: 0 00:35:04.728 Power On Hours: 0 hours 00:35:04.728 Unsafe Shutdowns: 0 00:35:04.728 Unrecoverable Media Errors: 0 00:35:04.728 Lifetime Error Log Entries: 0 00:35:04.728 Warning Temperature Time: 0 minutes 00:35:04.728 Critical Temperature Time: 0 minutes 00:35:04.728 00:35:04.728 Number of Queues 00:35:04.728 ================ 00:35:04.728 Number of I/O Submission Queues: 64 00:35:04.728 Number of I/O Completion Queues: 64 00:35:04.728 00:35:04.728 ZNS Specific Controller Data 00:35:04.728 ============================ 00:35:04.728 Zone Append Size Limit: 0 00:35:04.729 00:35:04.729 00:35:04.729 Active Namespaces 00:35:04.729 ================= 00:35:04.729 Namespace ID:1 00:35:04.729 Error Recovery Timeout: Unlimited 00:35:04.729 Command Set Identifier: NVM (00h) 00:35:04.729 Deallocate: Supported 00:35:04.729 Deallocated/Unwritten Error: Supported 00:35:04.729 Deallocated Read Value: All 0x00 00:35:04.729 Deallocate in Write Zeroes: Not Supported 00:35:04.729 Deallocated Guard Field: 0xFFFF 00:35:04.729 Flush: Supported 00:35:04.729 Reservation: Not Supported 00:35:04.729 Namespace Sharing Capabilities: Private 00:35:04.729 Size (in LBAs): 1310720 (5GiB) 00:35:04.729 Capacity (in LBAs): 1310720 (5GiB) 00:35:04.729 Utilization (in LBAs): 1310720 (5GiB) 00:35:04.729 Thin Provisioning: Not Supported 00:35:04.729 Per-NS Atomic Units: No 00:35:04.729 Maximum Single Source Range Length: 128 00:35:04.729 Maximum Copy Length: 128 00:35:04.729 Maximum Source Range Count: 128 00:35:04.729 NGUID/EUI64 Never Reused: No 00:35:04.729 Namespace Write Protected: No 00:35:04.729 Number of LBA Formats: 8 00:35:04.729 Current LBA Format: LBA Format #04 00:35:04.729 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:04.729 LBA Format #01: Data Size: 512 Metadata Size: 8 00:35:04.729 LBA Format #02: Data Size: 512 Metadata Size: 16 00:35:04.729 LBA Format #03: Data Size: 512 Metadata Size: 64 00:35:04.729 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:35:04.729 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:35:04.729 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:35:04.729 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:35:04.729 00:35:04.729 NVM Specific Namespace Data 00:35:04.729 =========================== 00:35:04.729 Logical Block Storage Tag Mask: 0 00:35:04.729 Protection Information Capabilities: 00:35:04.729 16b Guard Protection Information Storage Tag Support: No 00:35:04.729 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:35:04.729 Storage Tag Check Read Support: No 00:35:04.729 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.729 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.729 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.729 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.729 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.729 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.729 Extended LBA 
Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.729 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.729 15:28:59 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:35:04.729 15:28:59 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:35:04.988 ===================================================== 00:35:04.988 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:04.988 ===================================================== 00:35:04.988 Controller Capabilities/Features 00:35:04.988 ================================ 00:35:04.988 Vendor ID: 1b36 00:35:04.988 Subsystem Vendor ID: 1af4 00:35:04.988 Serial Number: 12340 00:35:04.988 Model Number: QEMU NVMe Ctrl 00:35:04.988 Firmware Version: 8.0.0 00:35:04.988 Recommended Arb Burst: 6 00:35:04.988 IEEE OUI Identifier: 00 54 52 00:35:04.988 Multi-path I/O 00:35:04.988 May have multiple subsystem ports: No 00:35:04.988 May have multiple controllers: No 00:35:04.988 Associated with SR-IOV VF: No 00:35:04.988 Max Data Transfer Size: 524288 00:35:04.988 Max Number of Namespaces: 256 00:35:04.988 Max Number of I/O Queues: 64 00:35:04.988 NVMe Specification Version (VS): 1.4 00:35:04.988 NVMe Specification Version (Identify): 1.4 00:35:04.988 Maximum Queue Entries: 2048 00:35:04.988 Contiguous Queues Required: Yes 00:35:04.988 Arbitration Mechanisms Supported 00:35:04.988 Weighted Round Robin: Not Supported 00:35:04.989 Vendor Specific: Not Supported 00:35:04.989 Reset Timeout: 7500 ms 00:35:04.989 Doorbell Stride: 4 bytes 00:35:04.989 NVM Subsystem Reset: Not Supported 00:35:04.989 Command Sets Supported 00:35:04.989 NVM Command Set: Supported 00:35:04.989 Boot Partition: Not Supported 00:35:04.989 Memory Page Size Minimum: 4096 bytes 00:35:04.989 Memory Page Size Maximum: 65536 bytes 00:35:04.989 Persistent Memory Region: Not Supported 00:35:04.989 Optional Asynchronous Events Supported 00:35:04.989 Namespace Attribute Notices: Supported 00:35:04.989 Firmware Activation Notices: Not Supported 00:35:04.989 ANA Change Notices: Not Supported 00:35:04.989 PLE Aggregate Log Change Notices: Not Supported 00:35:04.989 LBA Status Info Alert Notices: Not Supported 00:35:04.989 EGE Aggregate Log Change Notices: Not Supported 00:35:04.989 Normal NVM Subsystem Shutdown event: Not Supported 00:35:04.989 Zone Descriptor Change Notices: Not Supported 00:35:04.989 Discovery Log Change Notices: Not Supported 00:35:04.989 Controller Attributes 00:35:04.989 128-bit Host Identifier: Not Supported 00:35:04.989 Non-Operational Permissive Mode: Not Supported 00:35:04.989 NVM Sets: Not Supported 00:35:04.989 Read Recovery Levels: Not Supported 00:35:04.989 Endurance Groups: Not Supported 00:35:04.989 Predictable Latency Mode: Not Supported 00:35:04.989 Traffic Based Keep ALive: Not Supported 00:35:04.989 Namespace Granularity: Not Supported 00:35:04.989 SQ Associations: Not Supported 00:35:04.989 UUID List: Not Supported 00:35:04.989 Multi-Domain Subsystem: Not Supported 00:35:04.989 Fixed Capacity Management: Not Supported 00:35:04.989 Variable Capacity Management: Not Supported 00:35:04.989 Delete Endurance Group: Not Supported 00:35:04.989 Delete NVM Set: Not Supported 00:35:04.989 Extended LBA Formats Supported: Supported 00:35:04.989 Flexible Data Placement Supported: Not Supported 00:35:04.989 00:35:04.989 Controller Memory Buffer Support 00:35:04.989 
================================ 00:35:04.989 Supported: No 00:35:04.989 00:35:04.989 Persistent Memory Region Support 00:35:04.989 ================================ 00:35:04.989 Supported: No 00:35:04.989 00:35:04.989 Admin Command Set Attributes 00:35:04.989 ============================ 00:35:04.989 Security Send/Receive: Not Supported 00:35:04.989 Format NVM: Supported 00:35:04.989 Firmware Activate/Download: Not Supported 00:35:04.989 Namespace Management: Supported 00:35:04.989 Device Self-Test: Not Supported 00:35:04.989 Directives: Supported 00:35:04.989 NVMe-MI: Not Supported 00:35:04.989 Virtualization Management: Not Supported 00:35:04.989 Doorbell Buffer Config: Supported 00:35:04.989 Get LBA Status Capability: Not Supported 00:35:04.989 Command & Feature Lockdown Capability: Not Supported 00:35:04.989 Abort Command Limit: 4 00:35:04.989 Async Event Request Limit: 4 00:35:04.989 Number of Firmware Slots: N/A 00:35:04.989 Firmware Slot 1 Read-Only: N/A 00:35:04.989 Firmware Activation Without Reset: N/A 00:35:04.989 Multiple Update Detection Support: N/A 00:35:04.989 Firmware Update Granularity: No Information Provided 00:35:04.989 Per-Namespace SMART Log: Yes 00:35:04.989 Asymmetric Namespace Access Log Page: Not Supported 00:35:04.989 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:35:04.989 Command Effects Log Page: Supported 00:35:04.989 Get Log Page Extended Data: Supported 00:35:04.989 Telemetry Log Pages: Not Supported 00:35:04.989 Persistent Event Log Pages: Not Supported 00:35:04.989 Supported Log Pages Log Page: May Support 00:35:04.989 Commands Supported & Effects Log Page: Not Supported 00:35:04.989 Feature Identifiers & Effects Log Page:May Support 00:35:04.989 NVMe-MI Commands & Effects Log Page: May Support 00:35:04.989 Data Area 4 for Telemetry Log: Not Supported 00:35:04.989 Error Log Page Entries Supported: 1 00:35:04.989 Keep Alive: Not Supported 00:35:04.989 00:35:04.989 NVM Command Set Attributes 00:35:04.989 ========================== 00:35:04.989 Submission Queue Entry Size 00:35:04.989 Max: 64 00:35:04.989 Min: 64 00:35:04.989 Completion Queue Entry Size 00:35:04.989 Max: 16 00:35:04.989 Min: 16 00:35:04.989 Number of Namespaces: 256 00:35:04.989 Compare Command: Supported 00:35:04.989 Write Uncorrectable Command: Not Supported 00:35:04.989 Dataset Management Command: Supported 00:35:04.989 Write Zeroes Command: Supported 00:35:04.989 Set Features Save Field: Supported 00:35:04.989 Reservations: Not Supported 00:35:04.989 Timestamp: Supported 00:35:04.989 Copy: Supported 00:35:04.989 Volatile Write Cache: Present 00:35:04.989 Atomic Write Unit (Normal): 1 00:35:04.989 Atomic Write Unit (PFail): 1 00:35:04.989 Atomic Compare & Write Unit: 1 00:35:04.989 Fused Compare & Write: Not Supported 00:35:04.989 Scatter-Gather List 00:35:04.989 SGL Command Set: Supported 00:35:04.989 SGL Keyed: Not Supported 00:35:04.989 SGL Bit Bucket Descriptor: Not Supported 00:35:04.989 SGL Metadata Pointer: Not Supported 00:35:04.989 Oversized SGL: Not Supported 00:35:04.989 SGL Metadata Address: Not Supported 00:35:04.989 SGL Offset: Not Supported 00:35:04.989 Transport SGL Data Block: Not Supported 00:35:04.989 Replay Protected Memory Block: Not Supported 00:35:04.989 00:35:04.989 Firmware Slot Information 00:35:04.989 ========================= 00:35:04.989 Active slot: 1 00:35:04.989 Slot 1 Firmware Revision: 1.0 00:35:04.989 00:35:04.989 00:35:04.989 Commands Supported and Effects 00:35:04.989 ============================== 00:35:04.989 Admin Commands 00:35:04.989 -------------- 
00:35:04.989 Delete I/O Submission Queue (00h): Supported 00:35:04.989 Create I/O Submission Queue (01h): Supported 00:35:04.989 Get Log Page (02h): Supported 00:35:04.989 Delete I/O Completion Queue (04h): Supported 00:35:04.989 Create I/O Completion Queue (05h): Supported 00:35:04.989 Identify (06h): Supported 00:35:04.989 Abort (08h): Supported 00:35:04.989 Set Features (09h): Supported 00:35:04.989 Get Features (0Ah): Supported 00:35:04.989 Asynchronous Event Request (0Ch): Supported 00:35:04.989 Namespace Attachment (15h): Supported NS-Inventory-Change 00:35:04.989 Directive Send (19h): Supported 00:35:04.989 Directive Receive (1Ah): Supported 00:35:04.989 Virtualization Management (1Ch): Supported 00:35:04.989 Doorbell Buffer Config (7Ch): Supported 00:35:04.989 Format NVM (80h): Supported LBA-Change 00:35:04.989 I/O Commands 00:35:04.989 ------------ 00:35:04.989 Flush (00h): Supported LBA-Change 00:35:04.989 Write (01h): Supported LBA-Change 00:35:04.989 Read (02h): Supported 00:35:04.989 Compare (05h): Supported 00:35:04.989 Write Zeroes (08h): Supported LBA-Change 00:35:04.989 Dataset Management (09h): Supported LBA-Change 00:35:04.989 Unknown (0Ch): Supported 00:35:04.989 Unknown (12h): Supported 00:35:04.989 Copy (19h): Supported LBA-Change 00:35:04.989 Unknown (1Dh): Supported LBA-Change 00:35:04.989 00:35:04.989 Error Log 00:35:04.989 ========= 00:35:04.989 00:35:04.989 Arbitration 00:35:04.989 =========== 00:35:04.989 Arbitration Burst: no limit 00:35:04.989 00:35:04.989 Power Management 00:35:04.989 ================ 00:35:04.989 Number of Power States: 1 00:35:04.989 Current Power State: Power State #0 00:35:04.989 Power State #0: 00:35:04.989 Max Power: 25.00 W 00:35:04.989 Non-Operational State: Operational 00:35:04.989 Entry Latency: 16 microseconds 00:35:04.989 Exit Latency: 4 microseconds 00:35:04.989 Relative Read Throughput: 0 00:35:04.989 Relative Read Latency: 0 00:35:04.989 Relative Write Throughput: 0 00:35:04.989 Relative Write Latency: 0 00:35:04.989 Idle Power: Not Reported 00:35:04.989 Active Power: Not Reported 00:35:04.989 Non-Operational Permissive Mode: Not Supported 00:35:04.989 00:35:04.989 Health Information 00:35:04.989 ================== 00:35:04.989 Critical Warnings: 00:35:04.989 Available Spare Space: OK 00:35:04.989 Temperature: OK 00:35:04.989 Device Reliability: OK 00:35:04.989 Read Only: No 00:35:04.989 Volatile Memory Backup: OK 00:35:04.989 Current Temperature: 323 Kelvin (50 Celsius) 00:35:04.989 Temperature Threshold: 343 Kelvin (70 Celsius) 00:35:04.989 Available Spare: 0% 00:35:04.989 Available Spare Threshold: 0% 00:35:04.989 Life Percentage Used: 0% 00:35:04.989 Data Units Read: 4694 00:35:04.990 Data Units Written: 4342 00:35:04.990 Host Read Commands: 226023 00:35:04.990 Host Write Commands: 238787 00:35:04.990 Controller Busy Time: 0 minutes 00:35:04.990 Power Cycles: 0 00:35:04.990 Power On Hours: 0 hours 00:35:04.990 Unsafe Shutdowns: 0 00:35:04.990 Unrecoverable Media Errors: 0 00:35:04.990 Lifetime Error Log Entries: 0 00:35:04.990 Warning Temperature Time: 0 minutes 00:35:04.990 Critical Temperature Time: 0 minutes 00:35:04.990 00:35:04.990 Number of Queues 00:35:04.990 ================ 00:35:04.990 Number of I/O Submission Queues: 64 00:35:04.990 Number of I/O Completion Queues: 64 00:35:04.990 00:35:04.990 ZNS Specific Controller Data 00:35:04.990 ============================ 00:35:04.990 Zone Append Size Limit: 0 00:35:04.990 00:35:04.990 00:35:04.990 Active Namespaces 00:35:04.990 ================= 00:35:04.990 Namespace 
ID:1 00:35:04.990 Error Recovery Timeout: Unlimited 00:35:04.990 Command Set Identifier: NVM (00h) 00:35:04.990 Deallocate: Supported 00:35:04.990 Deallocated/Unwritten Error: Supported 00:35:04.990 Deallocated Read Value: All 0x00 00:35:04.990 Deallocate in Write Zeroes: Not Supported 00:35:04.990 Deallocated Guard Field: 0xFFFF 00:35:04.990 Flush: Supported 00:35:04.990 Reservation: Not Supported 00:35:04.990 Namespace Sharing Capabilities: Private 00:35:04.990 Size (in LBAs): 1310720 (5GiB) 00:35:04.990 Capacity (in LBAs): 1310720 (5GiB) 00:35:04.990 Utilization (in LBAs): 1310720 (5GiB) 00:35:04.990 Thin Provisioning: Not Supported 00:35:04.990 Per-NS Atomic Units: No 00:35:04.990 Maximum Single Source Range Length: 128 00:35:04.990 Maximum Copy Length: 128 00:35:04.990 Maximum Source Range Count: 128 00:35:04.990 NGUID/EUI64 Never Reused: No 00:35:04.990 Namespace Write Protected: No 00:35:04.990 Number of LBA Formats: 8 00:35:04.990 Current LBA Format: LBA Format #04 00:35:04.990 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:04.990 LBA Format #01: Data Size: 512 Metadata Size: 8 00:35:04.990 LBA Format #02: Data Size: 512 Metadata Size: 16 00:35:04.990 LBA Format #03: Data Size: 512 Metadata Size: 64 00:35:04.990 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:35:04.990 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:35:04.990 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:35:04.990 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:35:04.990 00:35:04.990 NVM Specific Namespace Data 00:35:04.990 =========================== 00:35:04.990 Logical Block Storage Tag Mask: 0 00:35:04.990 Protection Information Capabilities: 00:35:04.990 16b Guard Protection Information Storage Tag Support: No 00:35:04.990 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:35:04.990 Storage Tag Check Read Support: No 00:35:04.990 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.990 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.990 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.990 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.990 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.990 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.990 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.990 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:35:04.990 ************************************ 00:35:04.990 END TEST nvme_identify 00:35:04.990 ************************************ 00:35:04.990 00:35:04.990 real 0m0.645s 00:35:04.990 user 0m0.217s 00:35:04.990 sys 0m0.362s 00:35:04.990 15:29:00 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:04.990 15:29:00 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:35:04.990 15:29:00 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:04.990 15:29:00 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:35:04.990 15:29:00 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:04.990 15:29:00 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:04.990 15:29:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:04.990 
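The perf stage that follows drives the same controller with spdk_nvme_perf at queue depth 128, 12288-byte reads, for one second, with latency tracking enabled; the summary table and the cumulative latency histogram below come from that run. A stand-alone sketch of an equivalent invocation, pinned to the controller used throughout this log (the -r transport filter is added here for illustration; the remaining flags are copied from the command line visible below):

# Sketch; assumes the same QEMU NVMe controller at 0000:00:10.0 as above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:PCIe traddr:0000:00:10.0' \
    -q 128 -w read -o 12288 -t 1 -LL -i 0 -N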
************************************ 00:35:04.990 START TEST nvme_perf 00:35:04.990 ************************************ 00:35:04.990 15:29:00 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:35:04.990 15:29:00 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:35:06.367 Initializing NVMe Controllers 00:35:06.367 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:06.367 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:35:06.367 Initialization complete. Launching workers. 00:35:06.367 ======================================================== 00:35:06.367 Latency(us) 00:35:06.367 Device Information : IOPS MiB/s Average min max 00:35:06.367 PCIE (0000:00:10.0) NSID 1 from core 0: 84546.55 990.78 1512.50 670.70 6735.69 00:35:06.367 ======================================================== 00:35:06.367 Total : 84546.55 990.78 1512.50 670.70 6735.69 00:35:06.367 00:35:06.367 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:35:06.367 ================================================================================= 00:35:06.367 1.00000% : 881.615us 00:35:06.367 10.00000% : 1053.257us 00:35:06.367 25.00000% : 1224.899us 00:35:06.367 50.00000% : 1490.164us 00:35:06.367 75.00000% : 1755.429us 00:35:06.367 90.00000% : 1997.288us 00:35:06.367 95.00000% : 2106.514us 00:35:06.367 98.00000% : 2262.552us 00:35:06.367 99.00000% : 2465.402us 00:35:06.367 99.50000% : 2808.686us 00:35:06.367 99.90000% : 4462.690us 00:35:06.367 99.99000% : 6553.600us 00:35:06.367 99.99900% : 6740.846us 00:35:06.367 99.99990% : 6740.846us 00:35:06.367 99.99999% : 6740.846us 00:35:06.367 00:35:06.367 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:35:06.367 ============================================================================== 00:35:06.367 Range in us Cumulative IO count 00:35:06.367 667.063 - 670.964: 0.0012% ( 1) 00:35:06.367 670.964 - 674.865: 0.0024% ( 1) 00:35:06.367 674.865 - 678.766: 0.0071% ( 4) 00:35:06.367 682.667 - 686.568: 0.0118% ( 4) 00:35:06.367 690.469 - 694.370: 0.0142% ( 2) 00:35:06.367 694.370 - 698.270: 0.0165% ( 2) 00:35:06.367 698.270 - 702.171: 0.0236% ( 6) 00:35:06.367 702.171 - 706.072: 0.0307% ( 6) 00:35:06.367 706.072 - 709.973: 0.0331% ( 2) 00:35:06.367 709.973 - 713.874: 0.0402% ( 6) 00:35:06.367 713.874 - 717.775: 0.0461% ( 5) 00:35:06.367 717.775 - 721.676: 0.0532% ( 6) 00:35:06.367 721.676 - 725.577: 0.0579% ( 4) 00:35:06.367 725.577 - 729.478: 0.0638% ( 5) 00:35:06.367 729.478 - 733.379: 0.0709% ( 6) 00:35:06.367 733.379 - 737.280: 0.0768% ( 5) 00:35:06.367 737.280 - 741.181: 0.0816% ( 4) 00:35:06.367 741.181 - 745.082: 0.0946% ( 11) 00:35:06.367 745.082 - 748.983: 0.1028% ( 7) 00:35:06.367 748.983 - 752.884: 0.1135% ( 9) 00:35:06.367 752.884 - 756.785: 0.1229% ( 8) 00:35:06.367 756.785 - 760.686: 0.1324% ( 8) 00:35:06.367 760.686 - 764.587: 0.1513% ( 16) 00:35:06.367 764.587 - 768.488: 0.1631% ( 10) 00:35:06.367 768.488 - 772.389: 0.1749% ( 10) 00:35:06.367 772.389 - 776.290: 0.1867% ( 10) 00:35:06.367 776.290 - 780.190: 0.2045% ( 15) 00:35:06.367 780.190 - 784.091: 0.2222% ( 15) 00:35:06.367 784.091 - 787.992: 0.2364% ( 12) 00:35:06.367 787.992 - 791.893: 0.2541% ( 15) 00:35:06.367 791.893 - 795.794: 0.2718% ( 15) 00:35:06.367 795.794 - 799.695: 0.2872% ( 13) 00:35:06.367 799.695 - 803.596: 0.3026% ( 13) 00:35:06.367 803.596 - 807.497: 0.3227% ( 17) 00:35:06.367 807.497 - 811.398: 0.3475% ( 21) 00:35:06.367 811.398 - 815.299: 0.3723% ( 
21) 00:35:06.367 815.299 - 819.200: 0.3971% ( 21) 00:35:06.367 819.200 - 823.101: 0.4137% ( 14) 00:35:06.367 823.101 - 827.002: 0.4338% ( 17) 00:35:06.367 827.002 - 830.903: 0.4657% ( 27) 00:35:06.367 830.903 - 834.804: 0.4917% ( 22) 00:35:06.367 834.804 - 838.705: 0.5165% ( 21) 00:35:06.367 838.705 - 842.606: 0.5425% ( 22) 00:35:06.367 842.606 - 846.507: 0.5815% ( 33) 00:35:06.367 846.507 - 850.408: 0.6241% ( 36) 00:35:06.367 850.408 - 854.309: 0.6501% ( 22) 00:35:06.367 854.309 - 858.210: 0.6843% ( 29) 00:35:06.367 858.210 - 862.110: 0.7257% ( 35) 00:35:06.367 862.110 - 866.011: 0.7730% ( 40) 00:35:06.367 866.011 - 869.912: 0.8191% ( 39) 00:35:06.367 869.912 - 873.813: 0.8770% ( 49) 00:35:06.367 873.813 - 877.714: 0.9420% ( 55) 00:35:06.367 877.714 - 881.615: 1.0165% ( 63) 00:35:06.367 881.615 - 885.516: 1.0921% ( 64) 00:35:06.367 885.516 - 889.417: 1.1678% ( 64) 00:35:06.367 889.417 - 893.318: 1.2635% ( 81) 00:35:06.367 893.318 - 897.219: 1.3699% ( 90) 00:35:06.367 897.219 - 901.120: 1.4892% ( 101) 00:35:06.367 901.120 - 905.021: 1.6039% ( 97) 00:35:06.367 905.021 - 908.922: 1.7162% ( 95) 00:35:06.367 908.922 - 912.823: 1.8485% ( 112) 00:35:06.367 912.823 - 916.724: 1.9987% ( 127) 00:35:06.367 916.724 - 920.625: 2.1488% ( 127) 00:35:06.367 920.625 - 924.526: 2.2918% ( 121) 00:35:06.367 924.526 - 928.427: 2.4620% ( 144) 00:35:06.367 928.427 - 932.328: 2.6204% ( 134) 00:35:06.367 932.328 - 936.229: 2.7858% ( 140) 00:35:06.367 936.229 - 940.130: 2.9761% ( 161) 00:35:06.367 940.130 - 944.030: 3.1664% ( 161) 00:35:06.367 944.030 - 947.931: 3.3638% ( 167) 00:35:06.367 947.931 - 951.832: 3.5765% ( 180) 00:35:06.367 951.832 - 955.733: 3.7739% ( 167) 00:35:06.367 955.733 - 959.634: 3.9548% ( 153) 00:35:06.367 959.634 - 963.535: 4.1805% ( 191) 00:35:06.367 963.535 - 967.436: 4.4027% ( 188) 00:35:06.367 967.436 - 971.337: 4.6214% ( 185) 00:35:06.367 971.337 - 975.238: 4.8483% ( 192) 00:35:06.367 975.238 - 979.139: 5.0729% ( 190) 00:35:06.367 979.139 - 983.040: 5.3435% ( 229) 00:35:06.367 983.040 - 986.941: 5.5799% ( 200) 00:35:06.367 986.941 - 990.842: 5.8317% ( 213) 00:35:06.367 990.842 - 994.743: 6.0551% ( 189) 00:35:06.367 994.743 - 998.644: 6.3186% ( 223) 00:35:06.367 998.644 - 1006.446: 6.8789% ( 474) 00:35:06.367 1006.446 - 1014.248: 7.3954% ( 437) 00:35:06.367 1014.248 - 1022.050: 7.9781% ( 493) 00:35:06.367 1022.050 - 1029.851: 8.5537% ( 487) 00:35:06.367 1029.851 - 1037.653: 9.1186% ( 478) 00:35:06.367 1037.653 - 1045.455: 9.7581% ( 541) 00:35:06.367 1045.455 - 1053.257: 10.3289% ( 483) 00:35:06.367 1053.257 - 1061.059: 10.9518% ( 527) 00:35:06.367 1061.059 - 1068.861: 11.5806% ( 532) 00:35:06.367 1068.861 - 1076.663: 12.2106% ( 533) 00:35:06.367 1076.663 - 1084.465: 12.8784% ( 565) 00:35:06.367 1084.465 - 1092.267: 13.5544% ( 572) 00:35:06.367 1092.267 - 1100.069: 14.2376% ( 578) 00:35:06.367 1100.069 - 1107.870: 14.8912% ( 553) 00:35:06.367 1107.870 - 1115.672: 15.5921% ( 593) 00:35:06.367 1115.672 - 1123.474: 16.2622% ( 567) 00:35:06.367 1123.474 - 1131.276: 16.9679% ( 597) 00:35:06.367 1131.276 - 1139.078: 17.6676% ( 592) 00:35:06.367 1139.078 - 1146.880: 18.3744% ( 598) 00:35:06.367 1146.880 - 1154.682: 19.1083% ( 621) 00:35:06.367 1154.682 - 1162.484: 19.7809% ( 569) 00:35:06.367 1162.484 - 1170.286: 20.5113% ( 618) 00:35:06.367 1170.286 - 1178.088: 21.2039% ( 586) 00:35:06.367 1178.088 - 1185.890: 21.9379% ( 621) 00:35:06.367 1185.890 - 1193.691: 22.6636% ( 614) 00:35:06.367 1193.691 - 1201.493: 23.3574% ( 587) 00:35:06.367 1201.493 - 1209.295: 24.1198% ( 645) 00:35:06.367 1209.295 
- 1217.097: 24.7805% ( 559) 00:35:06.367 1217.097 - 1224.899: 25.5759% ( 673) 00:35:06.367 1224.899 - 1232.701: 26.2461% ( 567) 00:35:06.367 1232.701 - 1240.503: 26.9895% ( 629) 00:35:06.367 1240.503 - 1248.305: 27.6691% ( 575) 00:35:06.367 1248.305 - 1256.107: 28.4315% ( 645) 00:35:06.367 1256.107 - 1263.909: 29.1583% ( 615) 00:35:06.367 1263.909 - 1271.710: 29.8734% ( 605) 00:35:06.367 1271.710 - 1279.512: 30.6299% ( 640) 00:35:06.367 1279.512 - 1287.314: 31.3225% ( 586) 00:35:06.367 1287.314 - 1295.116: 32.0789% ( 640) 00:35:06.367 1295.116 - 1302.918: 32.7668% ( 582) 00:35:06.367 1302.918 - 1310.720: 33.5457% ( 659) 00:35:06.367 1310.720 - 1318.522: 34.2288% ( 578) 00:35:06.367 1318.522 - 1326.324: 34.9286% ( 592) 00:35:06.367 1326.324 - 1334.126: 35.6838% ( 639) 00:35:06.367 1334.126 - 1341.928: 36.3965% ( 603) 00:35:06.367 1341.928 - 1349.730: 37.1459% ( 634) 00:35:06.367 1349.730 - 1357.531: 37.8420% ( 589) 00:35:06.367 1357.531 - 1365.333: 38.5996% ( 641) 00:35:06.367 1365.333 - 1373.135: 39.3194% ( 609) 00:35:06.367 1373.135 - 1380.937: 40.0274% ( 599) 00:35:06.367 1380.937 - 1388.739: 40.7531% ( 614) 00:35:06.367 1388.739 - 1396.541: 41.5072% ( 638) 00:35:06.367 1396.541 - 1404.343: 42.2140% ( 598) 00:35:06.367 1404.343 - 1412.145: 42.9232% ( 600) 00:35:06.367 1412.145 - 1419.947: 43.6666% ( 629) 00:35:06.367 1419.947 - 1427.749: 44.3899% ( 612) 00:35:06.367 1427.749 - 1435.550: 45.1310% ( 627) 00:35:06.367 1435.550 - 1443.352: 45.8615% ( 618) 00:35:06.367 1443.352 - 1451.154: 46.5907% ( 617) 00:35:06.367 1451.154 - 1458.956: 47.3377% ( 632) 00:35:06.367 1458.956 - 1466.758: 48.0575% ( 609) 00:35:06.367 1466.758 - 1474.560: 48.7950% ( 624) 00:35:06.367 1474.560 - 1482.362: 49.5266% ( 619) 00:35:06.367 1482.362 - 1490.164: 50.2878% ( 644) 00:35:06.367 1490.164 - 1497.966: 50.9911% ( 595) 00:35:06.367 1497.966 - 1505.768: 51.7593% ( 650) 00:35:06.367 1505.768 - 1513.570: 52.5027% ( 629) 00:35:06.367 1513.570 - 1521.371: 53.2497% ( 632) 00:35:06.367 1521.371 - 1529.173: 53.9896% ( 626) 00:35:06.367 1529.173 - 1536.975: 54.7248% ( 622) 00:35:06.367 1536.975 - 1544.777: 55.4848% ( 643) 00:35:06.367 1544.777 - 1552.579: 56.2105% ( 614) 00:35:06.367 1552.579 - 1560.381: 56.9634% ( 637) 00:35:06.367 1560.381 - 1568.183: 57.6914% ( 616) 00:35:06.367 1568.183 - 1575.985: 58.4526% ( 644) 00:35:06.367 1575.985 - 1583.787: 59.1866% ( 621) 00:35:06.367 1583.787 - 1591.589: 59.9158% ( 617) 00:35:06.367 1591.589 - 1599.390: 60.6711% ( 639) 00:35:06.367 1599.390 - 1607.192: 61.3980% ( 615) 00:35:06.367 1607.192 - 1614.994: 62.1237% ( 614) 00:35:06.367 1614.994 - 1622.796: 62.8482% ( 613) 00:35:06.367 1622.796 - 1630.598: 63.5716% ( 612) 00:35:06.367 1630.598 - 1638.400: 64.3103% ( 625) 00:35:06.367 1638.400 - 1646.202: 65.0360% ( 614) 00:35:06.367 1646.202 - 1654.004: 65.7759% ( 626) 00:35:06.367 1654.004 - 1661.806: 66.4957% ( 609) 00:35:06.367 1661.806 - 1669.608: 67.2167% ( 610) 00:35:06.367 1669.608 - 1677.410: 67.9388% ( 611) 00:35:06.367 1677.410 - 1685.211: 68.6740% ( 622) 00:35:06.367 1685.211 - 1693.013: 69.3701% ( 589) 00:35:06.367 1693.013 - 1700.815: 70.1195% ( 634) 00:35:06.367 1700.815 - 1708.617: 70.8074% ( 582) 00:35:06.367 1708.617 - 1716.419: 71.5461% ( 625) 00:35:06.367 1716.419 - 1724.221: 72.2517% ( 597) 00:35:06.367 1724.221 - 1732.023: 72.9869% ( 622) 00:35:06.367 1732.023 - 1739.825: 73.6889% ( 594) 00:35:06.367 1739.825 - 1747.627: 74.3898% ( 593) 00:35:06.367 1747.627 - 1755.429: 75.0718% ( 577) 00:35:06.367 1755.429 - 1763.230: 75.7301% ( 557) 00:35:06.367 1763.230 - 
1771.032: 76.4216% ( 585) 00:35:06.367 1771.032 - 1778.834: 77.0362% ( 520) 00:35:06.367 1778.834 - 1786.636: 77.6839% ( 548) 00:35:06.367 1786.636 - 1794.438: 78.2630% ( 490) 00:35:06.367 1794.438 - 1802.240: 78.8623% ( 507) 00:35:06.367 1802.240 - 1810.042: 79.4142% ( 467) 00:35:06.367 1810.042 - 1817.844: 79.9556% ( 458) 00:35:06.367 1817.844 - 1825.646: 80.5075% ( 467) 00:35:06.367 1825.646 - 1833.448: 80.9720% ( 393) 00:35:06.367 1833.448 - 1841.250: 81.5346% ( 476) 00:35:06.367 1841.250 - 1849.051: 81.9909% ( 386) 00:35:06.367 1849.051 - 1856.853: 82.4873% ( 420) 00:35:06.367 1856.853 - 1864.655: 82.9423% ( 385) 00:35:06.367 1864.655 - 1872.457: 83.4104% ( 396) 00:35:06.367 1872.457 - 1880.259: 83.8760% ( 394) 00:35:06.367 1880.259 - 1888.061: 84.3287% ( 383) 00:35:06.367 1888.061 - 1895.863: 84.7873% ( 388) 00:35:06.367 1895.863 - 1903.665: 85.2128% ( 360) 00:35:06.367 1903.665 - 1911.467: 85.6655% ( 383) 00:35:06.367 1911.467 - 1919.269: 86.0650% ( 338) 00:35:06.367 1919.269 - 1927.070: 86.4905% ( 360) 00:35:06.367 1927.070 - 1934.872: 86.8876% ( 336) 00:35:06.367 1934.872 - 1942.674: 87.2942% ( 344) 00:35:06.367 1942.674 - 1950.476: 87.7067% ( 349) 00:35:06.367 1950.476 - 1958.278: 88.1003% ( 333) 00:35:06.367 1958.278 - 1966.080: 88.5104% ( 347) 00:35:06.367 1966.080 - 1973.882: 88.8886% ( 320) 00:35:06.367 1973.882 - 1981.684: 89.2751% ( 327) 00:35:06.367 1981.684 - 1989.486: 89.6616% ( 327) 00:35:06.367 1989.486 - 1997.288: 90.0351% ( 316) 00:35:06.367 1997.288 - 2012.891: 90.8140% ( 659) 00:35:06.367 2012.891 - 2028.495: 91.5633% ( 634) 00:35:06.367 2028.495 - 2044.099: 92.2914% ( 616) 00:35:06.367 2044.099 - 2059.703: 93.0396% ( 633) 00:35:06.367 2059.703 - 2075.307: 93.7535% ( 604) 00:35:06.367 2075.307 - 2090.910: 94.4236% ( 567) 00:35:06.367 2090.910 - 2106.514: 95.0229% ( 507) 00:35:06.367 2106.514 - 2122.118: 95.5654% ( 459) 00:35:06.367 2122.118 - 2137.722: 96.0240% ( 388) 00:35:06.367 2137.722 - 2153.326: 96.4329% ( 346) 00:35:06.367 2153.326 - 2168.930: 96.8029% ( 313) 00:35:06.367 2168.930 - 2184.533: 97.1019% ( 253) 00:35:06.367 2184.533 - 2200.137: 97.3879% ( 242) 00:35:06.367 2200.137 - 2215.741: 97.6279% ( 203) 00:35:06.367 2215.741 - 2231.345: 97.8170% ( 160) 00:35:06.367 2231.345 - 2246.949: 97.9836% ( 141) 00:35:06.367 2246.949 - 2262.552: 98.1302% ( 124) 00:35:06.367 2262.552 - 2278.156: 98.2496% ( 101) 00:35:06.367 2278.156 - 2293.760: 98.3453% ( 81) 00:35:06.367 2293.760 - 2309.364: 98.4363% ( 77) 00:35:06.367 2309.364 - 2324.968: 98.5155% ( 67) 00:35:06.367 2324.968 - 2340.571: 98.5864% ( 60) 00:35:06.367 2340.571 - 2356.175: 98.6561% ( 59) 00:35:06.367 2356.175 - 2371.779: 98.7176% ( 52) 00:35:06.367 2371.779 - 2387.383: 98.7826% ( 55) 00:35:06.367 2387.383 - 2402.987: 98.8405% ( 49) 00:35:06.367 2402.987 - 2418.590: 98.8866% ( 39) 00:35:06.367 2418.590 - 2434.194: 98.9339% ( 40) 00:35:06.367 2434.194 - 2449.798: 98.9729% ( 33) 00:35:06.367 2449.798 - 2465.402: 99.0107% ( 32) 00:35:06.367 2465.402 - 2481.006: 99.0403% ( 25) 00:35:06.367 2481.006 - 2496.610: 99.0757% ( 30) 00:35:06.367 2496.610 - 2512.213: 99.1100% ( 29) 00:35:06.367 2512.213 - 2527.817: 99.1443% ( 29) 00:35:06.367 2527.817 - 2543.421: 99.1750% ( 26) 00:35:06.367 2543.421 - 2559.025: 99.2093% ( 29) 00:35:06.367 2559.025 - 2574.629: 99.2377% ( 24) 00:35:06.367 2574.629 - 2590.232: 99.2719% ( 29) 00:35:06.367 2590.232 - 2605.836: 99.2991% ( 23) 00:35:06.367 2605.836 - 2621.440: 99.3298% ( 26) 00:35:06.367 2621.440 - 2637.044: 99.3499% ( 17) 00:35:06.367 2637.044 - 2652.648: 99.3736% ( 20) 
00:35:06.367 2652.648 - 2668.251: 99.3913% ( 15) 00:35:06.367 2668.251 - 2683.855: 99.4090% ( 15) 00:35:06.367 2683.855 - 2699.459: 99.4268% ( 15) 00:35:06.367 2699.459 - 2715.063: 99.4457% ( 16) 00:35:06.367 2715.063 - 2730.667: 99.4599% ( 12) 00:35:06.367 2730.667 - 2746.270: 99.4729% ( 11) 00:35:06.367 2746.270 - 2761.874: 99.4811% ( 7) 00:35:06.367 2761.874 - 2777.478: 99.4882% ( 6) 00:35:06.367 2777.478 - 2793.082: 99.4953% ( 6) 00:35:06.367 2793.082 - 2808.686: 99.5048% ( 8) 00:35:06.367 2808.686 - 2824.290: 99.5119% ( 6) 00:35:06.367 2824.290 - 2839.893: 99.5201% ( 7) 00:35:06.367 2839.893 - 2855.497: 99.5249% ( 4) 00:35:06.367 2855.497 - 2871.101: 99.5320% ( 6) 00:35:06.367 2871.101 - 2886.705: 99.5379% ( 5) 00:35:06.367 2886.705 - 2902.309: 99.5438% ( 5) 00:35:06.367 2902.309 - 2917.912: 99.5497% ( 5) 00:35:06.367 2917.912 - 2933.516: 99.5556% ( 5) 00:35:06.367 2933.516 - 2949.120: 99.5615% ( 5) 00:35:06.367 2949.120 - 2964.724: 99.5674% ( 5) 00:35:06.367 2964.724 - 2980.328: 99.5721% ( 4) 00:35:06.367 2980.328 - 2995.931: 99.5792% ( 6) 00:35:06.367 2995.931 - 3011.535: 99.5828% ( 3) 00:35:06.367 3011.535 - 3027.139: 99.5887% ( 5) 00:35:06.367 3027.139 - 3042.743: 99.5946% ( 5) 00:35:06.367 3042.743 - 3058.347: 99.6005% ( 5) 00:35:06.367 3058.347 - 3073.950: 99.6052% ( 4) 00:35:06.367 3073.950 - 3089.554: 99.6111% ( 5) 00:35:06.367 3089.554 - 3105.158: 99.6135% ( 2) 00:35:06.368 3105.158 - 3120.762: 99.6182% ( 4) 00:35:06.368 3120.762 - 3136.366: 99.6241% ( 5) 00:35:06.368 3136.366 - 3151.970: 99.6312% ( 6) 00:35:06.368 3151.970 - 3167.573: 99.6360% ( 4) 00:35:06.368 3167.573 - 3183.177: 99.6431% ( 6) 00:35:06.368 3183.177 - 3198.781: 99.6490% ( 5) 00:35:06.368 3198.781 - 3214.385: 99.6549% ( 5) 00:35:06.368 3214.385 - 3229.989: 99.6596% ( 4) 00:35:06.368 3229.989 - 3245.592: 99.6643% ( 4) 00:35:06.368 3245.592 - 3261.196: 99.6702% ( 5) 00:35:06.368 3261.196 - 3276.800: 99.6761% ( 5) 00:35:06.368 3276.800 - 3292.404: 99.6832% ( 6) 00:35:06.368 3292.404 - 3308.008: 99.6892% ( 5) 00:35:06.368 3308.008 - 3323.611: 99.6951% ( 5) 00:35:06.368 3323.611 - 3339.215: 99.7022% ( 6) 00:35:06.368 3339.215 - 3354.819: 99.7057% ( 3) 00:35:06.368 3354.819 - 3370.423: 99.7116% ( 5) 00:35:06.368 3370.423 - 3386.027: 99.7140% ( 2) 00:35:06.368 3386.027 - 3401.630: 99.7199% ( 5) 00:35:06.368 3401.630 - 3417.234: 99.7246% ( 4) 00:35:06.368 3417.234 - 3432.838: 99.7293% ( 4) 00:35:06.368 3432.838 - 3448.442: 99.7329% ( 3) 00:35:06.368 3448.442 - 3464.046: 99.7388% ( 5) 00:35:06.368 3464.046 - 3479.650: 99.7435% ( 4) 00:35:06.368 3479.650 - 3495.253: 99.7482% ( 4) 00:35:06.368 3495.253 - 3510.857: 99.7518% ( 3) 00:35:06.368 3510.857 - 3526.461: 99.7565% ( 4) 00:35:06.368 3526.461 - 3542.065: 99.7624% ( 5) 00:35:06.368 3542.065 - 3557.669: 99.7672% ( 4) 00:35:06.368 3557.669 - 3573.272: 99.7695% ( 2) 00:35:06.368 3573.272 - 3588.876: 99.7731% ( 3) 00:35:06.368 3588.876 - 3604.480: 99.7754% ( 2) 00:35:06.368 3604.480 - 3620.084: 99.7802% ( 4) 00:35:06.368 3620.084 - 3635.688: 99.7849% ( 4) 00:35:06.368 3635.688 - 3651.291: 99.7884% ( 3) 00:35:06.368 3651.291 - 3666.895: 99.7932% ( 4) 00:35:06.368 3666.895 - 3682.499: 99.7979% ( 4) 00:35:06.368 3682.499 - 3698.103: 99.8003% ( 2) 00:35:06.368 3698.103 - 3713.707: 99.8026% ( 2) 00:35:06.368 3713.707 - 3729.310: 99.8050% ( 2) 00:35:06.368 3729.310 - 3744.914: 99.8085% ( 3) 00:35:06.368 3744.914 - 3760.518: 99.8121% ( 3) 00:35:06.368 3760.518 - 3776.122: 99.8133% ( 1) 00:35:06.368 3776.122 - 3791.726: 99.8168% ( 3) 00:35:06.368 3791.726 - 3807.330: 99.8203% ( 
3) 00:35:06.368 3807.330 - 3822.933: 99.8227% ( 2) 00:35:06.368 3822.933 - 3838.537: 99.8263% ( 3) 00:35:06.368 3838.537 - 3854.141: 99.8298% ( 3) 00:35:06.368 3854.141 - 3869.745: 99.8322% ( 2) 00:35:06.368 3869.745 - 3885.349: 99.8357% ( 3) 00:35:06.368 3885.349 - 3900.952: 99.8381% ( 2) 00:35:06.368 3900.952 - 3916.556: 99.8404% ( 2) 00:35:06.368 3916.556 - 3932.160: 99.8440% ( 3) 00:35:06.368 3932.160 - 3947.764: 99.8452% ( 1) 00:35:06.368 3947.764 - 3963.368: 99.8487% ( 3) 00:35:06.368 3963.368 - 3978.971: 99.8523% ( 3) 00:35:06.368 3978.971 - 3994.575: 99.8534% ( 1) 00:35:06.368 3994.575 - 4025.783: 99.8570% ( 3) 00:35:06.368 4025.783 - 4056.990: 99.8617% ( 4) 00:35:06.368 4056.990 - 4088.198: 99.8641% ( 2) 00:35:06.368 4088.198 - 4119.406: 99.8688% ( 4) 00:35:06.368 4119.406 - 4150.613: 99.8724% ( 3) 00:35:06.368 4150.613 - 4181.821: 99.8759% ( 3) 00:35:06.368 4181.821 - 4213.029: 99.8794% ( 3) 00:35:06.368 4213.029 - 4244.236: 99.8830% ( 3) 00:35:06.368 4244.236 - 4275.444: 99.8865% ( 3) 00:35:06.368 4275.444 - 4306.651: 99.8913% ( 4) 00:35:06.368 4306.651 - 4337.859: 99.8948% ( 3) 00:35:06.368 4337.859 - 4369.067: 99.8960% ( 1) 00:35:06.368 4369.067 - 4400.274: 99.8972% ( 1) 00:35:06.368 4400.274 - 4431.482: 99.8995% ( 2) 00:35:06.368 4431.482 - 4462.690: 99.9007% ( 1) 00:35:06.368 4462.690 - 4493.897: 99.9031% ( 2) 00:35:06.368 4493.897 - 4525.105: 99.9043% ( 1) 00:35:06.368 4525.105 - 4556.312: 99.9066% ( 2) 00:35:06.368 4556.312 - 4587.520: 99.9078% ( 1) 00:35:06.368 4587.520 - 4618.728: 99.9090% ( 1) 00:35:06.368 4618.728 - 4649.935: 99.9102% ( 1) 00:35:06.368 4649.935 - 4681.143: 99.9114% ( 1) 00:35:06.368 4681.143 - 4712.350: 99.9137% ( 2) 00:35:06.368 4712.350 - 4743.558: 99.9149% ( 1) 00:35:06.368 4743.558 - 4774.766: 99.9173% ( 2) 00:35:06.368 4774.766 - 4805.973: 99.9184% ( 1) 00:35:06.368 4805.973 - 4837.181: 99.9208% ( 2) 00:35:06.368 4837.181 - 4868.389: 99.9220% ( 1) 00:35:06.368 4868.389 - 4899.596: 99.9232% ( 1) 00:35:06.368 4899.596 - 4930.804: 99.9244% ( 1) 00:35:06.368 4930.804 - 4962.011: 99.9267% ( 2) 00:35:06.368 4962.011 - 4993.219: 99.9279% ( 1) 00:35:06.368 4993.219 - 5024.427: 99.9303% ( 2) 00:35:06.368 5024.427 - 5055.634: 99.9326% ( 2) 00:35:06.368 5055.634 - 5086.842: 99.9338% ( 1) 00:35:06.368 5086.842 - 5118.050: 99.9350% ( 1) 00:35:06.368 5118.050 - 5149.257: 99.9374% ( 2) 00:35:06.368 5149.257 - 5180.465: 99.9385% ( 1) 00:35:06.368 5180.465 - 5211.672: 99.9409% ( 2) 00:35:06.368 5211.672 - 5242.880: 99.9421% ( 1) 00:35:06.368 5242.880 - 5274.088: 99.9444% ( 2) 00:35:06.368 5274.088 - 5305.295: 99.9456% ( 1) 00:35:06.368 5305.295 - 5336.503: 99.9480% ( 2) 00:35:06.368 5336.503 - 5367.710: 99.9492% ( 1) 00:35:06.368 5367.710 - 5398.918: 99.9504% ( 1) 00:35:06.368 5398.918 - 5430.126: 99.9515% ( 1) 00:35:06.368 5430.126 - 5461.333: 99.9539% ( 2) 00:35:06.368 5461.333 - 5492.541: 99.9551% ( 1) 00:35:06.368 5492.541 - 5523.749: 99.9563% ( 1) 00:35:06.368 5898.240 - 5929.448: 99.9575% ( 1) 00:35:06.368 5929.448 - 5960.655: 99.9586% ( 1) 00:35:06.368 5960.655 - 5991.863: 99.9610% ( 2) 00:35:06.368 5991.863 - 6023.070: 99.9622% ( 1) 00:35:06.368 6023.070 - 6054.278: 99.9645% ( 2) 00:35:06.368 6054.278 - 6085.486: 99.9669% ( 2) 00:35:06.368 6085.486 - 6116.693: 99.9681% ( 1) 00:35:06.368 6116.693 - 6147.901: 99.9693% ( 1) 00:35:06.368 6147.901 - 6179.109: 99.9716% ( 2) 00:35:06.368 6179.109 - 6210.316: 99.9728% ( 1) 00:35:06.368 6210.316 - 6241.524: 99.9740% ( 1) 00:35:06.368 6241.524 - 6272.731: 99.9764% ( 2) 00:35:06.368 6272.731 - 6303.939: 99.9775% ( 1) 
00:35:06.368 6303.939 - 6335.147: 99.9799% ( 2) 00:35:06.368 6335.147 - 6366.354: 99.9811% ( 1) 00:35:06.368 6366.354 - 6397.562: 99.9835% ( 2) 00:35:06.368 6397.562 - 6428.770: 99.9858% ( 2) 00:35:06.368 6459.977 - 6491.185: 99.9882% ( 2) 00:35:06.368 6491.185 - 6522.392: 99.9894% ( 1) 00:35:06.368 6522.392 - 6553.600: 99.9917% ( 2) 00:35:06.368 6553.600 - 6584.808: 99.9929% ( 1) 00:35:06.368 6584.808 - 6616.015: 99.9953% ( 2) 00:35:06.368 6616.015 - 6647.223: 99.9976% ( 2) 00:35:06.368 6647.223 - 6678.430: 99.9988% ( 1) 00:35:06.368 6709.638 - 6740.846: 100.0000% ( 1) 00:35:06.368 00:35:06.368 15:29:01 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:35:07.742 Initializing NVMe Controllers 00:35:07.743 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:07.743 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:35:07.743 Initialization complete. Launching workers. 00:35:07.743 ======================================================== 00:35:07.743 Latency(us) 00:35:07.743 Device Information : IOPS MiB/s Average min max 00:35:07.743 PCIE (0000:00:10.0) NSID 1 from core 0: 79531.26 932.01 1608.17 556.34 7312.49 00:35:07.743 ======================================================== 00:35:07.743 Total : 79531.26 932.01 1608.17 556.34 7312.49 00:35:07.743 00:35:07.743 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:35:07.743 ================================================================================= 00:35:07.743 1.00000% : 1029.851us 00:35:07.743 10.00000% : 1240.503us 00:35:07.743 25.00000% : 1357.531us 00:35:07.743 50.00000% : 1521.371us 00:35:07.743 75.00000% : 1778.834us 00:35:07.743 90.00000% : 2059.703us 00:35:07.743 95.00000% : 2246.949us 00:35:07.743 98.00000% : 2637.044us 00:35:07.743 99.00000% : 3089.554us 00:35:07.743 99.50000% : 3963.368us 00:35:07.743 99.90000% : 5118.050us 00:35:07.743 99.99000% : 5929.448us 00:35:07.743 99.99900% : 7333.790us 00:35:07.743 99.99990% : 7333.790us 00:35:07.743 99.99999% : 7333.790us 00:35:07.743 00:35:07.743 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:35:07.743 ============================================================================== 00:35:07.743 Range in us Cumulative IO count 00:35:07.743 553.935 - 557.836: 0.0013% ( 1) 00:35:07.743 557.836 - 561.737: 0.0025% ( 1) 00:35:07.743 573.440 - 577.341: 0.0038% ( 1) 00:35:07.743 592.945 - 596.846: 0.0050% ( 1) 00:35:07.743 604.648 - 608.549: 0.0063% ( 1) 00:35:07.743 608.549 - 612.450: 0.0075% ( 1) 00:35:07.743 612.450 - 616.350: 0.0101% ( 2) 00:35:07.743 620.251 - 624.152: 0.0113% ( 1) 00:35:07.743 631.954 - 635.855: 0.0126% ( 1) 00:35:07.743 635.855 - 639.756: 0.0138% ( 1) 00:35:07.743 639.756 - 643.657: 0.0151% ( 1) 00:35:07.743 663.162 - 667.063: 0.0176% ( 2) 00:35:07.743 667.063 - 670.964: 0.0201% ( 2) 00:35:07.743 670.964 - 674.865: 0.0214% ( 1) 00:35:07.743 674.865 - 678.766: 0.0226% ( 1) 00:35:07.743 678.766 - 682.667: 0.0251% ( 2) 00:35:07.743 682.667 - 686.568: 0.0264% ( 1) 00:35:07.743 690.469 - 694.370: 0.0276% ( 1) 00:35:07.743 694.370 - 698.270: 0.0302% ( 2) 00:35:07.743 698.270 - 702.171: 0.0327% ( 2) 00:35:07.743 702.171 - 706.072: 0.0339% ( 1) 00:35:07.743 709.973 - 713.874: 0.0352% ( 1) 00:35:07.743 713.874 - 717.775: 0.0377% ( 2) 00:35:07.743 721.676 - 725.577: 0.0415% ( 3) 00:35:07.743 725.577 - 729.478: 0.0427% ( 1) 00:35:07.743 733.379 - 737.280: 0.0440% ( 1) 00:35:07.743 737.280 - 741.181: 0.0465% ( 2) 00:35:07.743 745.082 - 748.983: 
0.0478% ( 1) 00:35:07.743 748.983 - 752.884: 0.0503% ( 2) 00:35:07.743 752.884 - 756.785: 0.0540% ( 3) 00:35:07.743 756.785 - 760.686: 0.0591% ( 4) 00:35:07.743 760.686 - 764.587: 0.0616% ( 2) 00:35:07.743 764.587 - 768.488: 0.0654% ( 3) 00:35:07.743 768.488 - 772.389: 0.0666% ( 1) 00:35:07.743 772.389 - 776.290: 0.0691% ( 2) 00:35:07.743 776.290 - 780.190: 0.0716% ( 2) 00:35:07.743 784.091 - 787.992: 0.0754% ( 3) 00:35:07.743 787.992 - 791.893: 0.0792% ( 3) 00:35:07.743 791.893 - 795.794: 0.0829% ( 3) 00:35:07.743 795.794 - 799.695: 0.0842% ( 1) 00:35:07.743 799.695 - 803.596: 0.0880% ( 3) 00:35:07.743 803.596 - 807.497: 0.0892% ( 1) 00:35:07.743 807.497 - 811.398: 0.0905% ( 1) 00:35:07.743 811.398 - 815.299: 0.0943% ( 3) 00:35:07.743 815.299 - 819.200: 0.0968% ( 2) 00:35:07.743 823.101 - 827.002: 0.1018% ( 4) 00:35:07.743 827.002 - 830.903: 0.1119% ( 8) 00:35:07.743 830.903 - 834.804: 0.1131% ( 1) 00:35:07.743 834.804 - 838.705: 0.1194% ( 5) 00:35:07.743 838.705 - 842.606: 0.1219% ( 2) 00:35:07.743 842.606 - 846.507: 0.1232% ( 1) 00:35:07.743 846.507 - 850.408: 0.1332% ( 8) 00:35:07.743 850.408 - 854.309: 0.1357% ( 2) 00:35:07.743 854.309 - 858.210: 0.1370% ( 1) 00:35:07.743 858.210 - 862.110: 0.1445% ( 6) 00:35:07.743 862.110 - 866.011: 0.1496% ( 4) 00:35:07.743 866.011 - 869.912: 0.1533% ( 3) 00:35:07.743 869.912 - 873.813: 0.1546% ( 1) 00:35:07.743 873.813 - 877.714: 0.1571% ( 2) 00:35:07.743 877.714 - 881.615: 0.1697% ( 10) 00:35:07.743 881.615 - 885.516: 0.1760% ( 5) 00:35:07.743 885.516 - 889.417: 0.1810% ( 4) 00:35:07.743 889.417 - 893.318: 0.1948% ( 11) 00:35:07.743 893.318 - 897.219: 0.2036% ( 7) 00:35:07.743 897.219 - 901.120: 0.2162% ( 10) 00:35:07.743 901.120 - 905.021: 0.2212% ( 4) 00:35:07.743 905.021 - 908.922: 0.2312% ( 8) 00:35:07.743 908.922 - 912.823: 0.2463% ( 12) 00:35:07.743 912.823 - 916.724: 0.2576% ( 9) 00:35:07.743 916.724 - 920.625: 0.2702% ( 10) 00:35:07.743 920.625 - 924.526: 0.2828% ( 10) 00:35:07.743 924.526 - 928.427: 0.2953% ( 10) 00:35:07.743 928.427 - 932.328: 0.3104% ( 12) 00:35:07.743 932.328 - 936.229: 0.3255% ( 12) 00:35:07.743 936.229 - 940.130: 0.3444% ( 15) 00:35:07.743 940.130 - 944.030: 0.3544% ( 8) 00:35:07.743 944.030 - 947.931: 0.3821% ( 22) 00:35:07.743 947.931 - 951.832: 0.3959% ( 11) 00:35:07.743 951.832 - 955.733: 0.4147% ( 15) 00:35:07.743 955.733 - 959.634: 0.4286% ( 11) 00:35:07.743 959.634 - 963.535: 0.4524% ( 19) 00:35:07.743 963.535 - 967.436: 0.4726% ( 16) 00:35:07.743 967.436 - 971.337: 0.5002% ( 22) 00:35:07.743 971.337 - 975.238: 0.5241% ( 19) 00:35:07.743 975.238 - 979.139: 0.5593% ( 28) 00:35:07.743 979.139 - 983.040: 0.5844% ( 20) 00:35:07.743 983.040 - 986.941: 0.6108% ( 21) 00:35:07.743 986.941 - 990.842: 0.6397% ( 23) 00:35:07.743 990.842 - 994.743: 0.6749% ( 28) 00:35:07.743 994.743 - 998.644: 0.7101% ( 28) 00:35:07.743 998.644 - 1006.446: 0.7855% ( 60) 00:35:07.743 1006.446 - 1014.248: 0.8672% ( 65) 00:35:07.743 1014.248 - 1022.050: 0.9665% ( 79) 00:35:07.743 1022.050 - 1029.851: 1.0557% ( 71) 00:35:07.743 1029.851 - 1037.653: 1.1562% ( 80) 00:35:07.743 1037.653 - 1045.455: 1.2869% ( 104) 00:35:07.743 1045.455 - 1053.257: 1.4001% ( 90) 00:35:07.743 1053.257 - 1061.059: 1.5333% ( 106) 00:35:07.743 1061.059 - 1068.861: 1.6879% ( 123) 00:35:07.743 1068.861 - 1076.663: 1.8324% ( 115) 00:35:07.743 1076.663 - 1084.465: 2.0033% ( 136) 00:35:07.743 1084.465 - 1092.267: 2.1830% ( 143) 00:35:07.743 1092.267 - 1100.069: 2.4030% ( 175) 00:35:07.743 1100.069 - 1107.870: 2.6141% ( 168) 00:35:07.743 1107.870 - 1115.672: 2.8642% ( 199) 
00:35:07.743 1115.672 - 1123.474: 3.0930% ( 182) 00:35:07.743 1123.474 - 1131.276: 3.3594% ( 212) 00:35:07.743 1131.276 - 1139.078: 3.7038% ( 274) 00:35:07.743 1139.078 - 1146.880: 4.0469% ( 273) 00:35:07.743 1146.880 - 1154.682: 4.4339% ( 308) 00:35:07.743 1154.682 - 1162.484: 4.8349% ( 319) 00:35:07.743 1162.484 - 1170.286: 5.2295% ( 314) 00:35:07.743 1170.286 - 1178.088: 5.6895% ( 366) 00:35:07.743 1178.088 - 1185.890: 6.2161% ( 419) 00:35:07.743 1185.890 - 1193.691: 6.7175% ( 399) 00:35:07.743 1193.691 - 1201.493: 7.2529% ( 426) 00:35:07.743 1201.493 - 1209.295: 7.8348% ( 463) 00:35:07.743 1209.295 - 1217.097: 8.4443% ( 485) 00:35:07.743 1217.097 - 1224.899: 9.1331% ( 548) 00:35:07.743 1224.899 - 1232.701: 9.8344% ( 558) 00:35:07.743 1232.701 - 1240.503: 10.5495% ( 569) 00:35:07.743 1240.503 - 1248.305: 11.2809% ( 582) 00:35:07.743 1248.305 - 1256.107: 12.0878% ( 642) 00:35:07.743 1256.107 - 1263.909: 12.9311% ( 671) 00:35:07.743 1263.909 - 1271.710: 13.7719% ( 669) 00:35:07.743 1271.710 - 1279.512: 14.7283% ( 761) 00:35:07.743 1279.512 - 1287.314: 15.7149% ( 785) 00:35:07.743 1287.314 - 1295.116: 16.5959% ( 701) 00:35:07.743 1295.116 - 1302.918: 17.5322% ( 745) 00:35:07.743 1302.918 - 1310.720: 18.5577% ( 816) 00:35:07.743 1310.720 - 1318.522: 19.5317% ( 775) 00:35:07.743 1318.522 - 1326.324: 20.6553% ( 894) 00:35:07.743 1326.324 - 1334.126: 21.7851% ( 899) 00:35:07.743 1334.126 - 1341.928: 22.8383% ( 838) 00:35:07.743 1341.928 - 1349.730: 23.9267% ( 866) 00:35:07.743 1349.730 - 1357.531: 25.1345% ( 961) 00:35:07.743 1357.531 - 1365.333: 26.2631% ( 898) 00:35:07.743 1365.333 - 1373.135: 27.5538% ( 1027) 00:35:07.743 1373.135 - 1380.937: 28.7389% ( 943) 00:35:07.743 1380.937 - 1388.739: 29.9631% ( 974) 00:35:07.743 1388.739 - 1396.541: 31.1608% ( 953) 00:35:07.743 1396.541 - 1404.343: 32.3748% ( 966) 00:35:07.743 1404.343 - 1412.145: 33.6366% ( 1004) 00:35:07.743 1412.145 - 1419.947: 34.8532% ( 968) 00:35:07.743 1419.947 - 1427.749: 36.0434% ( 947) 00:35:07.743 1427.749 - 1435.550: 37.3002% ( 1000) 00:35:07.743 1435.550 - 1443.352: 38.4891% ( 946) 00:35:07.743 1443.352 - 1451.154: 39.7308% ( 988) 00:35:07.743 1451.154 - 1458.956: 40.9285% ( 953) 00:35:07.743 1458.956 - 1466.758: 42.3173% ( 1105) 00:35:07.743 1466.758 - 1474.560: 43.5477% ( 979) 00:35:07.744 1474.560 - 1482.362: 44.7454% ( 953) 00:35:07.744 1482.362 - 1490.164: 45.8513% ( 880) 00:35:07.744 1490.164 - 1497.966: 47.0239% ( 933) 00:35:07.744 1497.966 - 1505.768: 48.1299% ( 880) 00:35:07.744 1505.768 - 1513.570: 49.3025% ( 933) 00:35:07.744 1513.570 - 1521.371: 50.3607% ( 842) 00:35:07.744 1521.371 - 1529.173: 51.5107% ( 915) 00:35:07.744 1529.173 - 1536.975: 52.5726% ( 845) 00:35:07.744 1536.975 - 1544.777: 53.6095% ( 825) 00:35:07.744 1544.777 - 1552.579: 54.5269% ( 730) 00:35:07.744 1552.579 - 1560.381: 55.4607% ( 743) 00:35:07.744 1560.381 - 1568.183: 56.3644% ( 719) 00:35:07.744 1568.183 - 1575.985: 57.2567% ( 710) 00:35:07.744 1575.985 - 1583.787: 58.1691% ( 726) 00:35:07.744 1583.787 - 1591.589: 59.0162% ( 674) 00:35:07.744 1591.589 - 1599.390: 59.9186% ( 718) 00:35:07.744 1599.390 - 1607.192: 60.7782% ( 684) 00:35:07.744 1607.192 - 1614.994: 61.6328% ( 680) 00:35:07.744 1614.994 - 1622.796: 62.3957% ( 607) 00:35:07.744 1622.796 - 1630.598: 63.2189% ( 655) 00:35:07.744 1630.598 - 1638.400: 63.9931% ( 616) 00:35:07.744 1638.400 - 1646.202: 64.7371% ( 592) 00:35:07.744 1646.202 - 1654.004: 65.4509% ( 568) 00:35:07.744 1654.004 - 1661.806: 66.2125% ( 606) 00:35:07.744 1661.806 - 1669.608: 66.9264% ( 568) 00:35:07.744 
1669.608 - 1677.410: 67.6151% ( 548) 00:35:07.744 1677.410 - 1685.211: 68.2825% ( 531) 00:35:07.744 1685.211 - 1693.013: 68.9146% ( 503) 00:35:07.744 1693.013 - 1700.815: 69.4928% ( 460) 00:35:07.744 1700.815 - 1708.617: 70.1010% ( 484) 00:35:07.744 1708.617 - 1716.419: 70.7181% ( 491) 00:35:07.744 1716.419 - 1724.221: 71.3214% ( 480) 00:35:07.744 1724.221 - 1732.023: 71.9020% ( 462) 00:35:07.744 1732.023 - 1739.825: 72.4852% ( 464) 00:35:07.744 1739.825 - 1747.627: 73.0645% ( 461) 00:35:07.744 1747.627 - 1755.429: 73.6251% ( 446) 00:35:07.744 1755.429 - 1763.230: 74.2258% ( 478) 00:35:07.744 1763.230 - 1771.032: 74.7838% ( 444) 00:35:07.744 1771.032 - 1778.834: 75.2979% ( 409) 00:35:07.744 1778.834 - 1786.636: 75.8446% ( 435) 00:35:07.744 1786.636 - 1794.438: 76.3410% ( 395) 00:35:07.744 1794.438 - 1802.240: 76.8663% ( 418) 00:35:07.744 1802.240 - 1810.042: 77.3791% ( 408) 00:35:07.744 1810.042 - 1817.844: 77.8579% ( 381) 00:35:07.744 1817.844 - 1825.646: 78.3493% ( 391) 00:35:07.744 1825.646 - 1833.448: 78.8495% ( 398) 00:35:07.744 1833.448 - 1841.250: 79.3108% ( 367) 00:35:07.744 1841.250 - 1849.051: 79.7871% ( 379) 00:35:07.744 1849.051 - 1856.853: 80.2395% ( 360) 00:35:07.744 1856.853 - 1864.655: 80.7234% ( 385) 00:35:07.744 1864.655 - 1872.457: 81.1557% ( 344) 00:35:07.744 1872.457 - 1880.259: 81.6195% ( 369) 00:35:07.744 1880.259 - 1888.061: 82.0619% ( 352) 00:35:07.744 1888.061 - 1895.863: 82.4980% ( 347) 00:35:07.744 1895.863 - 1903.665: 82.9039% ( 323) 00:35:07.744 1903.665 - 1911.467: 83.3526% ( 357) 00:35:07.744 1911.467 - 1919.269: 83.7422% ( 310) 00:35:07.744 1919.269 - 1927.070: 84.1507% ( 325) 00:35:07.744 1927.070 - 1934.872: 84.5579% ( 324) 00:35:07.744 1934.872 - 1942.674: 84.9977% ( 350) 00:35:07.744 1942.674 - 1950.476: 85.3873% ( 310) 00:35:07.744 1950.476 - 1958.278: 85.7782% ( 311) 00:35:07.744 1958.278 - 1966.080: 86.1452% ( 292) 00:35:07.744 1966.080 - 1973.882: 86.5247% ( 302) 00:35:07.744 1973.882 - 1981.684: 86.8842% ( 286) 00:35:07.744 1981.684 - 1989.486: 87.2147% ( 263) 00:35:07.744 1989.486 - 1997.288: 87.5654% ( 279) 00:35:07.744 1997.288 - 2012.891: 88.2578% ( 551) 00:35:07.744 2012.891 - 2028.495: 88.8624% ( 481) 00:35:07.744 2028.495 - 2044.099: 89.4895% ( 499) 00:35:07.744 2044.099 - 2059.703: 90.0576% ( 452) 00:35:07.744 2059.703 - 2075.307: 90.6294% ( 455) 00:35:07.744 2075.307 - 2090.910: 91.1673% ( 428) 00:35:07.744 2090.910 - 2106.514: 91.7102% ( 432) 00:35:07.744 2106.514 - 2122.118: 92.2117% ( 399) 00:35:07.744 2122.118 - 2137.722: 92.7069% ( 394) 00:35:07.744 2137.722 - 2153.326: 93.1555% ( 357) 00:35:07.744 2153.326 - 2168.930: 93.5891% ( 345) 00:35:07.744 2168.930 - 2184.533: 93.9561% ( 292) 00:35:07.744 2184.533 - 2200.137: 94.3143% ( 285) 00:35:07.744 2200.137 - 2215.741: 94.6285% ( 250) 00:35:07.744 2215.741 - 2231.345: 94.9163% ( 229) 00:35:07.744 2231.345 - 2246.949: 95.1765% ( 207) 00:35:07.744 2246.949 - 2262.552: 95.4203% ( 194) 00:35:07.744 2262.552 - 2278.156: 95.6163% ( 156) 00:35:07.744 2278.156 - 2293.760: 95.8187% ( 161) 00:35:07.744 2293.760 - 2309.364: 96.0135% ( 155) 00:35:07.744 2309.364 - 2324.968: 96.1806% ( 133) 00:35:07.744 2324.968 - 2340.571: 96.3377% ( 125) 00:35:07.744 2340.571 - 2356.175: 96.4760% ( 110) 00:35:07.744 2356.175 - 2371.779: 96.6079% ( 105) 00:35:07.744 2371.779 - 2387.383: 96.7474% ( 111) 00:35:07.744 2387.383 - 2402.987: 96.8530% ( 84) 00:35:07.744 2402.987 - 2418.590: 96.9711% ( 94) 00:35:07.744 2418.590 - 2434.194: 97.0729% ( 81) 00:35:07.744 2434.194 - 2449.798: 97.1659% ( 74) 00:35:07.744 2449.798 
- 2465.402: 97.2552% ( 71) 00:35:07.744 2465.402 - 2481.006: 97.3532% ( 78) 00:35:07.744 2481.006 - 2496.610: 97.4387% ( 68) 00:35:07.744 2496.610 - 2512.213: 97.5254% ( 69) 00:35:07.744 2512.213 - 2527.817: 97.6071% ( 65) 00:35:07.744 2527.817 - 2543.421: 97.6837% ( 61) 00:35:07.744 2543.421 - 2559.025: 97.7554% ( 57) 00:35:07.744 2559.025 - 2574.629: 97.8157% ( 48) 00:35:07.744 2574.629 - 2590.232: 97.8785% ( 50) 00:35:07.744 2590.232 - 2605.836: 97.9351% ( 45) 00:35:07.744 2605.836 - 2621.440: 97.9891% ( 43) 00:35:07.744 2621.440 - 2637.044: 98.0583% ( 55) 00:35:07.744 2637.044 - 2652.648: 98.1173% ( 47) 00:35:07.744 2652.648 - 2668.251: 98.1764% ( 47) 00:35:07.744 2668.251 - 2683.855: 98.2418% ( 52) 00:35:07.744 2683.855 - 2699.459: 98.2958% ( 43) 00:35:07.744 2699.459 - 2715.063: 98.3524% ( 45) 00:35:07.744 2715.063 - 2730.667: 98.4089% ( 45) 00:35:07.744 2730.667 - 2746.270: 98.4592% ( 40) 00:35:07.744 2746.270 - 2761.874: 98.5007% ( 33) 00:35:07.744 2761.874 - 2777.478: 98.5459% ( 36) 00:35:07.744 2777.478 - 2793.082: 98.5874% ( 33) 00:35:07.744 2793.082 - 2808.686: 98.6263% ( 31) 00:35:07.744 2808.686 - 2824.290: 98.6615% ( 28) 00:35:07.744 2824.290 - 2839.893: 98.6929% ( 25) 00:35:07.744 2839.893 - 2855.497: 98.7181% ( 20) 00:35:07.744 2855.497 - 2871.101: 98.7470% ( 23) 00:35:07.744 2871.101 - 2886.705: 98.7683% ( 17) 00:35:07.744 2886.705 - 2902.309: 98.7998% ( 25) 00:35:07.744 2902.309 - 2917.912: 98.8174% ( 14) 00:35:07.744 2917.912 - 2933.516: 98.8375% ( 16) 00:35:07.744 2933.516 - 2949.120: 98.8551% ( 14) 00:35:07.744 2949.120 - 2964.724: 98.8727% ( 14) 00:35:07.744 2964.724 - 2980.328: 98.8940% ( 17) 00:35:07.744 2980.328 - 2995.931: 98.9129% ( 15) 00:35:07.744 2995.931 - 3011.535: 98.9330% ( 16) 00:35:07.744 3011.535 - 3027.139: 98.9493% ( 13) 00:35:07.744 3027.139 - 3042.743: 98.9694% ( 16) 00:35:07.744 3042.743 - 3058.347: 98.9820% ( 10) 00:35:07.744 3058.347 - 3073.950: 98.9946% ( 10) 00:35:07.744 3073.950 - 3089.554: 99.0071% ( 10) 00:35:07.744 3089.554 - 3105.158: 99.0147% ( 6) 00:35:07.744 3105.158 - 3120.762: 99.0272% ( 10) 00:35:07.744 3120.762 - 3136.366: 99.0360% ( 7) 00:35:07.744 3136.366 - 3151.970: 99.0461% ( 8) 00:35:07.744 3151.970 - 3167.573: 99.0574% ( 9) 00:35:07.744 3167.573 - 3183.177: 99.0650% ( 6) 00:35:07.744 3183.177 - 3198.781: 99.0737% ( 7) 00:35:07.744 3198.781 - 3214.385: 99.0838% ( 8) 00:35:07.744 3214.385 - 3229.989: 99.0976% ( 11) 00:35:07.744 3229.989 - 3245.592: 99.1077% ( 8) 00:35:07.744 3245.592 - 3261.196: 99.1215% ( 11) 00:35:07.744 3261.196 - 3276.800: 99.1290% ( 6) 00:35:07.744 3276.800 - 3292.404: 99.1366% ( 6) 00:35:07.744 3292.404 - 3308.008: 99.1416% ( 4) 00:35:07.744 3308.008 - 3323.611: 99.1517% ( 8) 00:35:07.744 3323.611 - 3339.215: 99.1630% ( 9) 00:35:07.744 3339.215 - 3354.819: 99.1705% ( 6) 00:35:07.744 3354.819 - 3370.423: 99.1768% ( 5) 00:35:07.744 3370.423 - 3386.027: 99.1869% ( 8) 00:35:07.744 3386.027 - 3401.630: 99.1931% ( 5) 00:35:07.744 3401.630 - 3417.234: 99.2032% ( 8) 00:35:07.744 3417.234 - 3432.838: 99.2107% ( 6) 00:35:07.744 3432.838 - 3448.442: 99.2208% ( 8) 00:35:07.744 3448.442 - 3464.046: 99.2283% ( 6) 00:35:07.744 3464.046 - 3479.650: 99.2359% ( 6) 00:35:07.744 3479.650 - 3495.253: 99.2472% ( 9) 00:35:07.744 3495.253 - 3510.857: 99.2585% ( 9) 00:35:07.744 3510.857 - 3526.461: 99.2698% ( 9) 00:35:07.744 3526.461 - 3542.065: 99.2773% ( 6) 00:35:07.744 3542.065 - 3557.669: 99.2836% ( 5) 00:35:07.744 3557.669 - 3573.272: 99.2912% ( 6) 00:35:07.744 3573.272 - 3588.876: 99.3012% ( 8) 00:35:07.744 3588.876 - 
3604.480: 99.3075% ( 5) 00:35:07.744 3604.480 - 3620.084: 99.3138% ( 5) 00:35:07.744 3620.084 - 3635.688: 99.3201% ( 5) 00:35:07.744 3635.688 - 3651.291: 99.3276% ( 6) 00:35:07.744 3651.291 - 3666.895: 99.3326% ( 4) 00:35:07.744 3666.895 - 3682.499: 99.3364% ( 3) 00:35:07.744 3682.499 - 3698.103: 99.3440% ( 6) 00:35:07.744 3698.103 - 3713.707: 99.3528% ( 7) 00:35:07.744 3713.707 - 3729.310: 99.3590% ( 5) 00:35:07.744 3729.310 - 3744.914: 99.3666% ( 6) 00:35:07.744 3744.914 - 3760.518: 99.3754% ( 7) 00:35:07.744 3760.518 - 3776.122: 99.3879% ( 10) 00:35:07.744 3776.122 - 3791.726: 99.3967% ( 7) 00:35:07.744 3791.726 - 3807.330: 99.4030% ( 5) 00:35:07.744 3807.330 - 3822.933: 99.4169% ( 11) 00:35:07.744 3822.933 - 3838.537: 99.4244% ( 6) 00:35:07.745 3838.537 - 3854.141: 99.4344% ( 8) 00:35:07.745 3854.141 - 3869.745: 99.4483% ( 11) 00:35:07.745 3869.745 - 3885.349: 99.4558% ( 6) 00:35:07.745 3885.349 - 3900.952: 99.4671% ( 9) 00:35:07.745 3900.952 - 3916.556: 99.4772% ( 8) 00:35:07.745 3916.556 - 3932.160: 99.4910% ( 11) 00:35:07.745 3932.160 - 3947.764: 99.4948% ( 3) 00:35:07.745 3947.764 - 3963.368: 99.5011% ( 5) 00:35:07.745 3963.368 - 3978.971: 99.5111% ( 8) 00:35:07.745 3978.971 - 3994.575: 99.5187% ( 6) 00:35:07.745 3994.575 - 4025.783: 99.5425% ( 19) 00:35:07.745 4025.783 - 4056.990: 99.5614% ( 15) 00:35:07.745 4056.990 - 4088.198: 99.5765% ( 12) 00:35:07.745 4088.198 - 4119.406: 99.5928% ( 13) 00:35:07.745 4119.406 - 4150.613: 99.6066% ( 11) 00:35:07.745 4150.613 - 4181.821: 99.6230% ( 13) 00:35:07.745 4181.821 - 4213.029: 99.6443% ( 17) 00:35:07.745 4213.029 - 4244.236: 99.6670% ( 18) 00:35:07.745 4244.236 - 4275.444: 99.6833% ( 13) 00:35:07.745 4275.444 - 4306.651: 99.6996% ( 13) 00:35:07.745 4306.651 - 4337.859: 99.7135% ( 11) 00:35:07.745 4337.859 - 4369.067: 99.7273% ( 11) 00:35:07.745 4369.067 - 4400.274: 99.7348% ( 6) 00:35:07.745 4400.274 - 4431.482: 99.7486% ( 11) 00:35:07.745 4431.482 - 4462.690: 99.7587% ( 8) 00:35:07.745 4462.690 - 4493.897: 99.7688% ( 8) 00:35:07.745 4493.897 - 4525.105: 99.7826% ( 11) 00:35:07.745 4525.105 - 4556.312: 99.7939% ( 9) 00:35:07.745 4556.312 - 4587.520: 99.8052% ( 9) 00:35:07.745 4587.520 - 4618.728: 99.8153% ( 8) 00:35:07.745 4618.728 - 4649.935: 99.8228% ( 6) 00:35:07.745 4649.935 - 4681.143: 99.8291% ( 5) 00:35:07.745 4681.143 - 4712.350: 99.8379% ( 7) 00:35:07.745 4712.350 - 4743.558: 99.8454% ( 6) 00:35:07.745 4743.558 - 4774.766: 99.8492% ( 3) 00:35:07.745 4774.766 - 4805.973: 99.8555% ( 5) 00:35:07.745 4805.973 - 4837.181: 99.8630% ( 6) 00:35:07.745 4837.181 - 4868.389: 99.8706% ( 6) 00:35:07.745 4868.389 - 4899.596: 99.8756% ( 4) 00:35:07.745 4899.596 - 4930.804: 99.8793% ( 3) 00:35:07.745 4930.804 - 4962.011: 99.8831% ( 3) 00:35:07.745 4962.011 - 4993.219: 99.8894% ( 5) 00:35:07.745 4993.219 - 5024.427: 99.8919% ( 2) 00:35:07.745 5024.427 - 5055.634: 99.8957% ( 3) 00:35:07.745 5055.634 - 5086.842: 99.8995% ( 3) 00:35:07.745 5086.842 - 5118.050: 99.9032% ( 3) 00:35:07.745 5118.050 - 5149.257: 99.9070% ( 3) 00:35:07.745 5149.257 - 5180.465: 99.9120% ( 4) 00:35:07.745 5180.465 - 5211.672: 99.9171% ( 4) 00:35:07.745 5211.672 - 5242.880: 99.9208% ( 3) 00:35:07.745 5242.880 - 5274.088: 99.9246% ( 3) 00:35:07.745 5274.088 - 5305.295: 99.9296% ( 4) 00:35:07.745 5305.295 - 5336.503: 99.9334% ( 3) 00:35:07.745 5336.503 - 5367.710: 99.9372% ( 3) 00:35:07.745 5367.710 - 5398.918: 99.9409% ( 3) 00:35:07.745 5398.918 - 5430.126: 99.9434% ( 2) 00:35:07.745 5430.126 - 5461.333: 99.9460% ( 2) 00:35:07.745 5461.333 - 5492.541: 99.9485% ( 2) 
00:35:07.745 5492.541 - 5523.749: 99.9522% ( 3) 00:35:07.745 5523.749 - 5554.956: 99.9548% ( 2) 00:35:07.745 5554.956 - 5586.164: 99.9648% ( 8) 00:35:07.745 5586.164 - 5617.371: 99.9661% ( 1) 00:35:07.745 5617.371 - 5648.579: 99.9749% ( 7) 00:35:07.745 5648.579 - 5679.787: 99.9774% ( 2) 00:35:07.745 5679.787 - 5710.994: 99.9786% ( 1) 00:35:07.745 5710.994 - 5742.202: 99.9811% ( 2) 00:35:07.745 5742.202 - 5773.410: 99.9824% ( 1) 00:35:07.745 5773.410 - 5804.617: 99.9837% ( 1) 00:35:07.745 5804.617 - 5835.825: 99.9862% ( 2) 00:35:07.745 5835.825 - 5867.032: 99.9874% ( 1) 00:35:07.745 5867.032 - 5898.240: 99.9899% ( 2) 00:35:07.745 5898.240 - 5929.448: 99.9925% ( 2) 00:35:07.745 5929.448 - 5960.655: 99.9950% ( 2) 00:35:07.745 5960.655 - 5991.863: 99.9962% ( 1) 00:35:07.745 5991.863 - 6023.070: 99.9975% ( 1) 00:35:07.745 6054.278 - 6085.486: 99.9987% ( 1) 00:35:07.745 7302.583 - 7333.790: 100.0000% ( 1) 00:35:07.745 00:35:07.745 ************************************ 00:35:07.745 END TEST nvme_perf 00:35:07.745 ************************************ 00:35:07.745 15:29:02 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:35:07.745 00:35:07.745 real 0m2.648s 00:35:07.745 user 0m2.209s 00:35:07.745 sys 0m0.343s 00:35:07.745 15:29:02 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:07.745 15:29:02 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:35:07.745 15:29:03 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:07.745 15:29:03 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:35:07.745 15:29:03 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:35:07.745 15:29:03 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:07.745 15:29:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:07.745 ************************************ 00:35:07.745 START TEST nvme_hello_world 00:35:07.745 ************************************ 00:35:07.745 15:29:03 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:35:08.004 Initializing NVMe Controllers 00:35:08.004 Attached to 0000:00:10.0 00:35:08.004 Namespace ID: 1 size: 5GB 00:35:08.004 Initialization complete. 00:35:08.004 INFO: using host memory buffer for IO 00:35:08.004 Hello world! 
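The hello_world output just above comes from the prebuilt SPDK example binary named in the nvme/nvme.sh@87 run_test line; if that run needs to be reproduced outside the autotest harness, a minimal sequence would look like the sketch below. The repository path and the -i 0 shared-memory id are taken directly from the log; the HUGEMEM value and the setup.sh / setup.sh reset steps are the standard SPDK device-binding helpers and are assumptions about the environment, not something this job shows.

# minimal sketch, assuming a root shell and a single NVMe controller on the host
sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh        # reserve hugepages and bind the NVMe device to a userspace driver (assumed helper invocation)
sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0      # same binary and shared-memory id as the nvme.sh@87 run_test line above
sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset               # hand the device back to the kernel nvme driver when done (assumed helper invocation)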
00:35:08.004 00:35:08.004 real 0m0.318s 00:35:08.004 user 0m0.121s 00:35:08.004 sys 0m0.151s 00:35:08.004 15:29:03 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:08.004 ************************************ 00:35:08.004 END TEST nvme_hello_world 00:35:08.004 ************************************ 00:35:08.004 15:29:03 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:35:08.004 15:29:03 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:08.004 15:29:03 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:35:08.004 15:29:03 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:08.004 15:29:03 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:08.004 15:29:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:08.004 ************************************ 00:35:08.004 START TEST nvme_sgl 00:35:08.004 ************************************ 00:35:08.004 15:29:03 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:35:08.261 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:35:08.261 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:35:08.261 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:35:08.261 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:35:08.261 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:35:08.261 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:35:08.519 NVMe Readv/Writev Request test 00:35:08.519 Attached to 0000:00:10.0 00:35:08.519 0000:00:10.0: build_io_request_2 test passed 00:35:08.519 0000:00:10.0: build_io_request_4 test passed 00:35:08.519 0000:00:10.0: build_io_request_5 test passed 00:35:08.519 0000:00:10.0: build_io_request_6 test passed 00:35:08.519 0000:00:10.0: build_io_request_7 test passed 00:35:08.519 0000:00:10.0: build_io_request_10 test passed 00:35:08.519 Cleaning up... 00:35:08.519 ************************************ 00:35:08.519 END TEST nvme_sgl 00:35:08.519 ************************************ 00:35:08.519 00:35:08.519 real 0m0.355s 00:35:08.519 user 0m0.136s 00:35:08.519 sys 0m0.171s 00:35:08.519 15:29:03 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:08.519 15:29:03 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:35:08.519 15:29:03 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:08.519 15:29:03 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:35:08.519 15:29:03 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:08.519 15:29:03 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:08.519 15:29:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:08.519 ************************************ 00:35:08.519 START TEST nvme_e2edp 00:35:08.519 ************************************ 00:35:08.519 15:29:03 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:35:08.776 NVMe Write/Read with End-to-End data protection test 00:35:08.776 Attached to 0000:00:10.0 00:35:08.776 Cleaning up... 
00:35:08.776 00:35:08.776 real 0m0.303s 00:35:08.776 user 0m0.086s 00:35:08.776 sys 0m0.175s 00:35:08.776 15:29:04 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:08.776 ************************************ 00:35:08.776 END TEST nvme_e2edp 00:35:08.776 ************************************ 00:35:08.776 15:29:04 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:35:08.776 15:29:04 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:08.776 15:29:04 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:35:08.776 15:29:04 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:08.776 15:29:04 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:08.776 15:29:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:08.776 ************************************ 00:35:08.776 START TEST nvme_reserve 00:35:08.776 ************************************ 00:35:08.776 15:29:04 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:35:09.341 ===================================================== 00:35:09.341 NVMe Controller at PCI bus 0, device 16, function 0 00:35:09.341 ===================================================== 00:35:09.341 Reservations: Not Supported 00:35:09.341 Reservation test passed 00:35:09.341 00:35:09.341 real 0m0.306s 00:35:09.341 user 0m0.094s 00:35:09.341 sys 0m0.169s 00:35:09.341 15:29:04 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:09.341 15:29:04 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:35:09.341 ************************************ 00:35:09.341 END TEST nvme_reserve 00:35:09.341 ************************************ 00:35:09.341 15:29:04 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:09.341 15:29:04 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:35:09.341 15:29:04 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:09.341 15:29:04 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:09.341 15:29:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:09.341 ************************************ 00:35:09.341 START TEST nvme_err_injection 00:35:09.341 ************************************ 00:35:09.341 15:29:04 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:35:09.598 NVMe Error Injection test 00:35:09.598 Attached to 0000:00:10.0 00:35:09.598 0000:00:10.0: get features failed as expected 00:35:09.598 0000:00:10.0: get features successfully as expected 00:35:09.598 0000:00:10.0: read failed as expected 00:35:09.598 0000:00:10.0: read successfully as expected 00:35:09.598 Cleaning up... 
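Each of the short functional tests in this stretch (sgl, e2edp, reserve, err_injection) follows the same pattern: the harness prints a START TEST banner, runs one small binary from test/nvme/ against controller 0000:00:10.0, echoes the binary's output and the bash time figures, then prints an END TEST banner. When skimming a console log this long, those banners and timings are enough for a quick pass summary; the sketch below assumes the console output has been saved to a local file, and the filename is illustrative rather than taken from the log.

# minimal sketch for triaging a saved copy of this console log; "console.log" is an assumed filename
grep -nE 'START TEST|END TEST' console.log        # every test section, in execution order
grep -nE ' real[[:space:]]+[0-9]+m' console.log   # the per-test wall-clock times printed by the harness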
00:35:09.598 00:35:09.598 real 0m0.312s 00:35:09.598 user 0m0.092s 00:35:09.598 sys 0m0.178s 00:35:09.598 15:29:04 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:09.598 15:29:04 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:35:09.598 ************************************ 00:35:09.598 END TEST nvme_err_injection 00:35:09.598 ************************************ 00:35:09.598 15:29:04 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:09.598 15:29:04 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:35:09.598 15:29:04 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:35:09.598 15:29:04 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:09.598 15:29:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:09.598 ************************************ 00:35:09.598 START TEST nvme_overhead 00:35:09.598 ************************************ 00:35:09.598 15:29:04 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:35:10.976 Initializing NVMe Controllers 00:35:10.976 Attached to 0000:00:10.0 00:35:10.976 Initialization complete. Launching workers. 00:35:10.976 submit (in ns) avg, min, max = 14850.6, 12234.3, 66093.3 00:35:10.976 complete (in ns) avg, min, max = 9834.2, 8029.5, 31722.9 00:35:10.976 00:35:10.976 Submit histogram 00:35:10.976 ================ 00:35:10.976 Range in us Cumulative Count 00:35:10.976 12.190 - 12.251: 0.0133% ( 1) 00:35:10.976 12.251 - 12.312: 0.0531% ( 3) 00:35:10.976 12.312 - 12.373: 0.1327% ( 6) 00:35:10.976 12.373 - 12.434: 0.5572% ( 32) 00:35:10.976 12.434 - 12.495: 1.9369% ( 104) 00:35:10.976 12.495 - 12.556: 5.0411% ( 234) 00:35:10.976 12.556 - 12.617: 9.4985% ( 336) 00:35:10.976 12.617 - 12.678: 14.3539% ( 366) 00:35:10.976 12.678 - 12.739: 19.6073% ( 396) 00:35:10.976 12.739 - 12.800: 24.4760% ( 367) 00:35:10.976 12.800 - 12.861: 29.4906% ( 378) 00:35:10.976 12.861 - 12.922: 35.8716% ( 481) 00:35:10.976 12.922 - 12.983: 42.9955% ( 537) 00:35:10.976 12.983 - 13.044: 48.9387% ( 448) 00:35:10.976 13.044 - 13.105: 53.6747% ( 357) 00:35:10.976 13.105 - 13.166: 56.8586% ( 240) 00:35:10.976 13.166 - 13.227: 59.1934% ( 176) 00:35:10.976 13.227 - 13.288: 60.9976% ( 136) 00:35:10.976 13.288 - 13.349: 62.1385% ( 86) 00:35:10.976 13.349 - 13.410: 63.0671% ( 70) 00:35:10.976 13.410 - 13.470: 63.7570% ( 52) 00:35:10.976 13.470 - 13.531: 64.5131% ( 57) 00:35:10.976 13.531 - 13.592: 65.5877% ( 81) 00:35:10.976 13.592 - 13.653: 66.4102% ( 62) 00:35:10.976 13.653 - 13.714: 67.2592% ( 64) 00:35:10.976 13.714 - 13.775: 67.9623% ( 53) 00:35:10.976 13.775 - 13.836: 68.5460% ( 44) 00:35:10.976 13.836 - 13.897: 69.3818% ( 63) 00:35:10.976 13.897 - 13.958: 70.3635% ( 74) 00:35:10.976 13.958 - 14.019: 70.9737% ( 46) 00:35:10.976 14.019 - 14.080: 71.6768% ( 53) 00:35:10.976 14.080 - 14.141: 72.0748% ( 30) 00:35:10.976 14.141 - 14.202: 72.3799% ( 23) 00:35:10.976 14.202 - 14.263: 72.4595% ( 6) 00:35:10.976 14.263 - 14.324: 72.7249% ( 20) 00:35:10.976 14.324 - 14.385: 72.8841% ( 12) 00:35:10.976 14.385 - 14.446: 72.9637% ( 6) 00:35:10.976 14.446 - 14.507: 73.0034% ( 3) 00:35:10.976 14.507 - 14.568: 73.0565% ( 4) 00:35:10.976 14.568 - 14.629: 73.0963% ( 3) 00:35:10.976 14.629 - 14.690: 73.1892% ( 7) 00:35:10.976 14.690 - 14.750: 73.2422% ( 4) 00:35:10.976 14.750 - 14.811: 73.2953% ( 4) 00:35:10.976 14.811 - 14.872: 73.3351% ( 3) 
00:35:10.976 14.872 - 14.933: 73.3749% ( 3) 00:35:10.976 14.933 - 14.994: 73.4147% ( 3) 00:35:10.976 14.994 - 15.055: 73.4412% ( 2) 00:35:10.976 15.421 - 15.482: 73.4545% ( 1) 00:35:10.976 15.482 - 15.543: 73.4678% ( 1) 00:35:10.976 15.726 - 15.848: 73.4810% ( 1) 00:35:10.976 15.848 - 15.970: 73.5208% ( 3) 00:35:10.976 16.091 - 16.213: 73.5341% ( 1) 00:35:10.976 16.335 - 16.457: 73.5474% ( 1) 00:35:10.976 16.823 - 16.945: 73.6004% ( 4) 00:35:10.976 16.945 - 17.067: 73.7862% ( 14) 00:35:10.976 17.067 - 17.189: 73.9453% ( 12) 00:35:10.976 17.189 - 17.310: 74.0515% ( 8) 00:35:10.976 17.310 - 17.432: 74.2770% ( 17) 00:35:10.976 17.432 - 17.554: 74.4362% ( 12) 00:35:10.976 17.554 - 17.676: 74.6617% ( 17) 00:35:10.976 17.676 - 17.798: 74.8209% ( 12) 00:35:10.976 17.798 - 17.920: 74.9403% ( 9) 00:35:10.976 17.920 - 18.042: 75.1924% ( 19) 00:35:10.976 18.042 - 18.164: 75.3118% ( 9) 00:35:10.976 18.164 - 18.286: 75.3914% ( 6) 00:35:10.976 18.286 - 18.408: 75.5373% ( 11) 00:35:10.976 18.408 - 18.530: 75.7495% ( 16) 00:35:10.976 18.530 - 18.651: 76.3067% ( 42) 00:35:10.976 18.651 - 18.773: 77.3813% ( 81) 00:35:10.976 18.773 - 18.895: 78.3895% ( 76) 00:35:10.976 18.895 - 19.017: 79.2518% ( 65) 00:35:10.976 19.017 - 19.139: 79.9549% ( 53) 00:35:10.976 19.139 - 19.261: 80.9631% ( 76) 00:35:10.976 19.261 - 19.383: 83.4969% ( 191) 00:35:10.976 19.383 - 19.505: 88.4717% ( 375) 00:35:10.976 19.505 - 19.627: 92.0536% ( 270) 00:35:10.976 19.627 - 19.749: 94.3221% ( 171) 00:35:10.976 19.749 - 19.870: 95.7416% ( 107) 00:35:10.976 19.870 - 19.992: 96.4049% ( 50) 00:35:10.976 19.992 - 20.114: 96.8559% ( 34) 00:35:10.976 20.114 - 20.236: 97.1080% ( 19) 00:35:10.976 20.236 - 20.358: 97.2406% ( 10) 00:35:10.976 20.358 - 20.480: 97.3202% ( 6) 00:35:10.976 20.480 - 20.602: 97.4794% ( 12) 00:35:10.976 20.602 - 20.724: 97.5590% ( 6) 00:35:10.976 20.724 - 20.846: 97.6121% ( 4) 00:35:10.976 20.846 - 20.968: 97.6386% ( 2) 00:35:10.976 20.968 - 21.090: 97.6519% ( 1) 00:35:10.976 21.090 - 21.211: 97.6784% ( 2) 00:35:10.976 21.211 - 21.333: 97.7182% ( 3) 00:35:10.976 21.333 - 21.455: 97.7315% ( 1) 00:35:10.976 21.455 - 21.577: 97.7448% ( 1) 00:35:10.976 21.577 - 21.699: 97.7580% ( 1) 00:35:10.976 21.821 - 21.943: 97.7846% ( 2) 00:35:10.976 22.065 - 22.187: 97.7978% ( 1) 00:35:10.976 22.187 - 22.309: 97.8244% ( 2) 00:35:10.976 22.309 - 22.430: 97.8376% ( 1) 00:35:10.976 22.430 - 22.552: 97.8509% ( 1) 00:35:10.976 22.552 - 22.674: 97.8907% ( 3) 00:35:10.976 22.796 - 22.918: 97.9040% ( 1) 00:35:10.976 23.040 - 23.162: 97.9305% ( 2) 00:35:10.976 23.528 - 23.650: 97.9570% ( 2) 00:35:10.976 23.650 - 23.771: 97.9703% ( 1) 00:35:10.976 23.771 - 23.893: 98.0101% ( 3) 00:35:10.976 24.015 - 24.137: 98.0233% ( 1) 00:35:10.976 24.137 - 24.259: 98.0499% ( 2) 00:35:10.976 24.259 - 24.381: 98.0764% ( 2) 00:35:10.976 24.381 - 24.503: 98.1295% ( 4) 00:35:10.976 24.503 - 24.625: 98.1560% ( 2) 00:35:10.976 24.625 - 24.747: 98.1825% ( 2) 00:35:10.976 24.747 - 24.869: 98.2091% ( 2) 00:35:10.976 24.869 - 24.990: 98.2621% ( 4) 00:35:10.976 24.990 - 25.112: 98.3152% ( 4) 00:35:10.976 25.112 - 25.234: 98.4081% ( 7) 00:35:10.976 25.234 - 25.356: 98.4877% ( 6) 00:35:10.976 25.356 - 25.478: 98.5275% ( 3) 00:35:10.976 25.478 - 25.600: 98.5673% ( 3) 00:35:10.976 25.600 - 25.722: 98.7265% ( 12) 00:35:10.976 25.722 - 25.844: 98.7928% ( 5) 00:35:10.976 25.844 - 25.966: 98.8724% ( 6) 00:35:10.976 25.966 - 26.088: 98.9652% ( 7) 00:35:10.976 26.088 - 26.210: 98.9918% ( 2) 00:35:10.976 26.210 - 26.331: 99.0183% ( 2) 00:35:10.976 26.453 - 26.575: 99.0316% ( 1) 
00:35:10.976 26.697 - 26.819: 99.0448% ( 1) 00:35:10.976 27.185 - 27.307: 99.0979% ( 4) 00:35:10.976 27.429 - 27.550: 99.1244% ( 2) 00:35:10.976 27.672 - 27.794: 99.1775% ( 4) 00:35:10.976 27.794 - 27.916: 99.2040% ( 2) 00:35:10.976 27.916 - 28.038: 99.2306% ( 2) 00:35:10.976 28.038 - 28.160: 99.2836% ( 4) 00:35:10.976 28.160 - 28.282: 99.3367% ( 4) 00:35:10.976 28.282 - 28.404: 99.3500% ( 1) 00:35:10.976 28.526 - 28.648: 99.3765% ( 2) 00:35:10.976 28.648 - 28.770: 99.4030% ( 2) 00:35:10.976 28.770 - 28.891: 99.4163% ( 1) 00:35:10.976 28.891 - 29.013: 99.4826% ( 5) 00:35:10.976 29.013 - 29.135: 99.5092% ( 2) 00:35:10.976 29.135 - 29.257: 99.5224% ( 1) 00:35:10.976 29.257 - 29.379: 99.5357% ( 1) 00:35:10.976 29.379 - 29.501: 99.5622% ( 2) 00:35:10.976 29.501 - 29.623: 99.5755% ( 1) 00:35:10.976 29.623 - 29.745: 99.5888% ( 1) 00:35:10.977 29.745 - 29.867: 99.6020% ( 1) 00:35:10.977 29.867 - 29.989: 99.6285% ( 2) 00:35:10.977 29.989 - 30.110: 99.6551% ( 2) 00:35:10.977 30.110 - 30.232: 99.6683% ( 1) 00:35:10.977 30.354 - 30.476: 99.7214% ( 4) 00:35:10.977 30.476 - 30.598: 99.7347% ( 1) 00:35:10.977 30.720 - 30.842: 99.7612% ( 2) 00:35:10.977 30.842 - 30.964: 99.7745% ( 1) 00:35:10.977 31.086 - 31.208: 99.7877% ( 1) 00:35:10.977 31.208 - 31.451: 99.8143% ( 2) 00:35:10.977 31.695 - 31.939: 99.8275% ( 1) 00:35:10.977 31.939 - 32.183: 99.8408% ( 1) 00:35:10.977 32.427 - 32.670: 99.8541% ( 1) 00:35:10.977 32.914 - 33.158: 99.8673% ( 1) 00:35:10.977 33.158 - 33.402: 99.8806% ( 1) 00:35:10.977 34.377 - 34.621: 99.9071% ( 2) 00:35:10.977 34.621 - 34.865: 99.9337% ( 2) 00:35:10.977 35.109 - 35.352: 99.9602% ( 2) 00:35:10.977 39.497 - 39.741: 99.9735% ( 1) 00:35:10.977 41.691 - 41.935: 99.9867% ( 1) 00:35:10.977 65.829 - 66.316: 100.0000% ( 1) 00:35:10.977 00:35:10.977 Complete histogram 00:35:10.977 ================== 00:35:10.977 Range in us Cumulative Count 00:35:10.977 7.985 - 8.046: 0.0398% ( 3) 00:35:10.977 8.046 - 8.107: 1.5521% ( 114) 00:35:10.977 8.107 - 8.168: 9.5516% ( 603) 00:35:10.977 8.168 - 8.229: 18.2144% ( 653) 00:35:10.977 8.229 - 8.290: 24.3964% ( 466) 00:35:10.977 8.290 - 8.350: 28.2701% ( 292) 00:35:10.977 8.350 - 8.411: 30.4988% ( 168) 00:35:10.977 8.411 - 8.472: 31.3876% ( 67) 00:35:10.977 8.472 - 8.533: 31.6530% ( 20) 00:35:10.977 8.533 - 8.594: 31.8652% ( 16) 00:35:10.977 8.594 - 8.655: 33.7357% ( 141) 00:35:10.977 8.655 - 8.716: 39.0024% ( 397) 00:35:10.977 8.716 - 8.777: 46.3386% ( 553) 00:35:10.977 8.777 - 8.838: 53.4227% ( 534) 00:35:10.977 8.838 - 8.899: 59.4587% ( 455) 00:35:10.977 8.899 - 8.960: 63.8233% ( 329) 00:35:10.977 8.960 - 9.021: 66.1051% ( 172) 00:35:10.977 9.021 - 9.082: 67.5643% ( 110) 00:35:10.977 9.082 - 9.143: 68.3603% ( 60) 00:35:10.977 9.143 - 9.204: 68.8379% ( 36) 00:35:10.977 9.204 - 9.265: 69.1828% ( 26) 00:35:10.977 9.265 - 9.326: 69.4216% ( 18) 00:35:10.977 9.326 - 9.387: 69.9655% ( 41) 00:35:10.977 9.387 - 9.448: 70.8941% ( 70) 00:35:10.977 9.448 - 9.509: 71.7564% ( 65) 00:35:10.977 9.509 - 9.570: 72.4993% ( 56) 00:35:10.977 9.570 - 9.630: 73.5341% ( 78) 00:35:10.977 9.630 - 9.691: 74.1841% ( 49) 00:35:10.977 9.691 - 9.752: 74.6750% ( 37) 00:35:10.977 9.752 - 9.813: 74.8740% ( 15) 00:35:10.977 9.813 - 9.874: 75.0464% ( 13) 00:35:10.977 9.874 - 9.935: 75.1658% ( 9) 00:35:10.977 9.935 - 9.996: 75.2189% ( 4) 00:35:10.977 9.996 - 10.057: 75.2985% ( 6) 00:35:10.977 10.057 - 10.118: 75.3914% ( 7) 00:35:10.977 10.118 - 10.179: 75.4444% ( 4) 00:35:10.977 10.179 - 10.240: 75.5107% ( 5) 00:35:10.977 10.240 - 10.301: 75.6036% ( 7) 00:35:10.977 10.301 - 
10.362: 75.6832% ( 6) 00:35:10.977 10.362 - 10.423: 75.6965% ( 1) 00:35:10.977 10.423 - 10.484: 75.8291% ( 10) 00:35:10.977 10.484 - 10.545: 75.8557% ( 2) 00:35:10.977 10.545 - 10.606: 75.8955% ( 3) 00:35:10.977 10.606 - 10.667: 75.9087% ( 1) 00:35:10.977 10.728 - 10.789: 75.9220% ( 1) 00:35:10.977 10.789 - 10.850: 75.9353% ( 1) 00:35:10.977 11.337 - 11.398: 75.9485% ( 1) 00:35:10.977 11.520 - 11.581: 75.9618% ( 1) 00:35:10.977 11.825 - 11.886: 75.9751% ( 1) 00:35:10.977 11.886 - 11.947: 75.9883% ( 1) 00:35:10.977 11.947 - 12.008: 76.0016% ( 1) 00:35:10.977 12.008 - 12.069: 76.0149% ( 1) 00:35:10.977 12.069 - 12.130: 76.0281% ( 1) 00:35:10.977 12.495 - 12.556: 76.0414% ( 1) 00:35:10.977 12.617 - 12.678: 76.0547% ( 1) 00:35:10.977 12.678 - 12.739: 76.2138% ( 12) 00:35:10.977 12.739 - 12.800: 77.3813% ( 88) 00:35:10.977 12.800 - 12.861: 79.6100% ( 168) 00:35:10.977 12.861 - 12.922: 82.4489% ( 214) 00:35:10.977 12.922 - 12.983: 85.7655% ( 250) 00:35:10.977 12.983 - 13.044: 88.1268% ( 178) 00:35:10.977 13.044 - 13.105: 89.4800% ( 102) 00:35:10.977 13.105 - 13.166: 90.4617% ( 74) 00:35:10.977 13.166 - 13.227: 91.5362% ( 81) 00:35:10.977 13.227 - 13.288: 92.5179% ( 74) 00:35:10.977 13.288 - 13.349: 93.7119% ( 90) 00:35:10.977 13.349 - 13.410: 94.7599% ( 79) 00:35:10.977 13.410 - 13.470: 95.4232% ( 50) 00:35:10.977 13.470 - 13.531: 95.9538% ( 40) 00:35:10.977 13.531 - 13.592: 96.5773% ( 47) 00:35:10.977 13.592 - 13.653: 96.9886% ( 31) 00:35:10.977 13.653 - 13.714: 97.1743% ( 14) 00:35:10.977 13.714 - 13.775: 97.2672% ( 7) 00:35:10.977 13.775 - 13.836: 97.4264% ( 12) 00:35:10.977 13.836 - 13.897: 97.5060% ( 6) 00:35:10.977 13.897 - 13.958: 97.5988% ( 7) 00:35:10.977 13.958 - 14.019: 97.7050% ( 8) 00:35:10.977 14.019 - 14.080: 97.7978% ( 7) 00:35:10.977 14.080 - 14.141: 97.8642% ( 5) 00:35:10.977 14.141 - 14.202: 97.9305% ( 5) 00:35:10.977 14.202 - 14.263: 97.9570% ( 2) 00:35:10.977 14.263 - 14.324: 97.9836% ( 2) 00:35:10.977 14.324 - 14.385: 98.0366% ( 4) 00:35:10.977 14.385 - 14.446: 98.1162% ( 6) 00:35:10.977 14.446 - 14.507: 98.1958% ( 6) 00:35:10.977 14.507 - 14.568: 98.2621% ( 5) 00:35:10.977 14.568 - 14.629: 98.3152% ( 4) 00:35:10.977 14.629 - 14.690: 98.3683% ( 4) 00:35:10.977 14.690 - 14.750: 98.4213% ( 4) 00:35:10.977 14.750 - 14.811: 98.4346% ( 1) 00:35:10.977 14.811 - 14.872: 98.4744% ( 3) 00:35:10.977 14.872 - 14.933: 98.5009% ( 2) 00:35:10.977 14.933 - 14.994: 98.5142% ( 1) 00:35:10.977 14.994 - 15.055: 98.5275% ( 1) 00:35:10.977 15.055 - 15.116: 98.5540% ( 2) 00:35:10.977 15.116 - 15.177: 98.6071% ( 4) 00:35:10.977 15.177 - 15.238: 98.6469% ( 3) 00:35:10.977 15.238 - 15.299: 98.6734% ( 2) 00:35:10.977 15.360 - 15.421: 98.7132% ( 3) 00:35:10.977 15.421 - 15.482: 98.7397% ( 2) 00:35:10.977 15.482 - 15.543: 98.7530% ( 1) 00:35:10.977 15.543 - 15.604: 98.7663% ( 1) 00:35:10.977 15.604 - 15.726: 98.7795% ( 1) 00:35:10.977 15.848 - 15.970: 98.8193% ( 3) 00:35:10.977 15.970 - 16.091: 98.8458% ( 2) 00:35:10.977 16.091 - 16.213: 98.8591% ( 1) 00:35:10.977 16.213 - 16.335: 98.8856% ( 2) 00:35:10.977 16.823 - 16.945: 98.8989% ( 1) 00:35:10.977 16.945 - 17.067: 98.9254% ( 2) 00:35:10.977 17.067 - 17.189: 98.9387% ( 1) 00:35:10.977 17.189 - 17.310: 98.9520% ( 1) 00:35:10.977 17.310 - 17.432: 98.9652% ( 1) 00:35:10.977 17.676 - 17.798: 98.9785% ( 1) 00:35:10.977 18.164 - 18.286: 98.9918% ( 1) 00:35:10.977 18.895 - 19.017: 99.0050% ( 1) 00:35:10.977 19.139 - 19.261: 99.0183% ( 1) 00:35:10.977 19.261 - 19.383: 99.0316% ( 1) 00:35:10.977 19.627 - 19.749: 99.0448% ( 1) 00:35:10.977 19.749 - 19.870: 
99.0581% ( 1) 00:35:10.977 19.870 - 19.992: 99.0714% ( 1) 00:35:10.977 19.992 - 20.114: 99.0846% ( 1) 00:35:10.977 20.236 - 20.358: 99.0979% ( 1) 00:35:10.977 20.358 - 20.480: 99.1244% ( 2) 00:35:10.977 20.480 - 20.602: 99.1510% ( 2) 00:35:10.977 20.602 - 20.724: 99.1642% ( 1) 00:35:10.977 20.724 - 20.846: 99.1775% ( 1) 00:35:10.977 20.846 - 20.968: 99.1908% ( 1) 00:35:10.977 20.968 - 21.090: 99.2173% ( 2) 00:35:10.977 21.090 - 21.211: 99.2571% ( 3) 00:35:10.977 21.211 - 21.333: 99.2836% ( 2) 00:35:10.977 21.333 - 21.455: 99.3234% ( 3) 00:35:10.977 21.455 - 21.577: 99.4163% ( 7) 00:35:10.977 21.577 - 21.699: 99.4694% ( 4) 00:35:10.977 21.699 - 21.821: 99.5092% ( 3) 00:35:10.977 21.821 - 21.943: 99.5622% ( 4) 00:35:10.977 21.943 - 22.065: 99.6551% ( 7) 00:35:10.977 22.065 - 22.187: 99.6683% ( 1) 00:35:10.977 22.187 - 22.309: 99.6816% ( 1) 00:35:10.977 22.309 - 22.430: 99.7214% ( 3) 00:35:10.977 22.430 - 22.552: 99.7347% ( 1) 00:35:10.977 22.552 - 22.674: 99.7745% ( 3) 00:35:10.977 22.674 - 22.796: 99.8010% ( 2) 00:35:10.977 22.796 - 22.918: 99.8275% ( 2) 00:35:10.977 23.650 - 23.771: 99.8408% ( 1) 00:35:10.977 23.771 - 23.893: 99.8541% ( 1) 00:35:10.977 23.893 - 24.015: 99.8673% ( 1) 00:35:10.977 24.990 - 25.112: 99.8806% ( 1) 00:35:10.977 25.356 - 25.478: 99.8939% ( 1) 00:35:10.977 25.722 - 25.844: 99.9071% ( 1) 00:35:10.977 25.844 - 25.966: 99.9204% ( 1) 00:35:10.977 25.966 - 26.088: 99.9337% ( 1) 00:35:10.977 26.819 - 26.941: 99.9469% ( 1) 00:35:10.978 27.185 - 27.307: 99.9602% ( 1) 00:35:10.978 29.379 - 29.501: 99.9735% ( 1) 00:35:10.978 29.745 - 29.867: 99.9867% ( 1) 00:35:10.978 31.695 - 31.939: 100.0000% ( 1) 00:35:10.978 00:35:10.978 00:35:10.978 real 0m1.309s 00:35:10.978 user 0m1.102s 00:35:10.978 sys 0m0.158s 00:35:10.978 15:29:06 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:10.978 ************************************ 00:35:10.978 END TEST nvme_overhead 00:35:10.978 ************************************ 00:35:10.978 15:29:06 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:35:10.978 15:29:06 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:10.978 15:29:06 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:35:10.978 15:29:06 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:35:10.978 15:29:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:10.978 15:29:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:10.978 ************************************ 00:35:10.978 START TEST nvme_arbitration 00:35:10.978 ************************************ 00:35:10.978 15:29:06 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:35:14.349 Initializing NVMe Controllers 00:35:14.349 Attached to 0000:00:10.0 00:35:14.349 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:35:14.349 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:35:14.349 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:35:14.349 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:35:14.349 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:35:14.349 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:35:14.349 Initialization complete. Launching workers. 
00:35:14.349 Starting thread on core 2 with urgent priority queue 00:35:14.349 Starting thread on core 1 with urgent priority queue 00:35:14.349 Starting thread on core 3 with urgent priority queue 00:35:14.350 Starting thread on core 0 with urgent priority queue 00:35:14.350 QEMU NVMe Ctrl (12340 ) core 0: 6527.67 IO/s 15.32 secs/100000 ios 00:35:14.350 QEMU NVMe Ctrl (12340 ) core 1: 6660.33 IO/s 15.01 secs/100000 ios 00:35:14.350 QEMU NVMe Ctrl (12340 ) core 2: 4514.00 IO/s 22.15 secs/100000 ios 00:35:14.350 QEMU NVMe Ctrl (12340 ) core 3: 4467.33 IO/s 22.38 secs/100000 ios 00:35:14.350 ======================================================== 00:35:14.350 00:35:14.350 00:35:14.350 real 0m3.337s 00:35:14.350 user 0m9.152s 00:35:14.350 sys 0m0.181s 00:35:14.350 15:29:09 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:14.350 15:29:09 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:35:14.350 ************************************ 00:35:14.350 END TEST nvme_arbitration 00:35:14.350 ************************************ 00:35:14.350 15:29:09 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:14.350 15:29:09 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:35:14.350 15:29:09 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:35:14.350 15:29:09 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:14.350 15:29:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:14.350 ************************************ 00:35:14.350 START TEST nvme_single_aen 00:35:14.350 ************************************ 00:35:14.350 15:29:09 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:35:14.608 Asynchronous Event Request test 00:35:14.608 Attached to 0000:00:10.0 00:35:14.608 Reset controller to setup AER completions for this process 00:35:14.608 Registering asynchronous event callbacks... 00:35:14.608 Getting orig temperature thresholds of all controllers 00:35:14.608 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:35:14.608 Setting all controllers temperature threshold low to trigger AER 00:35:14.608 Waiting for all controllers temperature threshold to be set lower 00:35:14.608 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:35:14.608 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:35:14.608 Waiting for all controllers to trigger AER and reset threshold 00:35:14.608 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:35:14.608 Cleaning up... 
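The arbitration summary above prints one IO/s figure per core. A throwaway way to total them, assuming the console output has been captured to a file first (arb.log is a made-up name, not something this job writes):

  # Sum the per-core IO/s values printed by the arbitration example
  awk '/IO\/s/ { for (i = 2; i <= NF; i++) if ($i == "IO/s") total += $(i - 1) }
       END { printf "total: %.2f IO/s\n", total }' arb.log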
00:35:14.608 ************************************ 00:35:14.608 END TEST nvme_single_aen 00:35:14.608 ************************************ 00:35:14.608 00:35:14.608 real 0m0.282s 00:35:14.608 user 0m0.083s 00:35:14.608 sys 0m0.144s 00:35:14.608 15:29:09 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:14.608 15:29:09 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:35:14.608 15:29:09 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:14.608 15:29:09 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:35:14.608 15:29:09 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:14.608 15:29:09 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:14.608 15:29:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:14.608 ************************************ 00:35:14.608 START TEST nvme_doorbell_aers 00:35:14.608 ************************************ 00:35:14.608 15:29:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:35:14.608 15:29:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:35:14.608 15:29:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:35:14.608 15:29:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:35:14.609 15:29:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:35:14.609 15:29:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:35:14.609 15:29:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:35:14.609 15:29:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:14.609 15:29:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:14.609 15:29:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:35:14.867 15:29:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:35:14.867 15:29:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:35:14.867 15:29:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:35:14.867 15:29:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:35:14.867 [2024-07-23 15:29:10.293361] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 129884) is not found. Dropping the request. 00:35:24.855 Executing: test_write_invalid_db 00:35:24.855 Waiting for AER completion... 00:35:24.855 Failure: test_write_invalid_db 00:35:24.855 00:35:24.855 Executing: test_invalid_db_write_overflow_sq 00:35:24.855 Waiting for AER completion... 00:35:24.855 Failure: test_invalid_db_write_overflow_sq 00:35:24.855 00:35:24.855 Executing: test_invalid_db_write_overflow_cq 00:35:24.855 Waiting for AER completion... 
00:35:24.855 Failure: test_invalid_db_write_overflow_cq 00:35:24.855 00:35:24.855 00:35:24.855 real 0m10.108s 00:35:24.855 user 0m7.454s 00:35:24.855 sys 0m2.601s 00:35:24.855 15:29:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:24.855 15:29:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:35:24.855 ************************************ 00:35:24.855 END TEST nvme_doorbell_aers 00:35:24.855 ************************************ 00:35:24.855 15:29:20 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:24.855 15:29:20 nvme -- nvme/nvme.sh@97 -- # uname 00:35:24.855 15:29:20 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:35:24.855 15:29:20 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:35:24.855 15:29:20 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:35:24.855 15:29:20 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:24.855 15:29:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:24.855 ************************************ 00:35:24.855 START TEST nvme_multi_aen 00:35:24.855 ************************************ 00:35:24.855 15:29:20 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:35:25.115 [2024-07-23 15:29:20.416508] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 129884) is not found. Dropping the request. 00:35:25.115 [2024-07-23 15:29:20.416610] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 129884) is not found. Dropping the request. 00:35:25.115 [2024-07-23 15:29:20.416634] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 129884) is not found. Dropping the request. 00:35:25.115 Child process pid: 130055 00:35:25.374 [Child] Asynchronous Event Request test 00:35:25.374 [Child] Attached to 0000:00:10.0 00:35:25.374 [Child] Registering asynchronous event callbacks... 00:35:25.374 [Child] Getting orig temperature thresholds of all controllers 00:35:25.374 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:35:25.374 [Child] Waiting for all controllers to trigger AER and reset threshold 00:35:25.374 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:35:25.374 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:35:25.374 [Child] Cleaning up... 00:35:25.374 Asynchronous Event Request test 00:35:25.374 Attached to 0000:00:10.0 00:35:25.374 Reset controller to setup AER completions for this process 00:35:25.374 Registering asynchronous event callbacks... 00:35:25.374 Getting orig temperature thresholds of all controllers 00:35:25.374 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:35:25.374 Setting all controllers temperature threshold low to trigger AER 00:35:25.374 Waiting for all controllers temperature threshold to be set lower 00:35:25.374 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:35:25.374 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:35:25.374 Waiting for all controllers to trigger AER and reset threshold 00:35:25.374 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:35:25.374 Cleaning up... 
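The doorbell and AER tests above locate their controllers the same way the rest of this run does: gen_nvme.sh emits a bdev config in JSON and jq pulls out each PCI address. A standalone sketch, paths matching the VM layout shown in this log:

  rootdir=/home/vagrant/spdk_repo/spdk
  # Enumerate NVMe PCI addresses the way get_nvme_bdfs does in the trace above
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"    # on this VM the only entry is 0000:00:10.0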
00:35:25.374 00:35:25.374 real 0m0.613s 00:35:25.374 user 0m0.188s 00:35:25.374 sys 0m0.320s 00:35:25.374 15:29:20 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:25.374 15:29:20 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:35:25.374 ************************************ 00:35:25.374 END TEST nvme_multi_aen 00:35:25.374 ************************************ 00:35:25.633 15:29:20 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:25.634 15:29:20 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:35:25.634 15:29:20 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:35:25.634 15:29:20 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:25.634 15:29:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:25.634 ************************************ 00:35:25.634 START TEST nvme_startup 00:35:25.634 ************************************ 00:35:25.634 15:29:20 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:35:25.634 Initializing NVMe Controllers 00:35:25.634 Attached to 0000:00:10.0 00:35:25.634 Initialization complete. 00:35:25.634 Time used:162162.188 (us). 00:35:25.634 ************************************ 00:35:25.634 END TEST nvme_startup 00:35:25.634 ************************************ 00:35:25.634 00:35:25.634 real 0m0.233s 00:35:25.634 user 0m0.072s 00:35:25.634 sys 0m0.123s 00:35:25.634 15:29:21 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:25.634 15:29:21 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:35:25.893 15:29:21 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:25.893 15:29:21 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:35:25.893 15:29:21 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:25.893 15:29:21 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:25.893 15:29:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:25.893 ************************************ 00:35:25.893 START TEST nvme_multi_secondary 00:35:25.893 ************************************ 00:35:25.893 15:29:21 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:35:25.893 15:29:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=130107 00:35:25.893 15:29:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:35:25.893 15:29:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=130108 00:35:25.893 15:29:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:35:25.893 15:29:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:35:29.198 Initializing NVMe Controllers 00:35:29.198 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:29.198 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:35:29.198 Initialization complete. Launching workers. 
00:35:29.198 ======================================================== 00:35:29.198 Latency(us) 00:35:29.198 Device Information : IOPS MiB/s Average min max 00:35:29.198 PCIE (0000:00:10.0) NSID 1 from core 2: 15882.04 62.04 1006.80 172.05 8535.54 00:35:29.198 ======================================================== 00:35:29.199 Total : 15882.04 62.04 1006.80 172.05 8535.54 00:35:29.199 00:35:29.199 15:29:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 130107 00:35:29.457 Initializing NVMe Controllers 00:35:29.457 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:29.457 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:35:29.457 Initialization complete. Launching workers. 00:35:29.457 ======================================================== 00:35:29.457 Latency(us) 00:35:29.457 Device Information : IOPS MiB/s Average min max 00:35:29.457 PCIE (0000:00:10.0) NSID 1 from core 1: 34565.28 135.02 462.49 168.91 1399.60 00:35:29.457 ======================================================== 00:35:29.457 Total : 34565.28 135.02 462.49 168.91 1399.60 00:35:29.457 00:35:31.361 Initializing NVMe Controllers 00:35:31.361 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:31.361 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:35:31.361 Initialization complete. Launching workers. 00:35:31.361 ======================================================== 00:35:31.361 Latency(us) 00:35:31.361 Device Information : IOPS MiB/s Average min max 00:35:31.361 PCIE (0000:00:10.0) NSID 1 from core 0: 42782.97 167.12 373.63 149.40 1667.31 00:35:31.361 ======================================================== 00:35:31.361 Total : 42782.97 167.12 373.63 149.40 1667.31 00:35:31.361 00:35:31.361 15:29:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 130108 00:35:31.361 15:29:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=130172 00:35:31.361 15:29:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:35:31.361 15:29:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=130173 00:35:31.361 15:29:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:35:31.361 15:29:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:35:34.645 Initializing NVMe Controllers 00:35:34.645 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:34.645 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:35:34.645 Initialization complete. Launching workers. 00:35:34.645 ======================================================== 00:35:34.645 Latency(us) 00:35:34.645 Device Information : IOPS MiB/s Average min max 00:35:34.645 PCIE (0000:00:10.0) NSID 1 from core 0: 36730.65 143.48 435.28 160.76 1527.70 00:35:34.645 ======================================================== 00:35:34.645 Total : 36730.65 143.48 435.28 160.76 1527.70 00:35:34.645 00:35:34.903 Initializing NVMe Controllers 00:35:34.903 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:34.903 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:35:34.903 Initialization complete. Launching workers. 
00:35:34.903 ======================================================== 00:35:34.903 Latency(us) 00:35:34.903 Device Information : IOPS MiB/s Average min max 00:35:34.903 PCIE (0000:00:10.0) NSID 1 from core 1: 35784.32 139.78 446.77 163.50 2362.87 00:35:34.903 ======================================================== 00:35:34.903 Total : 35784.32 139.78 446.77 163.50 2362.87 00:35:34.903 00:35:36.805 Initializing NVMe Controllers 00:35:36.805 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:36.805 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:35:36.805 Initialization complete. Launching workers. 00:35:36.805 ======================================================== 00:35:36.805 Latency(us) 00:35:36.805 Device Information : IOPS MiB/s Average min max 00:35:36.805 PCIE (0000:00:10.0) NSID 1 from core 2: 18346.47 71.67 871.52 172.06 8460.31 00:35:36.805 ======================================================== 00:35:36.805 Total : 18346.47 71.67 871.52 172.06 8460.31 00:35:36.805 00:35:36.805 ************************************ 00:35:36.805 END TEST nvme_multi_secondary 00:35:36.805 ************************************ 00:35:36.805 15:29:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 130172 00:35:36.805 15:29:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 130173 00:35:36.805 00:35:36.805 real 0m10.936s 00:35:36.805 user 0m18.489s 00:35:36.805 sys 0m1.073s 00:35:36.805 15:29:32 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:36.805 15:29:32 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:35:36.805 15:29:32 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:36.805 15:29:32 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:35:36.805 15:29:32 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:35:36.805 15:29:32 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/129510 ]] 00:35:36.805 15:29:32 nvme -- common/autotest_common.sh@1088 -- # kill 129510 00:35:36.805 15:29:32 nvme -- common/autotest_common.sh@1089 -- # wait 129510 00:35:36.805 [2024-07-23 15:29:32.116678] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 130054) is not found. Dropping the request. 00:35:36.805 [2024-07-23 15:29:32.116845] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 130054) is not found. Dropping the request. 00:35:36.805 [2024-07-23 15:29:32.116907] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 130054) is not found. Dropping the request. 00:35:36.805 [2024-07-23 15:29:32.116952] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 130054) is not found. Dropping the request. 
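The nvme_multi_secondary section above runs three copies of spdk_nvme_perf against the same controller, pinned to different cores and all passing -i 0 so they land in the same shared-memory group (that reading of -i is mine; the log only shows the flag). Condensed into a sketch with the exact flags from the trace:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # long-running process on core 0
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # second process on core 1
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # third process on core 2, foreground
  wait "$pid0" "$pid1"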
00:35:37.063 15:29:32 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:35:37.063 15:29:32 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:35:37.063 15:29:32 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:35:37.063 15:29:32 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:37.063 15:29:32 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:37.063 15:29:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:37.063 ************************************ 00:35:37.063 START TEST bdev_nvme_reset_stuck_adm_cmd 00:35:37.063 ************************************ 00:35:37.063 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:35:37.063 * Looking for test storage... 00:35:37.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:35:37.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=130314 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 130314 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 130314 ']' 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:37.064 15:29:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:35:37.322 [2024-07-23 15:29:32.502475] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:35:37.322 [2024-07-23 15:29:32.502919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130314 ] 00:35:37.322 [2024-07-23 15:29:32.692219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:37.322 [2024-07-23 15:29:32.754213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.322 [2024-07-23 15:29:32.754386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:37.322 [2024-07-23 15:29:32.754472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.322 [2024-07-23 15:29:32.754557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:35:38.282 nvme0n1 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_EVXxf.txt 00:35:38.282 15:29:33 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:35:38.282 true 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721748573 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=130333 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:35:38.282 15:29:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:35:40.186 [2024-07-23 15:29:35.470786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:35:40.186 [2024-07-23 15:29:35.471196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:40.186 [2024-07-23 15:29:35.471237] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:35:40.186 [2024-07-23 15:29:35.471254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.186 [2024-07-23 15:29:35.473138] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
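The reset-stuck-admin-command flow above is all JSON-RPC against the spdk_tgt started earlier. Collected into one sketch (method names and flags are copied from the trace; the command blob is rebuilt from its description rather than copied, so treat it as an approximation):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # Hold the next GET FEATURES (opc 10) admin command and fail it with sct=0 sc=1
  $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # 64-byte admin command: opcode 0x0a at byte 0, cdw10=7 (Number of Queues) at byte offset 40
  CMD_B64=$({ printf '\x0a'; head -c 39 /dev/zero; printf '\x07'; head -c 23 /dev/zero; } | base64 -w0)
  $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$CMD_B64" &   # blocks on the injected error
  $RPC bdev_nvme_reset_controller nvme0                              # reset while the command is stuck
  wait
  $RPC bdev_nvme_detach_controller nvme0

The status the stuck command comes back with is checked by decoding the base64 completion; the value read back below decodes to a 16-byte completion entry whose status field is 0x0002, i.e. SC=0x1 and SCT=0x0, matching the injected pair:

  printf '%s' 'AAAAAAAAAAAAAAAAAAACAA==' | base64 -d | hexdump -ve '/1 "0x%02x\n"'
  # only byte 14 is non-zero (0x02); bytes 14-15 carry phase plus status in the completion entry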
00:35:40.186 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 130333 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 130333 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 130333 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_EVXxf.txt 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:35:40.186 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_EVXxf.txt 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 130314 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 130314 ']' 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 130314 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130314 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:40.187 killing process with pid 130314 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130314' 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 130314 00:35:40.187 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 130314 00:35:40.755 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:35:40.755 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:35:40.755 ************************************ 00:35:40.755 END TEST bdev_nvme_reset_stuck_adm_cmd 00:35:40.755 ************************************ 00:35:40.755 00:35:40.755 real 0m3.728s 00:35:40.755 user 0m13.077s 00:35:40.755 sys 0m0.666s 00:35:40.755 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:40.755 15:29:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:35:40.755 15:29:36 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:40.755 15:29:36 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:35:40.755 15:29:36 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:35:40.755 15:29:36 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:40.755 15:29:36 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:40.755 15:29:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:40.755 ************************************ 00:35:40.755 START TEST nvme_fio 00:35:40.755 ************************************ 00:35:40.755 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:35:40.755 15:29:36 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:35:40.755 15:29:36 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:35:40.755 15:29:36 nvme.nvme_fio -- 
nvme/nvme.sh@33 -- # get_nvme_bdfs 00:35:40.755 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:35:40.755 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:35:40.755 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:40.755 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:40.755 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:35:40.755 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:35:40.755 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:35:40.755 15:29:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:35:40.755 15:29:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:35:40.755 15:29:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:35:40.755 15:29:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:35:40.755 15:29:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:35:41.014 15:29:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:35:41.014 15:29:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:35:41.273 15:29:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:35:41.273 15:29:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:35:41.273 15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:35:41.273 
15:29:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:35:41.532 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:35:41.533 fio-3.35 00:35:41.533 Starting 1 thread 00:35:44.817 00:35:44.817 test: (groupid=0, jobs=1): err= 0: pid=130452: Tue Jul 23 15:29:39 2024 00:35:44.817 read: IOPS=18.3k, BW=71.5MiB/s (75.0MB/s)(143MiB/2001msec) 00:35:44.817 slat (usec): min=4, max=797, avg= 5.85, stdev= 4.90 00:35:44.817 clat (usec): min=240, max=9107, avg=3481.16, stdev=461.72 00:35:44.817 lat (usec): min=245, max=9218, avg=3487.01, stdev=462.44 00:35:44.817 clat percentiles (usec): 00:35:44.817 | 1.00th=[ 2999], 5.00th=[ 3130], 10.00th=[ 3163], 20.00th=[ 3228], 00:35:44.817 | 30.00th=[ 3261], 40.00th=[ 3294], 50.00th=[ 3326], 60.00th=[ 3359], 00:35:44.817 | 70.00th=[ 3458], 80.00th=[ 3884], 90.00th=[ 4047], 95.00th=[ 4146], 00:35:44.817 | 99.00th=[ 4621], 99.50th=[ 6128], 99.90th=[ 7635], 99.95th=[ 8029], 00:35:44.817 | 99.99th=[ 8979] 00:35:44.817 bw ( KiB/s): min=73488, max=77120, per=100.00%, avg=75552.00, stdev=1866.11, samples=3 00:35:44.817 iops : min=18372, max=19280, avg=18888.00, stdev=466.53, samples=3 00:35:44.817 write: IOPS=18.3k, BW=71.5MiB/s (74.9MB/s)(143MiB/2001msec); 0 zone resets 00:35:44.817 slat (usec): min=4, max=188, avg= 5.96, stdev= 2.18 00:35:44.817 clat (usec): min=290, max=8997, avg=3489.51, stdev=467.54 00:35:44.817 lat (usec): min=296, max=9024, avg=3495.46, stdev=468.16 00:35:44.817 clat percentiles (usec): 00:35:44.817 | 1.00th=[ 2999], 5.00th=[ 3130], 10.00th=[ 3163], 20.00th=[ 3228], 00:35:44.817 | 30.00th=[ 3261], 40.00th=[ 3294], 50.00th=[ 3326], 60.00th=[ 3392], 00:35:44.817 | 70.00th=[ 3490], 80.00th=[ 3884], 90.00th=[ 4047], 95.00th=[ 4146], 00:35:44.817 | 99.00th=[ 4686], 99.50th=[ 6456], 99.90th=[ 7832], 99.95th=[ 8225], 00:35:44.817 | 99.99th=[ 8848] 00:35:44.817 bw ( KiB/s): min=73448, max=77264, per=100.00%, avg=75586.67, stdev=1949.38, samples=3 00:35:44.817 iops : min=18362, max=19316, avg=18896.67, stdev=487.35, samples=3 00:35:44.817 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:35:44.817 lat (msec) : 2=0.17%, 4=87.17%, 10=12.62% 00:35:44.817 cpu : usr=99.30%, sys=0.55%, ctx=38, majf=0, minf=626 00:35:44.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:35:44.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:44.817 issued rwts: total=36619,36614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.817 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:44.817 00:35:44.817 Run status group 0 (all jobs): 00:35:44.817 READ: bw=71.5MiB/s (75.0MB/s), 71.5MiB/s-71.5MiB/s (75.0MB/s-75.0MB/s), io=143MiB (150MB), run=2001-2001msec 00:35:44.817 WRITE: bw=71.5MiB/s (74.9MB/s), 71.5MiB/s-71.5MiB/s (74.9MB/s-74.9MB/s), io=143MiB (150MB), run=2001-2001msec 00:35:45.076 ----------------------------------------------------- 00:35:45.076 Suppressions used: 00:35:45.076 count bytes template 00:35:45.076 1 32 /usr/src/fio/parse.c 00:35:45.076 ----------------------------------------------------- 00:35:45.076 00:35:45.076 ************************************ 00:35:45.076 END TEST nvme_fio 00:35:45.076 ************************************ 00:35:45.076 15:29:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 
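The fio pass above does not go through the kernel nvme driver: SPDK's fio plugin is preloaded as an external ioengine and the job's filename encodes the PCIe address, with dots in place of colons because fio treats ':' as a filename separator. Re-running it by hand would look roughly like this, paths as in this log (the run above additionally preloads libasan because the build is ASan-instrumented):

  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  CONFIG=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
  # ioengine=spdk comes from the job file; the plugin must be preloaded so fio can resolve it
  LD_PRELOAD="$PLUGIN" /usr/src/fio/fio "$CONFIG" \
      '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096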
00:35:45.076 15:29:40 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:35:45.076 00:35:45.076 real 0m4.302s 00:35:45.076 user 0m3.446s 00:35:45.076 sys 0m0.505s 00:35:45.076 15:29:40 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:45.076 15:29:40 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:35:45.076 15:29:40 nvme -- common/autotest_common.sh@1142 -- # return 0 00:35:45.076 00:35:45.076 real 0m44.629s 00:35:45.076 user 1m56.989s 00:35:45.076 sys 0m10.550s 00:35:45.076 ************************************ 00:35:45.076 END TEST nvme 00:35:45.076 ************************************ 00:35:45.076 15:29:40 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:45.076 15:29:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:35:45.076 15:29:40 -- common/autotest_common.sh@1142 -- # return 0 00:35:45.076 15:29:40 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:35:45.076 15:29:40 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:35:45.076 15:29:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:45.076 15:29:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:45.076 15:29:40 -- common/autotest_common.sh@10 -- # set +x 00:35:45.076 ************************************ 00:35:45.076 START TEST nvme_scc 00:35:45.076 ************************************ 00:35:45.076 15:29:40 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:35:45.335 * Looking for test storage... 00:35:45.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:35:45.335 15:29:40 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:35:45.335 15:29:40 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:35:45.335 15:29:40 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:35:45.335 15:29:40 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:35:45.335 15:29:40 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:45.335 15:29:40 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:45.335 15:29:40 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:45.335 15:29:40 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:45.335 15:29:40 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:45.335 15:29:40 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:45.335 
15:29:40 nvme_scc -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:45.335 15:29:40 nvme_scc -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:45.335 15:29:40 nvme_scc -- paths/export.sh@6 -- # export PATH 00:35:45.335 15:29:40 nvme_scc -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:45.335 15:29:40 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:35:45.335 15:29:40 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:35:45.335 15:29:40 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:35:45.335 15:29:40 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:35:45.335 15:29:40 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:35:45.335 15:29:40 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:35:45.335 15:29:40 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:35:45.335 15:29:40 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:35:45.335 15:29:40 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:35:45.335 15:29:40 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:45.335 15:29:40 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:35:45.335 15:29:40 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:35:45.335 15:29:40 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:35:45.335 15:29:40 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:45.594 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:35:45.594 Waiting for block devices as requested 00:35:45.594 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:45.855 15:29:41 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:35:45.855 15:29:41 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:35:45.855 15:29:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:35:45.855 15:29:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:35:45.855 15:29:41 nvme_scc -- nvme/functions.sh@49 -- # 
pci=0000:00:10.0 00:35:45.855 15:29:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:35:45.855 15:29:41 nvme_scc -- scripts/common.sh@15 -- # local i 00:35:45.855 15:29:41 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:35:45.855 15:29:41 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:35:45.855 15:29:41 nvme_scc -- scripts/common.sh@24 -- # return 0 00:35:45.855 15:29:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:35:45.855 15:29:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:35:45.855 15:29:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:35:45.855 15:29:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:35:45.855 15:29:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:35:45.855 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.855 15:29:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[rrls]="0"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0[oacs]=0x12a 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:35:45.856 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 373 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 
00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.857 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:35:45.858 15:29:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 
15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme0[icdoff]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.858 15:29:41 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:35:45.859 15:29:41 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:35:45.859 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme0 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:35:45.860 15:29:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:35:45.860 15:29:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:35:45.861 15:29:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:35:45.861 15:29:41 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:35:45.861 15:29:41 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:35:45.861 15:29:41 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:35:45.861 15:29:41 nvme_scc -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:35:45.861 15:29:41 nvme_scc -- nvme/functions.sh@206 -- # echo nvme0 00:35:45.861 15:29:41 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:35:45.861 15:29:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:35:45.861 15:29:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:35:45.861 15:29:41 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:46.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:35:46.429 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:35:47.365 15:29:42 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:35:47.365 15:29:42 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:35:47.365 15:29:42 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:47.365 15:29:42 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:35:47.365 ************************************ 00:35:47.365 START TEST nvme_simple_copy 00:35:47.365 ************************************ 00:35:47.365 15:29:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:35:47.624 Initializing NVMe Controllers 00:35:47.624 Attaching to 0000:00:10.0 00:35:47.625 Controller supports SCC. Attached to 0000:00:10.0 00:35:47.625 Namespace ID: 1 size: 5GB 00:35:47.625 Initialization complete. 00:35:47.625 00:35:47.625 Controller QEMU NVMe Ctrl (12340 ) 00:35:47.625 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:35:47.625 Namespace Block Size:4096 00:35:47.625 Writing LBAs 0 to 63 with Random Data 00:35:47.625 Copied LBAs from 0 - 63 to the Destination LBA 256 00:35:47.625 LBAs matching Written Data: 64 00:35:47.625 ************************************ 00:35:47.625 END TEST nvme_simple_copy 00:35:47.625 ************************************ 00:35:47.625 00:35:47.625 real 0m0.288s 00:35:47.625 user 0m0.099s 00:35:47.625 sys 0m0.089s 00:35:47.625 15:29:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:47.625 15:29:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:35:47.625 15:29:43 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:35:47.625 00:35:47.625 real 0m2.591s 00:35:47.625 user 0m0.622s 00:35:47.625 sys 0m1.903s 00:35:47.625 15:29:43 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:47.625 15:29:43 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:35:47.625 ************************************ 00:35:47.625 END TEST nvme_scc 00:35:47.625 ************************************ 00:35:47.884 15:29:43 -- common/autotest_common.sh@1142 -- # return 0 00:35:47.884 15:29:43 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:35:47.884 15:29:43 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:35:47.884 15:29:43 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:35:47.884 15:29:43 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:35:47.884 15:29:43 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:35:47.884 15:29:43 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:35:47.884 15:29:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:47.884 15:29:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:47.884 15:29:43 -- common/autotest_common.sh@10 -- # set +x 00:35:47.884 ************************************ 00:35:47.884 START TEST nvme_rpc 00:35:47.884 ************************************ 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:35:47.884 * Looking for test storage... 
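Note on the controller selection traced just above: get_ctrls_with_feature gates on ONCS bit 8 (Simple Copy support). get_oncs returns the cached value (0x15d here) and ctrl_has_scc tests the bit before nvme_scc.sh picks nvme0. A minimal stand-alone equivalent of that check using nvme-cli directly, assuming the controller is visible as /dev/nvme0 (the device path and the nvme-cli output layout are assumptions, not taken from this log):

    #!/usr/bin/env bash
    # Read the ONCS (Optional NVM Command Support) field from Identify Controller.
    # nvme-cli prints it as a line such as "oncs      : 0x15d".
    oncs=$(nvme id-ctrl /dev/nvme0 | awk '/^oncs/ {print $3}')
    [[ -n "$oncs" ]] || { echo "could not read ONCS" >&2; exit 1; }
    # Bit 8 of ONCS advertises the Simple Copy command; 0x15d has it set.
    if (( oncs & (1 << 8) )); then
        echo "/dev/nvme0 supports Simple Copy (SCC)"
    else
        echo "/dev/nvme0 does not support Simple Copy" >&2
        exit 1
    fi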
00:35:47.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:35:47.884 15:29:43 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:47.884 15:29:43 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:35:47.884 15:29:43 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:35:47.884 15:29:43 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=130890 00:35:47.884 15:29:43 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:35:47.884 15:29:43 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:35:47.884 15:29:43 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 130890 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 130890 ']' 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:47.884 15:29:43 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:48.143 [2024-07-23 15:29:43.346290] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
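Note on get_first_nvme_bdf traced above: the helper (autotest_common.sh@1514) builds its BDF list by piping the JSON emitted by scripts/gen_nvme.sh through jq. A condensed sketch of the same lookup, assuming it runs from an SPDK checkout; the SPDK_ROOT variable below is illustrative, not part of the test scripts:

    #!/usr/bin/env bash
    # Assumed location of the SPDK repository; adjust to your checkout.
    SPDK_ROOT=${SPDK_ROOT:-/home/vagrant/spdk_repo/spdk}
    # gen_nvme.sh emits one bdev_nvme_attach_controller entry per local NVMe
    # device; .config[].params.traddr holds each PCI address (BDF).
    mapfile -t bdfs < <("$SPDK_ROOT/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
    # The rpc test only needs the first controller, e.g. 0000:00:10.0 in this run.
    echo "first NVMe bdf: ${bdfs[0]}"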
00:35:48.143 [2024-07-23 15:29:43.346491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130890 ] 00:35:48.143 [2024-07-23 15:29:43.500816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:48.143 [2024-07-23 15:29:43.558085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.143 [2024-07-23 15:29:43.558201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.089 15:29:44 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:49.089 15:29:44 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:35:49.089 15:29:44 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:35:49.348 Nvme0n1 00:35:49.348 15:29:44 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:35:49.348 15:29:44 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:35:49.606 request: 00:35:49.606 { 00:35:49.606 "bdev_name": "Nvme0n1", 00:35:49.606 "filename": "non_existing_file", 00:35:49.606 "method": "bdev_nvme_apply_firmware", 00:35:49.606 "req_id": 1 00:35:49.606 } 00:35:49.606 Got JSON-RPC error response 00:35:49.606 response: 00:35:49.606 { 00:35:49.606 "code": -32603, 00:35:49.606 "message": "open file failed." 00:35:49.606 } 00:35:49.606 15:29:44 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:35:49.606 15:29:44 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:35:49.606 15:29:44 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:35:49.606 15:29:45 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:35:49.607 15:29:45 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 130890 00:35:49.607 15:29:45 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 130890 ']' 00:35:49.607 15:29:45 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 130890 00:35:49.607 15:29:45 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:35:49.607 15:29:45 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:49.607 15:29:45 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130890 00:35:49.607 15:29:45 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:49.607 15:29:45 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:49.607 15:29:45 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130890' 00:35:49.607 killing process with pid 130890 00:35:49.607 15:29:45 nvme_rpc -- common/autotest_common.sh@967 -- # kill 130890 00:35:49.607 15:29:45 nvme_rpc -- common/autotest_common.sh@972 -- # wait 130890 00:35:50.174 00:35:50.174 real 0m2.336s 00:35:50.174 user 0m4.561s 00:35:50.174 sys 0m0.638s 00:35:50.174 15:29:45 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:50.174 15:29:45 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:50.174 ************************************ 00:35:50.174 END TEST nvme_rpc 00:35:50.174 ************************************ 00:35:50.174 15:29:45 -- common/autotest_common.sh@1142 -- # return 0 00:35:50.174 15:29:45 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts 
/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:35:50.174 15:29:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:50.174 15:29:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:50.174 15:29:45 -- common/autotest_common.sh@10 -- # set +x 00:35:50.174 ************************************ 00:35:50.174 START TEST nvme_rpc_timeouts 00:35:50.174 ************************************ 00:35:50.174 15:29:45 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:35:50.174 * Looking for test storage... 00:35:50.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:35:50.174 15:29:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:50.174 15:29:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_130943 00:35:50.174 15:29:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_130943 00:35:50.174 15:29:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=130967 00:35:50.174 15:29:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:35:50.174 15:29:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:35:50.174 15:29:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 130967 00:35:50.174 15:29:45 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 130967 ']' 00:35:50.174 15:29:45 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:50.174 15:29:45 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:50.174 15:29:45 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:50.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:50.174 15:29:45 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:50.174 15:29:45 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:35:50.433 [2024-07-23 15:29:45.680364] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:35:50.433 [2024-07-23 15:29:45.680568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130967 ] 00:35:50.433 [2024-07-23 15:29:45.833276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:50.692 [2024-07-23 15:29:45.880684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:50.692 [2024-07-23 15:29:45.880854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:51.259 15:29:46 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:51.259 15:29:46 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:35:51.259 15:29:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:35:51.259 Checking default timeout settings: 00:35:51.259 15:29:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:35:51.518 Making settings changes with rpc: 00:35:51.518 15:29:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:35:51.518 15:29:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:35:51.777 Check default vs. modified settings: 00:35:51.777 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:35:51.777 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_130943 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_130943 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:35:52.035 Setting action_on_timeout is changed as expected. 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_130943 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_130943 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:35:52.035 Setting timeout_us is changed as expected. 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_130943 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_130943 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:35:52.035 Setting timeout_admin_us is changed as expected. 00:35:52.035 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
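For reference, the default-vs-modified comparison above boils down to saving the configuration before and after bdev_nvme_set_options and extracting the same three fields from both snapshots. A condensed sketch under the same assumptions (the rpc.py path, option values, and grep/awk/sed extraction come from the trace; the tmpfile names are simply the ones used in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  before=/tmp/settings_default_130943
  after=/tmp/settings_modified_130943

  $rpc save_config > "$before"
  $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > "$after"

  for setting in action_on_timeout timeout_us timeout_admin_us; do
    # Same extraction as the trace: take the value column and strip punctuation.
    old=$(grep "$setting" "$before" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    new=$(grep "$setting" "$after"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [ "$old" != "$new" ] && echo "Setting $setting is changed as expected."
  done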
00:35:52.036 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:35:52.036 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_130943 /tmp/settings_modified_130943 00:35:52.036 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 130967 00:35:52.036 15:29:47 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 130967 ']' 00:35:52.036 15:29:47 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 130967 00:35:52.036 15:29:47 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:35:52.036 15:29:47 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:52.036 15:29:47 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130967 00:35:52.295 15:29:47 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:52.295 15:29:47 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:52.295 15:29:47 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130967' 00:35:52.295 killing process with pid 130967 00:35:52.295 15:29:47 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 130967 00:35:52.295 15:29:47 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 130967 00:35:52.554 RPC TIMEOUT SETTING TEST PASSED. 00:35:52.554 15:29:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:35:52.554 00:35:52.554 real 0m2.394s 00:35:52.554 user 0m4.685s 00:35:52.554 sys 0m0.671s 00:35:52.554 15:29:47 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:52.554 15:29:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:35:52.554 ************************************ 00:35:52.554 END TEST nvme_rpc_timeouts 00:35:52.554 ************************************ 00:35:52.554 15:29:47 -- common/autotest_common.sh@1142 -- # return 0 00:35:52.554 15:29:47 -- spdk/autotest.sh@243 -- # uname -s 00:35:52.554 15:29:47 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:35:52.554 15:29:47 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:35:52.554 15:29:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:52.554 15:29:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:52.554 15:29:47 -- common/autotest_common.sh@10 -- # set +x 00:35:52.554 ************************************ 00:35:52.554 START TEST sw_hotplug 00:35:52.554 ************************************ 00:35:52.554 15:29:47 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:35:52.813 * Looking for test storage... 
00:35:52.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:35:52.813 15:29:48 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:53.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:35:53.071 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:54.007 15:29:49 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:35:54.007 15:29:49 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:35:54.007 15:29:49 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:35:54.007 15:29:49 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@230 -- # local class 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@15 -- # local i 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:54.007 15:29:49 sw_hotplug -- 
scripts/common.sh@325 -- # (( 1 )) 00:35:54.007 15:29:49 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:35:54.007 15:29:49 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=1 00:35:54.007 15:29:49 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:35:54.007 15:29:49 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:54.266 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:35:54.266 Waiting for block devices as requested 00:35:54.266 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:54.525 15:29:49 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED=0000:00:10.0 00:35:54.525 15:29:49 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:54.783 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:35:54.783 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:35:55.041 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:35:55.980 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:35:55.980 15:29:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:55.980 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:35:55.980 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:35:55.980 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=131467 00:35:55.980 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning 00:35:55.980 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:35:55.980 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:35:55.980 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:35:55.980 15:29:51 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:35:55.980 15:29:51 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:35:55.980 15:29:51 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:35:55.980 15:29:51 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:35:55.980 15:29:51 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:35:55.980 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:35:55.980 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:35:55.980 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:35:55.980 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:35:55.980 15:29:51 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:35:56.242 Initializing NVMe Controllers 00:35:56.242 Attaching to 0000:00:10.0 00:35:56.242 Attached to 0000:00:10.0 00:35:56.242 Initialization complete. Starting I/O... 
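The hotplug pass that follows is driven by the example binary started above (build/examples/hotplug -i 0 -t 0 -n 3 -r 3) while a helper repeatedly removes and re-adds the device through sysfs. xtrace does not show the redirect targets of the echo commands traced below, so the paths and ordering in this sketch are assumptions based on the standard Linux PCI sysfs interface, not a verbatim copy of sw_hotplug.sh:

  bdf=0000:00:10.0                                 # the one controller handled in this run

  # Surprise-remove the device (assumed target of the "echo 1" traced below).
  echo 1 > /sys/bus/pci/devices/$bdf/remove

  # Bring it back and point it at uio_pci_generic again (assumed targets for the
  # "echo uio_pci_generic" / "echo 0000:00:10.0" lines in the trace; the script's
  # actual order may differ).
  echo 1 > /sys/bus/pci/rescan
  echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override
  echo "$bdf" > /sys/bus/pci/drivers_probe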
00:35:56.242 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:35:56.242 00:35:57.178 QEMU NVMe Ctrl (12340 ): 1939 I/Os completed (+1939) 00:35:57.178 00:35:58.156 QEMU NVMe Ctrl (12340 ): 4711 I/Os completed (+2772) 00:35:58.156 00:35:59.109 QEMU NVMe Ctrl (12340 ): 7907 I/Os completed (+3196) 00:35:59.109 00:36:00.043 QEMU NVMe Ctrl (12340 ): 11123 I/Os completed (+3216) 00:36:00.043 00:36:01.419 QEMU NVMe Ctrl (12340 ): 14363 I/Os completed (+3240) 00:36:01.419 00:36:01.985 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:36:01.985 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:01.985 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:01.985 [2024-07-23 15:29:57.275410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:36:01.985 Controller removed: QEMU NVMe Ctrl (12340 ) 00:36:01.985 [2024-07-23 15:29:57.276764] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:01.985 [2024-07-23 15:29:57.276839] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:01.985 [2024-07-23 15:29:57.276861] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:01.985 [2024-07-23 15:29:57.276882] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:01.985 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:36:01.985 [2024-07-23 15:29:57.281355] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:01.985 [2024-07-23 15:29:57.281399] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:01.985 [2024-07-23 15:29:57.281417] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:01.985 [2024-07-23 15:29:57.281436] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:01.985 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:36:01.985 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:36:01.985 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:36:01.985 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:36:01.985 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:36:02.243 00:36:02.243 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:36:02.243 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:36:02.243 15:29:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:36:02.243 Attaching to 0000:00:10.0 00:36:02.243 Attached to 0000:00:10.0 00:36:03.179 QEMU NVMe Ctrl (12340 ): 2916 I/Os completed (+2916) 00:36:03.179 00:36:04.114 QEMU NVMe Ctrl (12340 ): 6140 I/Os completed (+3224) 00:36:04.114 00:36:05.050 QEMU NVMe Ctrl (12340 ): 9248 I/Os completed (+3108) 00:36:05.050 00:36:06.428 QEMU NVMe Ctrl (12340 ): 12436 I/Os completed (+3188) 00:36:06.428 00:36:07.366 QEMU NVMe Ctrl (12340 ): 15580 I/Os completed (+3144) 00:36:07.366 00:36:08.301 QEMU NVMe Ctrl (12340 ): 18776 I/Os completed (+3196) 00:36:08.301 00:36:08.301 15:30:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:36:08.301 15:30:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:36:08.301 15:30:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:08.301 15:30:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:08.301 [2024-07-23 
15:30:03.564931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:36:08.301 Controller removed: QEMU NVMe Ctrl (12340 ) 00:36:08.301 [2024-07-23 15:30:03.566110] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:08.301 [2024-07-23 15:30:03.566153] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:08.301 [2024-07-23 15:30:03.566174] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:08.301 [2024-07-23 15:30:03.566191] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:08.301 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:36:08.301 [2024-07-23 15:30:03.568024] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:08.301 [2024-07-23 15:30:03.568052] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:08.301 [2024-07-23 15:30:03.568075] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:08.301 [2024-07-23 15:30:03.568091] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:08.301 15:30:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:36:08.301 15:30:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:36:08.301 15:30:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:36:08.301 15:30:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:36:08.301 15:30:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:36:08.560 15:30:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:36:08.560 15:30:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:36:08.560 15:30:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:36:08.560 Attaching to 0000:00:10.0 00:36:08.560 Attached to 0000:00:10.0 00:36:09.161 QEMU NVMe Ctrl (12340 ): 1952 I/Os completed (+1952) 00:36:09.161 00:36:10.100 QEMU NVMe Ctrl (12340 ): 5160 I/Os completed (+3208) 00:36:10.100 00:36:11.038 QEMU NVMe Ctrl (12340 ): 8396 I/Os completed (+3236) 00:36:11.038 00:36:12.418 QEMU NVMe Ctrl (12340 ): 11618 I/Os completed (+3222) 00:36:12.418 00:36:13.356 QEMU NVMe Ctrl (12340 ): 14850 I/Os completed (+3232) 00:36:13.356 00:36:14.294 QEMU NVMe Ctrl (12340 ): 18074 I/Os completed (+3224) 00:36:14.294 00:36:14.554 15:30:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:36:14.554 15:30:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:36:14.554 15:30:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:14.554 15:30:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:14.554 [2024-07-23 15:30:09.861957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:36:14.554 Controller removed: QEMU NVMe Ctrl (12340 ) 00:36:14.554 [2024-07-23 15:30:09.863203] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:14.554 [2024-07-23 15:30:09.863249] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:14.554 [2024-07-23 15:30:09.863269] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:14.554 [2024-07-23 15:30:09.863293] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:14.554 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:36:14.554 [2024-07-23 15:30:09.865132] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:14.554 [2024-07-23 15:30:09.865166] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:14.554 [2024-07-23 15:30:09.865182] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:14.554 [2024-07-23 15:30:09.865201] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:14.554 15:30:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:36:14.554 15:30:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:36:14.813 15:30:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:36:14.813 15:30:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:36:14.813 15:30:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:36:14.813 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:36:14.813 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:36:14.813 15:30:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:36:14.813 Attaching to 0000:00:10.0 00:36:14.813 Attached to 0000:00:10.0 00:36:14.813 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:36:14.813 [2024-07-23 15:30:10.124223] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:36:21.380 15:30:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:36:21.380 15:30:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:36:21.380 15:30:16 sw_hotplug -- common/autotest_common.sh@715 -- # time=24.85 00:36:21.380 15:30:16 sw_hotplug -- common/autotest_common.sh@716 -- # echo 24.85 00:36:21.380 15:30:16 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:36:21.380 15:30:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=24.85 00:36:21.380 15:30:16 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 24.85 1 00:36:21.380 remove_attach_helper took 24.85s to complete (handling 1 nvme drive(s)) 15:30:16 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:36:27.950 15:30:22 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 131467 00:36:27.950 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (131467) - No such process 00:36:27.950 15:30:22 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 131467 00:36:27.950 15:30:22 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:36:27.950 15:30:22 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:36:27.950 15:30:22 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:36:27.950 15:30:22 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=131807 00:36:27.951 15:30:22 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 
'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:36:27.951 15:30:22 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:27.951 15:30:22 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 131807 00:36:27.951 15:30:22 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 131807 ']' 00:36:27.951 15:30:22 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:27.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:27.951 15:30:22 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:27.951 15:30:22 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:27.951 15:30:22 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:27.951 15:30:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:27.951 [2024-07-23 15:30:22.218538] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:36:27.951 [2024-07-23 15:30:22.218729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131807 ] 00:36:27.951 [2024-07-23 15:30:22.377446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.951 [2024-07-23 15:30:22.431575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:27.951 15:30:23 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:27.951 15:30:23 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:36:27.951 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:36:27.951 15:30:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.951 15:30:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:27.951 15:30:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.951 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:36:27.951 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:36:27.951 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:36:27.951 15:30:23 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:36:27.951 15:30:23 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:36:27.951 15:30:23 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:36:27.951 15:30:23 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:36:27.951 15:30:23 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:36:27.951 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:36:27.951 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:36:27.951 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:36:27.951 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:36:27.951 15:30:23 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:34.513 15:30:29 
sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:34.513 15:30:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.513 15:30:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:34.513 15:30:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:36:34.513 [2024-07-23 15:30:29.208347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:36:34.513 [2024-07-23 15:30:29.210093] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:34.513 [2024-07-23 15:30:29.210135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:36:34.513 [2024-07-23 15:30:29.210157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.513 [2024-07-23 15:30:29.210190] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:34.513 [2024-07-23 15:30:29.210202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:36:34.513 [2024-07-23 15:30:29.210218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.513 [2024-07-23 15:30:29.210231] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:34.513 [2024-07-23 15:30:29.210247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:36:34.513 [2024-07-23 15:30:29.210263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.513 [2024-07-23 15:30:29.210279] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:34.513 [2024-07-23 15:30:29.210290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:36:34.513 [2024-07-23 15:30:29.210305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:34.513 15:30:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.513 15:30:29 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:34.513 15:30:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:36:34.513 15:30:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:36:41.100 15:30:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:36:41.100 15:30:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:36:41.100 15:30:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:36:41.100 15:30:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:41.100 15:30:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:41.100 15:30:35 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.100 15:30:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:41.100 15:30:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:41.100 15:30:35 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.100 15:30:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:36:41.100 15:30:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:36:41.100 15:30:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:41.100 15:30:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:41.100 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:36:41.100 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:36:41.100 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:36:41.100 [2024-07-23 15:30:36.008345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:36:41.100 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:41.100 [2024-07-23 15:30:36.010182] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:41.100 [2024-07-23 15:30:36.010224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.100 [2024-07-23 15:30:36.010246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.100 [2024-07-23 15:30:36.010266] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:41.100 [2024-07-23 15:30:36.010282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.100 [2024-07-23 15:30:36.010295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.100 [2024-07-23 15:30:36.010312] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:41.100 [2024-07-23 15:30:36.010324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.100 [2024-07-23 15:30:36.010339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.100 [2024-07-23 15:30:36.010353] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:41.100 [2024-07-23 15:30:36.010371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:36:41.100 [2024-07-23 15:30:36.010384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:41.100 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:41.101 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:41.101 15:30:36 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.101 15:30:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:41.101 15:30:36 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.101 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:36:41.101 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:36:41.101 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:36:41.101 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:36:41.101 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:36:41.101 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:36:41.101 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:36:41.101 15:30:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:47.692 15:30:42 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:47.692 15:30:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.692 15:30:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:47.692 15:30:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:47.692 [2024-07-23 15:30:42.308404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:36:47.692 [2024-07-23 15:30:42.310311] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:47.692 [2024-07-23 15:30:42.310357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:36:47.692 [2024-07-23 15:30:42.310376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:47.692 [2024-07-23 15:30:42.310397] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:47.692 [2024-07-23 15:30:42.310410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:36:47.692 [2024-07-23 15:30:42.310425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:47.692 [2024-07-23 15:30:42.310437] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:47.692 [2024-07-23 15:30:42.310451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:36:47.692 [2024-07-23 15:30:42.310463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:47.692 [2024-07-23 15:30:42.310478] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:47.692 [2024-07-23 15:30:42.310489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:36:47.692 [2024-07-23 15:30:42.310503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:47.692 15:30:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.692 15:30:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:47.692 15:30:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:36:47.692 
15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:36:47.692 15:30:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@715 -- # time=25.53 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@716 -- # echo 25.53 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=25.53 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 25.53 1 00:36:54.271 remove_attach_helper took 25.53s to complete (handling 1 nvme drive(s)) 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:36:54.271 15:30:48 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:36:54.271 15:30:48 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:36:54.271 15:30:48 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:36:59.541 15:30:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:36:59.541 15:30:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:36:59.542 15:30:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:36:59.542 15:30:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:36:59.542 15:30:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:36:59.542 15:30:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:36:59.542 15:30:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:36:59.542 15:30:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:36:59.542 15:30:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:36:59.542 15:30:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:59.542 15:30:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:36:59.542 [2024-07-23 15:30:54.766727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:36:59.542 [2024-07-23 15:30:54.768577] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:59.542 [2024-07-23 15:30:54.768619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:36:59.542 [2024-07-23 15:30:54.768649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.542 [2024-07-23 15:30:54.768667] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:59.542 [2024-07-23 15:30:54.768682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:36:59.542 15:30:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:59.542 [2024-07-23 15:30:54.768695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.542 [2024-07-23 15:30:54.768711] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:59.542 [2024-07-23 15:30:54.768722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:36:59.542 [2024-07-23 15:30:54.768737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.542 [2024-07-23 15:30:54.768750] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:36:59.542 [2024-07-23 15:30:54.768764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:36:59.542 [2024-07-23 15:30:54.768776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.542 15:30:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:36:59.542 15:30:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:37:00.111 15:30:55 sw_hotplug -- nvme/sw_hotplug.sh@51 
-- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:37:00.111 15:30:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:37:00.111 15:30:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:37:00.111 15:30:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:37:00.111 15:30:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:37:00.111 15:30:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:37:00.111 15:30:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.111 15:30:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:37:00.111 15:30:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:00.111 15:30:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:37:00.111 15:30:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:37:00.111 15:30:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:37:00.111 15:30:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:37:00.111 15:30:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:37:00.111 15:30:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:37:00.111 15:30:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:37:00.111 15:30:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:37:06.682 15:31:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.682 15:31:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:37:06.682 15:31:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:37:06.682 15:31:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.682 15:31:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:37:06.682 15:31:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:37:06.682 15:31:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:37:06.682 [2024-07-23 15:31:01.666808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:37:06.683 [2024-07-23 15:31:01.668561] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:37:06.683 [2024-07-23 15:31:01.668600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:37:06.683 [2024-07-23 15:31:01.668618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:06.683 [2024-07-23 15:31:01.668646] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:37:06.683 [2024-07-23 15:31:01.668659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:37:06.683 [2024-07-23 15:31:01.668675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:06.683 [2024-07-23 15:31:01.668688] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:37:06.683 [2024-07-23 15:31:01.668704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:37:06.683 [2024-07-23 15:31:01.668717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:06.683 [2024-07-23 15:31:01.668733] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:37:06.683 [2024-07-23 15:31:01.668745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:37:06.683 [2024-07-23 15:31:01.668760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:06.942 15:31:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:37:06.942 15:31:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:37:06.942 15:31:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:37:06.942 15:31:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:37:06.942 15:31:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:37:06.942 15:31:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:37:06.942 15:31:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.942 15:31:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:37:06.942 15:31:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.942 15:31:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:37:06.942 15:31:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:37:06.942 15:31:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:37:06.942 15:31:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:37:06.942 15:31:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:37:06.942 15:31:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:37:07.201 15:31:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:37:07.201 15:31:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:37:13.770 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:37:13.770 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:37:13.770 15:31:08 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:37:13.770 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:37:13.770 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:37:13.770 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:37:13.770 15:31:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.770 15:31:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:37:13.770 15:31:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.770 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:37:13.770 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:37:13.770 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:37:13.770 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:37:13.770 [2024-07-23 15:31:08.466881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:37:13.770 [2024-07-23 15:31:08.468984] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:37:13.770 [2024-07-23 15:31:08.469060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.770 [2024-07-23 15:31:08.469125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.770 [2024-07-23 15:31:08.469197] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:37:13.770 [2024-07-23 15:31:08.469277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.770 [2024-07-23 15:31:08.469401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.770 [2024-07-23 15:31:08.469521] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:37:13.770 [2024-07-23 15:31:08.469577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.770 [2024-07-23 15:31:08.469707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.770 [2024-07-23 15:31:08.469766] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:37:13.770 [2024-07-23 15:31:08.469858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.770 [2024-07-23 15:31:08.469966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.770 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:37:13.770 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:37:13.770 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:37:13.771 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:37:13.771 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:37:13.771 15:31:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.771 15:31:08 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:37:13.771 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:37:13.771 15:31:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.771 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:37:13.771 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:37:13.771 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:37:13.771 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:37:13.771 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:37:13.771 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:37:13.771 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:37:13.771 15:31:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:37:20.339 15:31:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:37:20.339 15:31:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:37:20.339 15:31:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:37:20.339 15:31:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:37:20.339 15:31:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:37:20.339 15:31:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:37:20.339 15:31:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:20.339 15:31:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:37:20.340 15:31:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:20.340 15:31:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:37:20.340 15:31:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:37:20.340 15:31:14 sw_hotplug -- common/autotest_common.sh@715 -- # time=26.13 00:37:20.340 15:31:14 sw_hotplug -- common/autotest_common.sh@716 -- # echo 26.13 00:37:20.340 15:31:14 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:37:20.340 15:31:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=26.13 00:37:20.340 15:31:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 26.13 1 00:37:20.340 remove_attach_helper took 26.13s to complete (handling 1 nvme drive(s)) 15:31:14 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:37:20.340 15:31:14 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 131807 00:37:20.340 15:31:14 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 131807 ']' 00:37:20.340 15:31:14 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 131807 00:37:20.340 15:31:14 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:37:20.340 15:31:14 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:20.340 15:31:14 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131807 00:37:20.340 killing process with pid 131807 00:37:20.340 15:31:14 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:20.340 15:31:14 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:20.340 15:31:14 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131807' 00:37:20.340 15:31:14 sw_hotplug -- common/autotest_common.sh@967 -- # kill 131807 00:37:20.340 15:31:14 sw_hotplug -- common/autotest_common.sh@972 -- # wait 131807 00:37:20.340 15:31:15 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 
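The remove/attach loop traced above reduces to a couple of small helpers. A minimal bash sketch of what the xtrace shows — note that xtrace does not capture redirections, so the sysfs targets of the echo commands below are assumptions, and the polling loop is approximated from the trace rather than copied from sw_hotplug.sh:

  # List PCI addresses of NVMe-backed bdevs (sw_hotplug.sh@12-13 in the trace)
  bdev_bdfs() {
      rpc_cmd bdev_get_bdevs \
          | jq -r '.[].driver_specific.nvme[].pci_address' \
          | sort -u
  }

  # After a surprise removal, poll until the controller drops out of bdev_get_bdevs
  wait_until_gone() {
      local bdf=$1 bdfs
      while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
          printf 'Still waiting for %s to be gone\n' "$bdf"
          sleep 0.5
      done
  }

  # Re-attach: the trace echoes 1, uio_pci_generic, the BDF (twice) and an empty
  # string; the redirect targets below are assumed (a typical rescan/driver_override
  # sequence) since they are not visible in the xtrace output.
  reattach() {
      local bdf=$1
      echo 1 > /sys/bus/pci/rescan
      echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
      echo "$bdf" > /sys/bus/pci/drivers_probe
      echo "$bdf" > /sys/bus/pci/drivers/uio_pci_generic/bind
      echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
  }

The 26.13s reported by remove_attach_helper above is the wall time for the full set of these detach/poll/re-attach rounds against 0000:00:10.0.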
00:37:20.340 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:37:20.340 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:37:21.277 00:37:21.277 real 1m28.481s 00:37:21.277 user 1m1.505s 00:37:21.277 sys 0m17.891s 00:37:21.277 15:31:16 sw_hotplug -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:21.277 ************************************ 00:37:21.277 END TEST sw_hotplug 00:37:21.277 ************************************ 00:37:21.277 15:31:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:37:21.277 15:31:16 -- common/autotest_common.sh@1142 -- # return 0 00:37:21.277 15:31:16 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:37:21.277 15:31:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:37:21.277 15:31:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:21.277 15:31:16 -- common/autotest_common.sh@10 -- # set +x 00:37:21.277 15:31:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:21.277 15:31:16 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:21.277 15:31:16 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:21.277 15:31:16 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:21.277 15:31:16 -- spdk/autotest.sh@375 -- # [[ 1 -eq 1 ]] 00:37:21.277 15:31:16 -- spdk/autotest.sh@376 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:37:21.277 15:31:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:21.277 15:31:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:21.277 15:31:16 -- common/autotest_common.sh@10 -- # set +x 00:37:21.277 ************************************ 00:37:21.277 START TEST blockdev_raid5f 00:37:21.277 ************************************ 00:37:21.277 15:31:16 blockdev_raid5f -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:37:21.277 * Looking for test storage... 
00:37:21.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:37:21.277 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=132630 00:37:21.278 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:37:21.278 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:37:21.278 15:31:16 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 132630 00:37:21.278 15:31:16 blockdev_raid5f -- common/autotest_common.sh@829 -- # '[' -z 132630 ']' 00:37:21.278 15:31:16 blockdev_raid5f -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:21.278 15:31:16 blockdev_raid5f -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:21.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:21.278 15:31:16 blockdev_raid5f -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:21.278 15:31:16 blockdev_raid5f -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:21.278 15:31:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:21.537 [2024-07-23 15:31:16.718932] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:37:21.537 [2024-07-23 15:31:16.719127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132630 ] 00:37:21.537 [2024-07-23 15:31:16.876225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:21.537 [2024-07-23 15:31:16.934559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@862 -- # return 0 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:22.498 Malloc0 00:37:22.498 Malloc1 00:37:22.498 Malloc2 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:37:22.498 15:31:17 blockdev_raid5f -- 
bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3552714e-e886-4ca3-86df-870963fd7337"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3552714e-e886-4ca3-86df-870963fd7337",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3552714e-e886-4ca3-86df-870963fd7337",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "700c827c-8335-4b9e-b3ab-8d160774bdb2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "0f54d367-250f-4347-8e7e-8a22249a2d8a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "917bdbff-d452-4a48-b28b-0e2623f76599",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:37:22.498 15:31:17 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 132630 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@948 -- # '[' -z 132630 ']' 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@952 -- # kill -0 132630 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@953 -- # uname 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132630 00:37:22.498 killing process with pid 132630 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132630' 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@967 -- # kill 132630 00:37:22.498 15:31:17 blockdev_raid5f -- common/autotest_common.sh@972 -- # wait 132630 00:37:23.067 15:31:18 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:23.067 15:31:18 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:37:23.067 15:31:18 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:37:23.067 15:31:18 blockdev_raid5f -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:37:23.067 15:31:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:23.067 ************************************ 00:37:23.067 START TEST bdev_hello_world 00:37:23.067 ************************************ 00:37:23.067 15:31:18 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:37:23.068 [2024-07-23 15:31:18.386931] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:37:23.068 [2024-07-23 15:31:18.387116] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132664 ] 00:37:23.327 [2024-07-23 15:31:18.540058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.327 [2024-07-23 15:31:18.588817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.586 [2024-07-23 15:31:18.778461] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:37:23.586 [2024-07-23 15:31:18.778540] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:37:23.586 [2024-07-23 15:31:18.778570] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:37:23.586 [2024-07-23 15:31:18.779002] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:37:23.586 [2024-07-23 15:31:18.779190] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:37:23.586 [2024-07-23 15:31:18.779222] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:37:23.586 [2024-07-23 15:31:18.779331] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
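Stepping back from the xtrace for a moment: the raid5f volume dumped by bdev_get_bdevs a little earlier sits on three malloc base bdevs (Malloc0/1/2, 65536 data blocks of 512 bytes each) with strip_size_kb 2. A rough reconstruction of that configuration with the rpc.py CLI — the malloc size and the bdev_raid_create arguments are inferred from the JSON dump, not shown verbatim in the trace:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # three 32 MiB malloc bdevs (65536 blocks x 512 B, matching data_size in the dump)
  "$rpc_py" bdev_malloc_create -b Malloc0 32 512
  "$rpc_py" bdev_malloc_create -b Malloc1 32 512
  "$rpc_py" bdev_malloc_create -b Malloc2 32 512
  # assemble them into the raid5f volume the remaining tests run against
  "$rpc_py" bdev_raid_create -n raid5f -z 2 -r raid5f -b "Malloc0 Malloc1 Malloc2"

With two data blocks plus one parity block per stripe, three 65536-block members yield the 131072-block (64 MiB) raid5f volume reported in the JSON above.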
00:37:23.586 00:37:23.586 [2024-07-23 15:31:18.779367] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:37:23.844 00:37:23.844 real 0m0.722s 00:37:23.844 user 0m0.393s 00:37:23.844 sys 0m0.219s 00:37:23.844 15:31:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:23.844 15:31:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:37:23.844 ************************************ 00:37:23.844 END TEST bdev_hello_world 00:37:23.844 ************************************ 00:37:23.844 15:31:19 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:37:23.844 15:31:19 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:37:23.844 15:31:19 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:23.844 15:31:19 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:23.844 15:31:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:23.844 ************************************ 00:37:23.844 START TEST bdev_bounds 00:37:23.844 ************************************ 00:37:23.844 15:31:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:37:23.844 15:31:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=132691 00:37:23.844 15:31:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:37:23.844 Process bdevio pid: 132691 00:37:23.844 15:31:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 132691' 00:37:23.844 15:31:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:37:23.844 15:31:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 132691 00:37:23.844 15:31:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 132691 ']' 00:37:23.844 15:31:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:23.844 15:31:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:23.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:23.844 15:31:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:23.844 15:31:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:23.844 15:31:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:37:23.844 [2024-07-23 15:31:19.180494] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:37:23.844 [2024-07-23 15:31:19.180701] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132691 ] 00:37:24.103 [2024-07-23 15:31:19.333471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:24.103 [2024-07-23 15:31:19.380961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.103 [2024-07-23 15:31:19.381010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:24.103 [2024-07-23 15:31:19.381042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.668 15:31:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:24.668 15:31:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:37:24.668 15:31:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:37:24.926 I/O targets: 00:37:24.926 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:37:24.926 00:37:24.926 00:37:24.926 CUnit - A unit testing framework for C - Version 2.1-3 00:37:24.926 http://cunit.sourceforge.net/ 00:37:24.926 00:37:24.926 00:37:24.926 Suite: bdevio tests on: raid5f 00:37:24.926 Test: blockdev write read block ...passed 00:37:24.926 Test: blockdev write zeroes read block ...passed 00:37:24.926 Test: blockdev write zeroes read no split ...passed 00:37:24.926 Test: blockdev write zeroes read split ...passed 00:37:24.926 Test: blockdev write zeroes read split partial ...passed 00:37:24.926 Test: blockdev reset ...passed 00:37:24.926 Test: blockdev write read 8 blocks ...passed 00:37:24.926 Test: blockdev write read size > 128k ...passed 00:37:24.926 Test: blockdev write read invalid size ...passed 00:37:24.926 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:24.926 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:24.926 Test: blockdev write read max offset ...passed 00:37:24.926 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:24.926 Test: blockdev writev readv 8 blocks ...passed 00:37:24.926 Test: blockdev writev readv 30 x 1block ...passed 00:37:24.926 Test: blockdev writev readv block ...passed 00:37:24.926 Test: blockdev writev readv size > 128k ...passed 00:37:24.926 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:24.926 Test: blockdev comparev and writev ...passed 00:37:24.926 Test: blockdev nvme passthru rw ...passed 00:37:24.926 Test: blockdev nvme passthru vendor specific ...passed 00:37:24.927 Test: blockdev nvme admin passthru ...passed 00:37:24.927 Test: blockdev copy ...passed 00:37:24.927 00:37:24.927 Run Summary: Type Total Ran Passed Failed Inactive 00:37:24.927 suites 1 1 n/a 0 0 00:37:24.927 tests 23 23 23 0 0 00:37:24.927 asserts 130 130 130 0 n/a 00:37:24.927 00:37:24.927 Elapsed time = 0.328 seconds 00:37:24.927 0 00:37:24.927 15:31:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 132691 00:37:24.927 15:31:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 132691 ']' 00:37:24.927 15:31:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 132691 00:37:24.927 15:31:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:37:24.927 15:31:20 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:24.927 15:31:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132691 00:37:25.185 15:31:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:25.185 15:31:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:25.185 killing process with pid 132691 00:37:25.185 15:31:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132691' 00:37:25.185 15:31:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@967 -- # kill 132691 00:37:25.185 15:31:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # wait 132691 00:37:25.443 15:31:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:37:25.443 00:37:25.443 real 0m1.516s 00:37:25.443 user 0m3.713s 00:37:25.443 sys 0m0.382s 00:37:25.443 15:31:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:25.443 15:31:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:37:25.443 ************************************ 00:37:25.443 END TEST bdev_bounds 00:37:25.443 ************************************ 00:37:25.443 15:31:20 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:37:25.443 15:31:20 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:37:25.443 15:31:20 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:37:25.443 15:31:20 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:25.443 15:31:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:25.443 ************************************ 00:37:25.443 START TEST bdev_nbd 00:37:25.443 ************************************ 00:37:25.443 15:31:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:37:25.443 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:37:25.443 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:37:25.443 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:25.443 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:37:25.443 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:37:25.443 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:37:25.443 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:37:25.443 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@313 -- # local nbd_list 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=132741 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 132741 /var/tmp/spdk-nbd.sock 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 132741 ']' 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:25.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:25.444 15:31:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:37:25.444 [2024-07-23 15:31:20.760256] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:37:25.444 [2024-07-23 15:31:20.760475] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:25.702 [2024-07-23 15:31:20.910661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.702 [2024-07-23 15:31:20.957680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:37:26.269 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:26.527 1+0 records in 00:37:26.527 1+0 records out 00:37:26.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217146 s, 18.9 MB/s 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:37:26.527 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:26.786 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:37:26.786 { 00:37:26.786 "nbd_device": "/dev/nbd0", 00:37:26.786 "bdev_name": "raid5f" 00:37:26.786 } 00:37:26.786 ]' 00:37:26.786 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:37:26.786 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:37:26.786 15:31:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:37:26.786 { 00:37:26.786 "nbd_device": "/dev/nbd0", 00:37:26.786 "bdev_name": "raid5f" 00:37:26.786 } 00:37:26.786 ]' 00:37:26.786 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:37:26.786 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:26.786 15:31:22 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:26.786 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:26.786 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:37:26.786 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:26.786 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:37:27.044 15:31:22 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:27.044 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:37:27.303 /dev/nbd0 00:37:27.303 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:27.303 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:27.303 15:31:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:37:27.303 15:31:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:37:27.303 15:31:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:37:27.303 15:31:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:37:27.303 15:31:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:37:27.303 15:31:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:37:27.303 15:31:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:37:27.303 15:31:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:37:27.303 15:31:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:27.303 1+0 records in 00:37:27.303 1+0 records out 00:37:27.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264078 s, 15.5 MB/s 00:37:27.303 15:31:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:27.303 15:31:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:37:27.303 15:31:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:37:27.562 { 00:37:27.562 "nbd_device": "/dev/nbd0", 00:37:27.562 
"bdev_name": "raid5f" 00:37:27.562 } 00:37:27.562 ]' 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:37:27.562 { 00:37:27.562 "nbd_device": "/dev/nbd0", 00:37:27.562 "bdev_name": "raid5f" 00:37:27.562 } 00:37:27.562 ]' 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:37:27.562 256+0 records in 00:37:27.562 256+0 records out 00:37:27.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117956 s, 88.9 MB/s 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:37:27.562 256+0 records in 00:37:27.562 256+0 records out 00:37:27.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283388 s, 37.0 MB/s 00:37:27.562 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:37:27.820 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:37:27.820 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:37:27.820 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:37:27.820 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:37:27.820 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:37:27.820 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:37:27.820 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:37:27.820 15:31:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:27.820 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:37:28.079 15:31:23 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:37:28.337 malloc_lvol_verify 00:37:28.337 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:37:28.595 eadff584-2569-4d9f-98ac-85c746184883 00:37:28.595 15:31:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:37:28.853 c4c78220-217a-4e97-9554-197163e42d7b 00:37:28.853 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:37:29.111 /dev/nbd0 00:37:29.111 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:37:29.111 mke2fs 1.47.0 (5-Feb-2023) 00:37:29.112 00:37:29.112 Filesystem too small for a journal 00:37:29.112 Discarding device blocks: 0/1024 done 00:37:29.112 Creating filesystem with 1024 4k blocks and 1024 inodes 00:37:29.112 00:37:29.112 Allocating group tables: 0/1 done 00:37:29.112 Writing inode tables: 0/1 done 00:37:29.112 Writing superblocks and filesystem accounting information: 0/1 done 00:37:29.112 00:37:29.112 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:37:29.112 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:37:29.112 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:29.112 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:29.112 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:29.112 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:37:29.112 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:29.112 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 132741 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 132741 ']' 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 132741 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@953 -- # uname 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132741 00:37:29.370 killing process with pid 132741 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132741' 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@967 -- # kill 132741 00:37:29.370 15:31:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # wait 132741 00:37:29.629 15:31:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:37:29.629 ************************************ 00:37:29.629 END TEST bdev_nbd 00:37:29.629 ************************************ 00:37:29.629 00:37:29.629 real 0m4.261s 00:37:29.629 user 0m6.180s 00:37:29.629 sys 0m1.291s 00:37:29.629 15:31:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:29.629 15:31:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:37:29.629 15:31:24 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:37:29.629 15:31:24 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:37:29.629 15:31:24 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:37:29.629 15:31:24 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:37:29.629 15:31:24 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:37:29.629 15:31:24 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:29.629 15:31:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:29.629 15:31:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:29.629 ************************************ 00:37:29.629 START TEST bdev_fio 00:37:29.629 ************************************ 00:37:29.629 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:37:29.629 
15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:29.629 15:31:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:37:29.888 ************************************ 00:37:29.888 START TEST bdev_fio_rw_verify 00:37:29.888 ************************************ 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:29.888 15:31:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:37:29.888 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:37:29.888 fio-3.35 00:37:29.888 Starting 1 thread 00:37:42.126 00:37:42.126 job_raid5f: (groupid=0, jobs=1): err= 0: pid=132937: Tue Jul 23 15:31:35 2024 00:37:42.126 read: IOPS=11.3k, BW=44.0MiB/s (46.1MB/s)(440MiB/10001msec) 00:37:42.126 slat (nsec): min=19281, max=88789, avg=21261.12, stdev=2577.77 00:37:42.126 clat (usec): min=10, max=403, avg=142.38, stdev=51.04 00:37:42.126 lat (usec): min=31, max=424, avg=163.64, stdev=51.40 00:37:42.126 clat percentiles (usec): 00:37:42.126 | 50.000th=[ 145], 99.000th=[ 241], 99.900th=[ 310], 99.990th=[ 351], 00:37:42.126 | 99.999th=[ 383] 00:37:42.126 write: IOPS=11.8k, BW=46.0MiB/s (48.2MB/s)(455MiB/9882msec); 0 zone resets 00:37:42.126 slat (usec): min=8, max=224, avg=18.09, stdev= 3.61 00:37:42.126 clat (usec): min=61, max=753, avg=323.36, stdev=45.73 00:37:42.126 lat (usec): min=78, max=901, avg=341.44, stdev=46.74 00:37:42.126 clat percentiles (usec): 00:37:42.126 | 50.000th=[ 326], 99.000th=[ 445], 99.900th=[ 619], 99.990th=[ 685], 00:37:42.126 | 99.999th=[ 742] 00:37:42.126 bw ( 
KiB/s): min=42328, max=50712, per=99.10%, avg=46684.21, stdev=2239.20, samples=19 00:37:42.126 iops : min=10582, max=12678, avg=11671.05, stdev=559.80, samples=19 00:37:42.126 lat (usec) : 20=0.01%, 50=0.01%, 100=11.90%, 250=39.80%, 500=48.00% 00:37:42.126 lat (usec) : 750=0.29%, 1000=0.01% 00:37:42.126 cpu : usr=99.43%, sys=0.55%, ctx=61, majf=0, minf=12414 00:37:42.126 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:42.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.126 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.126 issued rwts: total=112652,116378,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:42.126 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:42.126 00:37:42.126 Run status group 0 (all jobs): 00:37:42.126 READ: bw=44.0MiB/s (46.1MB/s), 44.0MiB/s-44.0MiB/s (46.1MB/s-46.1MB/s), io=440MiB (461MB), run=10001-10001msec 00:37:42.126 WRITE: bw=46.0MiB/s (48.2MB/s), 46.0MiB/s-46.0MiB/s (48.2MB/s-48.2MB/s), io=455MiB (477MB), run=9882-9882msec 00:37:42.126 ----------------------------------------------------- 00:37:42.126 Suppressions used: 00:37:42.126 count bytes template 00:37:42.126 1 7 /usr/src/fio/parse.c 00:37:42.126 227 21792 /usr/src/fio/iolog.c 00:37:42.126 1 904 libcrypto.so 00:37:42.126 ----------------------------------------------------- 00:37:42.126 00:37:42.126 00:37:42.126 real 0m11.289s 00:37:42.126 user 0m12.285s 00:37:42.126 sys 0m0.737s 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:37:42.126 ************************************ 00:37:42.126 END TEST bdev_fio_rw_verify 00:37:42.126 ************************************ 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3552714e-e886-4ca3-86df-870963fd7337"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3552714e-e886-4ca3-86df-870963fd7337",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3552714e-e886-4ca3-86df-870963fd7337",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "700c827c-8335-4b9e-b3ab-8d160774bdb2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "0f54d367-250f-4347-8e7e-8a22249a2d8a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "917bdbff-d452-4a48-b28b-0e2623f76599",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:37:42.126 /home/vagrant/spdk_repo/spdk 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:37:42.126 00:37:42.126 real 0m11.434s 00:37:42.126 user 0m12.328s 00:37:42.126 sys 0m0.847s 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:42.126 15:31:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:37:42.126 ************************************ 00:37:42.126 END TEST bdev_fio 00:37:42.126 ************************************ 00:37:42.126 15:31:36 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:37:42.126 15:31:36 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:42.126 15:31:36 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:37:42.126 15:31:36 
blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:37:42.126 15:31:36 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:42.126 15:31:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:42.126 ************************************ 00:37:42.126 START TEST bdev_verify 00:37:42.126 ************************************ 00:37:42.126 15:31:36 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:37:42.126 [2024-07-23 15:31:36.553241] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:37:42.127 [2024-07-23 15:31:36.553405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133085 ] 00:37:42.127 [2024-07-23 15:31:36.691296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:42.127 [2024-07-23 15:31:36.739113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:42.127 [2024-07-23 15:31:36.739143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:42.127 Running I/O for 5 seconds... 00:37:47.392 00:37:47.392 Latency(us) 00:37:47.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.392 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:47.392 Verification LBA range: start 0x0 length 0x2000 00:37:47.392 raid5f : 5.01 7194.69 28.10 0.00 0.00 26559.86 372.54 24466.77 00:37:47.392 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:37:47.392 Verification LBA range: start 0x2000 length 0x2000 00:37:47.392 raid5f : 5.01 7192.67 28.10 0.00 0.00 26821.67 197.97 24466.77 00:37:47.392 =================================================================================================================== 00:37:47.392 Total : 14387.36 56.20 0.00 0.00 26690.77 197.97 24466.77 00:37:47.392 00:37:47.392 real 0m5.722s 00:37:47.392 user 0m10.726s 00:37:47.392 sys 0m0.236s 00:37:47.392 15:31:42 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:47.392 15:31:42 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:37:47.392 ************************************ 00:37:47.392 END TEST bdev_verify 00:37:47.392 ************************************ 00:37:47.392 15:31:42 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:37:47.392 15:31:42 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:37:47.392 15:31:42 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:37:47.392 15:31:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:47.392 15:31:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:47.392 ************************************ 00:37:47.392 START TEST bdev_verify_big_io 00:37:47.392 ************************************ 00:37:47.392 15:31:42 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:37:47.392 [2024-07-23 15:31:42.342798] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:37:47.392 [2024-07-23 15:31:42.343000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133167 ] 00:37:47.392 [2024-07-23 15:31:42.492945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:47.392 [2024-07-23 15:31:42.539319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:47.392 [2024-07-23 15:31:42.539401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:47.392 Running I/O for 5 seconds... 00:37:52.654 00:37:52.654 Latency(us) 00:37:52.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:52.654 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:37:52.654 Verification LBA range: start 0x0 length 0x200 00:37:52.654 raid5f : 5.16 443.17 27.70 0.00 0.00 7203862.40 191.15 305585.01 00:37:52.654 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:37:52.654 Verification LBA range: start 0x200 length 0x200 00:37:52.654 raid5f : 5.16 443.31 27.71 0.00 0.00 7200522.09 219.43 305585.01 00:37:52.654 =================================================================================================================== 00:37:52.654 Total : 886.49 55.41 0.00 0.00 7202192.25 191.15 305585.01 00:37:52.912 00:37:52.912 real 0m5.885s 00:37:52.912 user 0m11.022s 00:37:52.912 sys 0m0.245s 00:37:52.912 15:31:48 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:52.912 15:31:48 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:37:52.912 ************************************ 00:37:52.912 END TEST bdev_verify_big_io 00:37:52.912 ************************************ 00:37:52.912 15:31:48 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:37:52.912 15:31:48 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:52.912 15:31:48 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:37:52.912 15:31:48 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:52.912 15:31:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:52.912 ************************************ 00:37:52.912 START TEST bdev_write_zeroes 00:37:52.912 ************************************ 00:37:52.912 15:31:48 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:52.912 [2024-07-23 15:31:48.288573] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 
00:37:52.912 [2024-07-23 15:31:48.288778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133238 ] 00:37:53.170 [2024-07-23 15:31:48.438996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.170 [2024-07-23 15:31:48.486559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.428 Running I/O for 1 seconds... 00:37:54.362 00:37:54.362 Latency(us) 00:37:54.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:54.362 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:37:54.362 raid5f : 1.01 26384.89 103.07 0.00 0.00 4835.69 1568.18 5929.45 00:37:54.362 =================================================================================================================== 00:37:54.362 Total : 26384.89 103.07 0.00 0.00 4835.69 1568.18 5929.45 00:37:54.620 00:37:54.620 real 0m1.718s 00:37:54.620 user 0m1.387s 00:37:54.620 sys 0m0.223s 00:37:54.620 15:31:49 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:54.620 15:31:49 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:37:54.620 ************************************ 00:37:54.620 END TEST bdev_write_zeroes 00:37:54.620 ************************************ 00:37:54.620 15:31:49 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:37:54.620 15:31:49 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:54.620 15:31:49 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:37:54.620 15:31:49 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:54.620 15:31:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:54.620 ************************************ 00:37:54.620 START TEST bdev_json_nonenclosed 00:37:54.620 ************************************ 00:37:54.620 15:31:50 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:54.878 [2024-07-23 15:31:50.057123] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:37:54.878 [2024-07-23 15:31:50.057267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133279 ] 00:37:54.878 [2024-07-23 15:31:50.199248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.878 [2024-07-23 15:31:50.244941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.878 [2024-07-23 15:31:50.245038] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:37:54.878 [2024-07-23 15:31:50.245072] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:37:54.878 [2024-07-23 15:31:50.245086] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:55.136 00:37:55.136 real 0m0.354s 00:37:55.136 user 0m0.152s 00:37:55.136 sys 0m0.101s 00:37:55.136 15:31:50 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:37:55.136 15:31:50 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:55.136 15:31:50 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:37:55.136 ************************************ 00:37:55.136 END TEST bdev_json_nonenclosed 00:37:55.136 ************************************ 00:37:55.136 15:31:50 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 234 00:37:55.136 15:31:50 blockdev_raid5f -- bdev/blockdev.sh@781 -- # true 00:37:55.136 15:31:50 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:55.136 15:31:50 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:37:55.136 15:31:50 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:55.136 15:31:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:55.136 ************************************ 00:37:55.136 START TEST bdev_json_nonarray 00:37:55.136 ************************************ 00:37:55.136 15:31:50 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:55.136 [2024-07-23 15:31:50.467165] Starting SPDK v24.09-pre git sha1 b8378f94e / DPDK 22.11.4 initialization... 00:37:55.136 [2024-07-23 15:31:50.467337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133306 ] 00:37:55.394 [2024-07-23 15:31:50.607615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.394 [2024-07-23 15:31:50.652366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:55.394 [2024-07-23 15:31:50.652494] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:37:55.394 [2024-07-23 15:31:50.652529] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:37:55.394 [2024-07-23 15:31:50.652549] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:55.394 00:37:55.394 real 0m0.357s 00:37:55.394 user 0m0.146s 00:37:55.394 sys 0m0.111s 00:37:55.394 15:31:50 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:37:55.394 15:31:50 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:55.394 15:31:50 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:37:55.394 ************************************ 00:37:55.394 END TEST bdev_json_nonarray 00:37:55.394 ************************************ 00:37:55.394 15:31:50 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 234 00:37:55.395 15:31:50 blockdev_raid5f -- bdev/blockdev.sh@784 -- # true 00:37:55.395 15:31:50 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:37:55.395 15:31:50 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:37:55.395 15:31:50 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:37:55.395 15:31:50 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:37:55.395 15:31:50 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:37:55.395 15:31:50 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:37:55.395 15:31:50 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:37:55.674 15:31:50 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:37:55.674 15:31:50 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:37:55.674 15:31:50 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:37:55.674 15:31:50 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:37:55.674 00:37:55.674 real 0m34.295s 00:37:55.674 user 0m47.948s 00:37:55.674 sys 0m4.647s 00:37:55.674 15:31:50 blockdev_raid5f -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:55.674 15:31:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:55.674 ************************************ 00:37:55.674 END TEST blockdev_raid5f 00:37:55.674 ************************************ 00:37:55.674 15:31:50 -- common/autotest_common.sh@1142 -- # return 0 00:37:55.674 15:31:50 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:55.674 15:31:50 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:55.674 15:31:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:55.674 15:31:50 -- common/autotest_common.sh@10 -- # set +x 00:37:55.674 15:31:50 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:55.674 15:31:50 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:55.674 15:31:50 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:55.674 15:31:50 -- common/autotest_common.sh@10 -- # set +x 00:37:57.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:37:57.588 Waiting for block devices as requested 00:37:57.588 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:58.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:37:58.154 Cleaning 00:37:58.154 Removing: /var/run/dpdk/spdk0/config 00:37:58.414 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:58.414 
Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:58.414 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:58.414 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:58.414 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:58.414 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:58.414 Removing: /dev/shm/spdk_tgt_trace.pid80502 00:37:58.414 Removing: /var/run/dpdk/spdk0 00:37:58.414 Removing: /var/run/dpdk/spdk_pid100780 00:37:58.414 Removing: /var/run/dpdk/spdk_pid101238 00:37:58.414 Removing: /var/run/dpdk/spdk_pid101416 00:37:58.414 Removing: /var/run/dpdk/spdk_pid103537 00:37:58.414 Removing: /var/run/dpdk/spdk_pid104004 00:37:58.414 Removing: /var/run/dpdk/spdk_pid104174 00:37:58.414 Removing: /var/run/dpdk/spdk_pid106253 00:37:58.414 Removing: /var/run/dpdk/spdk_pid107000 00:37:58.414 Removing: /var/run/dpdk/spdk_pid107172 00:37:58.414 Removing: /var/run/dpdk/spdk_pid107342 00:37:58.414 Removing: /var/run/dpdk/spdk_pid107827 00:37:58.414 Removing: /var/run/dpdk/spdk_pid108668 00:37:58.414 Removing: /var/run/dpdk/spdk_pid109097 00:37:58.414 Removing: /var/run/dpdk/spdk_pid109879 00:37:58.414 Removing: /var/run/dpdk/spdk_pid110367 00:37:58.414 Removing: /var/run/dpdk/spdk_pid111218 00:37:58.414 Removing: /var/run/dpdk/spdk_pid111676 00:37:58.414 Removing: /var/run/dpdk/spdk_pid114199 00:37:58.414 Removing: /var/run/dpdk/spdk_pid114846 00:37:58.414 Removing: /var/run/dpdk/spdk_pid115318 00:37:58.414 Removing: /var/run/dpdk/spdk_pid118054 00:37:58.414 Removing: /var/run/dpdk/spdk_pid118782 00:37:58.414 Removing: /var/run/dpdk/spdk_pid119356 00:37:58.414 Removing: /var/run/dpdk/spdk_pid120571 00:37:58.414 Removing: /var/run/dpdk/spdk_pid121022 00:37:58.414 Removing: /var/run/dpdk/spdk_pid122135 00:37:58.414 Removing: /var/run/dpdk/spdk_pid122589 00:37:58.414 Removing: /var/run/dpdk/spdk_pid123690 00:37:58.414 Removing: /var/run/dpdk/spdk_pid124147 00:37:58.414 Removing: /var/run/dpdk/spdk_pid124891 00:37:58.414 Removing: /var/run/dpdk/spdk_pid124927 00:37:58.414 Removing: /var/run/dpdk/spdk_pid124961 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125004 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125115 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125247 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125449 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125707 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125720 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125752 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125771 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125786 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125805 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125818 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125833 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125852 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125870 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125886 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125905 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125917 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125933 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125952 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125970 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125980 00:37:58.414 Removing: /var/run/dpdk/spdk_pid125999 00:37:58.414 Removing: /var/run/dpdk/spdk_pid126013 00:37:58.414 Removing: /var/run/dpdk/spdk_pid126033 00:37:58.414 Removing: /var/run/dpdk/spdk_pid126068 00:37:58.414 Removing: /var/run/dpdk/spdk_pid126077 00:37:58.414 Removing: /var/run/dpdk/spdk_pid126110 00:37:58.414 Removing: /var/run/dpdk/spdk_pid126170 00:37:58.414 Removing: 
/var/run/dpdk/spdk_pid126197 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126208 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126237 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126248 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126261 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126298 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126311 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126338 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126347 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126361 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126364 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126378 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126381 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126394 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126398 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126430 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126459 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126468 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126499 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126504 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126518 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126560 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126571 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126600 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126607 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126617 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126630 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126634 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126643 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126651 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126660 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126735 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126778 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126881 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126888 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126925 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126965 00:37:58.689 Removing: /var/run/dpdk/spdk_pid126986 00:37:58.689 Removing: /var/run/dpdk/spdk_pid127006 00:37:58.689 Removing: /var/run/dpdk/spdk_pid127023 00:37:58.689 Removing: /var/run/dpdk/spdk_pid127053 00:37:58.689 Removing: /var/run/dpdk/spdk_pid127070 00:37:58.689 Removing: /var/run/dpdk/spdk_pid127138 00:37:58.689 Removing: /var/run/dpdk/spdk_pid127184 00:37:58.689 Removing: /var/run/dpdk/spdk_pid127217 00:37:58.689 Removing: /var/run/dpdk/spdk_pid127446 00:37:58.689 Removing: /var/run/dpdk/spdk_pid127542 00:37:58.689 Removing: /var/run/dpdk/spdk_pid127575 00:37:58.689 Removing: /var/run/dpdk/spdk_pid127652 00:37:58.689 Removing: /var/run/dpdk/spdk_pid127711 00:37:58.689 Removing: /var/run/dpdk/spdk_pid127742 00:37:58.689 Removing: /var/run/dpdk/spdk_pid127950 00:37:58.689 Removing: /var/run/dpdk/spdk_pid128032 00:37:58.689 Removing: /var/run/dpdk/spdk_pid128109 00:37:58.689 Removing: /var/run/dpdk/spdk_pid128150 00:37:58.689 Removing: /var/run/dpdk/spdk_pid128171 00:37:58.689 Removing: /var/run/dpdk/spdk_pid128236 00:37:58.689 Removing: /var/run/dpdk/spdk_pid128618 00:37:58.689 Removing: /var/run/dpdk/spdk_pid128645 00:37:58.689 Removing: /var/run/dpdk/spdk_pid128920 00:37:58.689 Removing: /var/run/dpdk/spdk_pid128992 00:37:58.689 Removing: /var/run/dpdk/spdk_pid129079 00:37:58.689 Removing: /var/run/dpdk/spdk_pid129111 00:37:58.689 Removing: /var/run/dpdk/spdk_pid129137 00:37:58.689 Removing: /var/run/dpdk/spdk_pid129161 00:37:58.689 Removing: /var/run/dpdk/spdk_pid130314 00:37:58.689 Removing: /var/run/dpdk/spdk_pid130426 00:37:58.689 Removing: /var/run/dpdk/spdk_pid130430 00:37:58.689 Removing: 
/var/run/dpdk/spdk_pid130448 00:37:58.689 Removing: /var/run/dpdk/spdk_pid130890 00:37:58.689 Removing: /var/run/dpdk/spdk_pid130967 00:37:58.689 Removing: /var/run/dpdk/spdk_pid131807 00:37:58.689 Removing: /var/run/dpdk/spdk_pid132630 00:37:58.689 Removing: /var/run/dpdk/spdk_pid132664 00:37:58.689 Removing: /var/run/dpdk/spdk_pid132691 00:37:58.689 Removing: /var/run/dpdk/spdk_pid132933 00:37:58.689 Removing: /var/run/dpdk/spdk_pid133085 00:37:58.689 Removing: /var/run/dpdk/spdk_pid133167 00:37:58.689 Removing: /var/run/dpdk/spdk_pid133238 00:37:58.689 Removing: /var/run/dpdk/spdk_pid133279 00:37:58.689 Removing: /var/run/dpdk/spdk_pid133306 00:37:58.689 Removing: /var/run/dpdk/spdk_pid80340 00:37:58.947 Removing: /var/run/dpdk/spdk_pid80502 00:37:58.947 Removing: /var/run/dpdk/spdk_pid80803 00:37:58.947 Removing: /var/run/dpdk/spdk_pid80885 00:37:58.947 Removing: /var/run/dpdk/spdk_pid80914 00:37:58.947 Removing: /var/run/dpdk/spdk_pid81031 00:37:58.947 Removing: /var/run/dpdk/spdk_pid81049 00:37:58.947 Removing: /var/run/dpdk/spdk_pid81170 00:37:58.947 Removing: /var/run/dpdk/spdk_pid81414 00:37:58.947 Removing: /var/run/dpdk/spdk_pid81565 00:37:58.947 Removing: /var/run/dpdk/spdk_pid81638 00:37:58.947 Removing: /var/run/dpdk/spdk_pid81710 00:37:58.947 Removing: /var/run/dpdk/spdk_pid81796 00:37:58.947 Removing: /var/run/dpdk/spdk_pid81874 00:37:58.947 Removing: /var/run/dpdk/spdk_pid81914 00:37:58.947 Removing: /var/run/dpdk/spdk_pid81945 00:37:58.948 Removing: /var/run/dpdk/spdk_pid82006 00:37:58.948 Removing: /var/run/dpdk/spdk_pid82108 00:37:58.948 Removing: /var/run/dpdk/spdk_pid82575 00:37:58.948 Removing: /var/run/dpdk/spdk_pid82622 00:37:58.948 Removing: /var/run/dpdk/spdk_pid82680 00:37:58.948 Removing: /var/run/dpdk/spdk_pid82691 00:37:58.948 Removing: /var/run/dpdk/spdk_pid82765 00:37:58.948 Removing: /var/run/dpdk/spdk_pid82781 00:37:58.948 Removing: /var/run/dpdk/spdk_pid82850 00:37:58.948 Removing: /var/run/dpdk/spdk_pid82866 00:37:58.948 Removing: /var/run/dpdk/spdk_pid82914 00:37:58.948 Removing: /var/run/dpdk/spdk_pid82932 00:37:58.948 Removing: /var/run/dpdk/spdk_pid82976 00:37:58.948 Removing: /var/run/dpdk/spdk_pid82994 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83118 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83155 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83186 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83257 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83316 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83336 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83403 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83433 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83473 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83504 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83540 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83575 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83611 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83641 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83682 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83712 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83751 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83796 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83832 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83867 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83903 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83933 00:37:58.948 Removing: /var/run/dpdk/spdk_pid83974 00:37:58.948 Removing: /var/run/dpdk/spdk_pid84007 00:37:58.948 Removing: /var/run/dpdk/spdk_pid84050 00:37:58.948 Removing: /var/run/dpdk/spdk_pid84081 00:37:58.948 Removing: 
/var/run/dpdk/spdk_pid84118 00:37:58.948 Removing: /var/run/dpdk/spdk_pid84183 00:37:58.948 Removing: /var/run/dpdk/spdk_pid84277 00:37:58.948 Removing: /var/run/dpdk/spdk_pid84420 00:37:58.948 Removing: /var/run/dpdk/spdk_pid84472 00:37:58.948 Removing: /var/run/dpdk/spdk_pid84503 00:37:58.948 Removing: /var/run/dpdk/spdk_pid85631 00:37:58.948 Removing: /var/run/dpdk/spdk_pid85818 00:37:58.948 Removing: /var/run/dpdk/spdk_pid85989 00:37:58.948 Removing: /var/run/dpdk/spdk_pid86071 00:37:58.948 Removing: /var/run/dpdk/spdk_pid86169 00:37:58.948 Removing: /var/run/dpdk/spdk_pid86217 00:37:58.948 Removing: /var/run/dpdk/spdk_pid86242 00:37:58.948 Removing: /var/run/dpdk/spdk_pid86266 00:37:58.948 Removing: /var/run/dpdk/spdk_pid86669 00:37:58.948 Removing: /var/run/dpdk/spdk_pid86741 00:37:58.948 Removing: /var/run/dpdk/spdk_pid86831 00:37:59.205 Removing: /var/run/dpdk/spdk_pid86867 00:37:59.205 Removing: /var/run/dpdk/spdk_pid87984 00:37:59.205 Removing: /var/run/dpdk/spdk_pid88295 00:37:59.205 Removing: /var/run/dpdk/spdk_pid88453 00:37:59.205 Removing: /var/run/dpdk/spdk_pid89253 00:37:59.205 Removing: /var/run/dpdk/spdk_pid89570 00:37:59.205 Removing: /var/run/dpdk/spdk_pid89730 00:37:59.205 Removing: /var/run/dpdk/spdk_pid90539 00:37:59.205 Removing: /var/run/dpdk/spdk_pid90998 00:37:59.205 Removing: /var/run/dpdk/spdk_pid91158 00:37:59.205 Removing: /var/run/dpdk/spdk_pid93033 00:37:59.205 Removing: /var/run/dpdk/spdk_pid93447 00:37:59.205 Removing: /var/run/dpdk/spdk_pid93610 00:37:59.205 Removing: /var/run/dpdk/spdk_pid95461 00:37:59.205 Removing: /var/run/dpdk/spdk_pid95878 00:37:59.205 Removing: /var/run/dpdk/spdk_pid96047 00:37:59.205 Removing: /var/run/dpdk/spdk_pid97898 00:37:59.205 Removing: /var/run/dpdk/spdk_pid98548 00:37:59.205 Removing: /var/run/dpdk/spdk_pid98710 00:37:59.205 Clean 00:37:59.205 15:31:54 -- common/autotest_common.sh@1451 -- # return 0 00:37:59.205 15:31:54 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:59.205 15:31:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:59.205 15:31:54 -- common/autotest_common.sh@10 -- # set +x 00:37:59.205 15:31:54 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:59.205 15:31:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:59.205 15:31:54 -- common/autotest_common.sh@10 -- # set +x 00:37:59.205 15:31:54 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:59.463 15:31:54 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:37:59.463 15:31:54 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:37:59.463 15:31:54 -- spdk/autotest.sh@391 -- # hash lcov 00:37:59.463 15:31:54 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:59.463 15:31:54 -- spdk/autotest.sh@393 -- # hostname 00:37:59.463 15:31:54 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:37:59.463 geninfo: WARNING: invalid characters removed from testname! 
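Condensed for readability, the coverage post-processing recorded in this stretch of the log amounts to the sketch below (flags, paths, and filter patterns copied from the logged lcov calls; the per-pattern filter commands are folded into a loop here, and the working directory for the final rm is an assumption):

LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q'
OUT=/home/vagrant/spdk_repo/spdk/../output

# 1. capture the coverage gathered while the tests ran; the test name is the host name
lcov $LCOV_OPTS -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$OUT/cov_test.info"

# 2. merge it with the baseline captured before the tests started
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# 3. strip coverage for code that is not SPDK's own (DPDK, system headers, helper apps)
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done

# 4. intermediate files are dropped once cov_total.info exists (assumed to run from the output dir)
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR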
00:38:55.680 15:32:49 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:58.972 15:32:54 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:02.255 15:32:57 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:05.541 15:33:00 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:08.115 15:33:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:11.413 15:33:06 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:13.938 15:33:08 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:13.938 15:33:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:13.938 15:33:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:39:13.938 15:33:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:13.938 15:33:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:13.938 15:33:08 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:13.938 15:33:08 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:13.938 15:33:08 -- 
paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:13.938 15:33:08 -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:13.938 15:33:08 -- paths/export.sh@6 -- $ export PATH 00:39:13.938 15:33:08 -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:13.938 15:33:08 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:39:13.938 15:33:08 -- common/autobuild_common.sh@447 -- $ date +%s 00:39:13.938 15:33:08 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721748788.XXXXXX 00:39:13.938 15:33:08 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721748788.4ax9Ye 00:39:13.938 15:33:08 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:39:13.938 15:33:08 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:39:13.938 15:33:08 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:39:13.938 15:33:08 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:39:13.938 15:33:08 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:39:13.938 15:33:08 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:39:13.938 15:33:08 -- common/autobuild_common.sh@463 -- $ get_config_params 00:39:13.938 15:33:08 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:39:13.938 15:33:08 -- common/autotest_common.sh@10 -- $ set +x 00:39:13.938 15:33:08 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:39:13.938 15:33:08 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:39:13.938 15:33:08 -- pm/common@17 -- $ local monitor 00:39:13.938 15:33:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:13.938 15:33:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:13.938 15:33:08 -- pm/common@25 -- $ sleep 1 00:39:13.938 15:33:08 -- pm/common@21 -- $ date +%s 00:39:13.938 15:33:08 -- pm/common@21 -- $ date +%s 00:39:13.938 15:33:08 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721748788 00:39:13.938 15:33:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721748788 00:39:13.938 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721748788_collect-vmstat.pm.log 00:39:13.938 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721748788_collect-cpu-load.pm.log 00:39:14.507 15:33:09 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:39:14.507 15:33:09 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:39:14.507 15:33:09 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:39:14.507 15:33:09 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:39:14.507 15:33:09 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:39:14.507 15:33:09 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:39:14.507 15:33:09 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:39:14.507 15:33:09 -- common/autotest_common.sh@722 -- $ xtrace_disable 00:39:14.507 15:33:09 -- common/autotest_common.sh@10 -- $ set +x 00:39:14.507 15:33:09 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:39:14.507 15:33:09 -- spdk/autopackage.sh@36 -- $ [[ -n v22.11.4 ]] 00:39:14.507 15:33:09 -- spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]] 00:39:14.507 15:33:09 -- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path 00:39:14.507 15:33:09 -- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:14.507 15:33:09 -- tmp/spdk-ld-path@1 -- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:14.507 15:33:09 -- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH= 00:39:14.507 15:33:09 -- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH= 00:39:14.507 15:33:09 -- spdk/autopackage.sh@40 -- $ get_config_params 00:39:14.507 15:33:09 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:39:14.507 15:33:09 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:39:14.507 15:33:09 -- common/autotest_common.sh@10 -- $ set +x 00:39:14.767 15:33:09 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:39:14.767 15:33:09 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --enable-lto --disable-unit-tests 00:39:14.767 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:39:14.767 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:39:14.767 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:39:14.767 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:39:15.026 Using 'verbs' RDMA provider 00:39:28.163 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 
00:39:40.378 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:39:40.378 Creating mk/config.mk...done. 00:39:40.378 Creating mk/cc.flags.mk...done. 00:39:40.378 Type 'make' to build. 00:39:40.378 15:33:35 -- spdk/autopackage.sh@43 -- $ make -j10 00:39:40.945 make[1]: Nothing to be done for 'all'. 00:39:41.204 CC lib/ut_mock/mock.o 00:39:41.204 CC lib/ut/ut.o 00:39:41.204 CC lib/log/log.o 00:39:41.204 CC lib/log/log_flags.o 00:39:41.204 CC lib/log/log_deprecated.o 00:39:41.462 LIB libspdk_ut_mock.a 00:39:41.462 LIB libspdk_ut.a 00:39:41.462 LIB libspdk_log.a 00:39:41.720 CC lib/ioat/ioat.o 00:39:41.720 CXX lib/trace_parser/trace.o 00:39:41.720 CC lib/dma/dma.o 00:39:41.720 CC lib/util/base64.o 00:39:41.720 CC lib/util/bit_array.o 00:39:41.720 CC lib/util/cpuset.o 00:39:41.720 CC lib/util/crc16.o 00:39:41.720 CC lib/util/crc32.o 00:39:41.720 CC lib/util/crc32c.o 00:39:41.720 CC lib/vfio_user/host/vfio_user_pci.o 00:39:41.720 CC lib/util/crc32_ieee.o 00:39:41.720 CC lib/util/crc64.o 00:39:41.979 CC lib/util/dif.o 00:39:41.979 CC lib/util/fd.o 00:39:41.979 LIB libspdk_dma.a 00:39:41.979 CC lib/vfio_user/host/vfio_user.o 00:39:41.979 CC lib/util/fd_group.o 00:39:41.979 CC lib/util/file.o 00:39:41.979 LIB libspdk_ioat.a 00:39:41.979 CC lib/util/hexlify.o 00:39:41.979 CC lib/util/iov.o 00:39:41.979 CC lib/util/math.o 00:39:41.979 CC lib/util/net.o 00:39:41.979 CC lib/util/pipe.o 00:39:41.979 CC lib/util/strerror_tls.o 00:39:41.979 LIB libspdk_vfio_user.a 00:39:41.979 CC lib/util/string.o 00:39:42.236 CC lib/util/uuid.o 00:39:42.236 CC lib/util/xor.o 00:39:42.236 CC lib/util/zipf.o 00:39:42.236 LIB libspdk_util.a 00:39:42.236 LIB libspdk_trace_parser.a 00:39:42.495 CC lib/json/json_util.o 00:39:42.495 CC lib/json/json_parse.o 00:39:42.495 CC lib/idxd/idxd.o 00:39:42.495 CC lib/idxd/idxd_user.o 00:39:42.495 CC lib/idxd/idxd_kernel.o 00:39:42.495 CC lib/rdma_utils/rdma_utils.o 00:39:42.495 CC lib/conf/conf.o 00:39:42.495 CC lib/rdma_provider/common.o 00:39:42.495 CC lib/vmd/vmd.o 00:39:42.495 CC lib/env_dpdk/env.o 00:39:42.495 CC lib/env_dpdk/memory.o 00:39:42.754 CC lib/rdma_provider/rdma_provider_verbs.o 00:39:42.754 CC lib/json/json_write.o 00:39:42.754 CC lib/env_dpdk/pci.o 00:39:42.754 CC lib/vmd/led.o 00:39:42.754 LIB libspdk_conf.a 00:39:42.754 CC lib/env_dpdk/init.o 00:39:42.754 LIB libspdk_rdma_utils.a 00:39:42.754 CC lib/env_dpdk/threads.o 00:39:42.754 CC lib/env_dpdk/pci_ioat.o 00:39:42.754 LIB libspdk_rdma_provider.a 00:39:42.754 LIB libspdk_idxd.a 00:39:42.754 CC lib/env_dpdk/pci_virtio.o 00:39:42.754 CC lib/env_dpdk/pci_vmd.o 00:39:42.754 CC lib/env_dpdk/pci_idxd.o 00:39:42.754 LIB libspdk_vmd.a 00:39:42.754 LIB libspdk_json.a 00:39:42.754 CC lib/env_dpdk/pci_event.o 00:39:42.754 CC lib/env_dpdk/sigbus_handler.o 00:39:43.012 CC lib/env_dpdk/pci_dpdk.o 00:39:43.012 CC lib/env_dpdk/pci_dpdk_2207.o 00:39:43.012 CC lib/env_dpdk/pci_dpdk_2211.o 00:39:43.012 CC lib/jsonrpc/jsonrpc_server.o 00:39:43.012 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:39:43.012 CC lib/jsonrpc/jsonrpc_client.o 00:39:43.012 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:39:43.270 LIB libspdk_jsonrpc.a 00:39:43.838 CC lib/rpc/rpc.o 00:39:43.838 LIB libspdk_env_dpdk.a 00:39:43.838 LIB libspdk_rpc.a 00:39:44.096 CC lib/keyring/keyring.o 00:39:44.096 CC lib/keyring/keyring_rpc.o 00:39:44.096 CC lib/trace/trace_flags.o 00:39:44.096 CC lib/trace/trace.o 00:39:44.096 CC lib/trace/trace_rpc.o 00:39:44.096 CC lib/notify/notify.o 00:39:44.096 CC lib/notify/notify_rpc.o 00:39:44.355 LIB 
libspdk_keyring.a 00:39:44.355 LIB libspdk_notify.a 00:39:44.355 LIB libspdk_trace.a 00:39:44.613 CC lib/thread/thread.o 00:39:44.613 CC lib/thread/iobuf.o 00:39:44.614 CC lib/sock/sock.o 00:39:44.614 CC lib/sock/sock_rpc.o 00:39:44.871 LIB libspdk_sock.a 00:39:45.153 CC lib/nvme/nvme_ctrlr.o 00:39:45.153 CC lib/nvme/nvme_ctrlr_cmd.o 00:39:45.153 CC lib/nvme/nvme_ns.o 00:39:45.153 CC lib/nvme/nvme_ns_cmd.o 00:39:45.153 CC lib/nvme/nvme_qpair.o 00:39:45.153 CC lib/nvme/nvme.o 00:39:45.153 CC lib/nvme/nvme_fabric.o 00:39:45.153 CC lib/nvme/nvme_pcie.o 00:39:45.153 CC lib/nvme/nvme_pcie_common.o 00:39:45.425 LIB libspdk_thread.a 00:39:45.425 CC lib/nvme/nvme_quirks.o 00:39:45.991 CC lib/nvme/nvme_transport.o 00:39:45.991 CC lib/nvme/nvme_discovery.o 00:39:45.991 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:39:45.991 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:39:45.991 CC lib/nvme/nvme_tcp.o 00:39:45.991 CC lib/nvme/nvme_opal.o 00:39:45.991 CC lib/nvme/nvme_io_msg.o 00:39:46.250 CC lib/nvme/nvme_poll_group.o 00:39:46.250 CC lib/nvme/nvme_zns.o 00:39:46.508 CC lib/nvme/nvme_stubs.o 00:39:46.508 CC lib/nvme/nvme_auth.o 00:39:46.508 CC lib/nvme/nvme_cuse.o 00:39:46.508 CC lib/nvme/nvme_rdma.o 00:39:46.767 CC lib/accel/accel.o 00:39:46.767 CC lib/blob/blobstore.o 00:39:46.767 CC lib/init/json_config.o 00:39:46.767 CC lib/blob/request.o 00:39:46.767 CC lib/virtio/virtio.o 00:39:47.025 CC lib/virtio/virtio_vhost_user.o 00:39:47.025 CC lib/init/subsystem.o 00:39:47.025 CC lib/blob/zeroes.o 00:39:47.025 CC lib/blob/blob_bs_dev.o 00:39:47.025 CC lib/virtio/virtio_vfio_user.o 00:39:47.283 CC lib/init/subsystem_rpc.o 00:39:47.283 CC lib/init/rpc.o 00:39:47.283 CC lib/virtio/virtio_pci.o 00:39:47.283 CC lib/accel/accel_rpc.o 00:39:47.283 CC lib/accel/accel_sw.o 00:39:47.283 LIB libspdk_init.a 00:39:47.541 LIB libspdk_virtio.a 00:39:47.541 LIB libspdk_accel.a 00:39:47.541 CC lib/event/app.o 00:39:47.541 CC lib/event/reactor.o 00:39:47.541 CC lib/event/log_rpc.o 00:39:47.541 CC lib/event/scheduler_static.o 00:39:47.541 CC lib/event/app_rpc.o 00:39:47.541 LIB libspdk_nvme.a 00:39:47.800 CC lib/bdev/bdev.o 00:39:47.800 CC lib/bdev/part.o 00:39:47.800 CC lib/bdev/scsi_nvme.o 00:39:47.800 CC lib/bdev/bdev_zone.o 00:39:47.800 CC lib/bdev/bdev_rpc.o 00:39:47.800 LIB libspdk_event.a 00:39:48.369 LIB libspdk_blob.a 00:39:48.627 CC lib/blobfs/blobfs.o 00:39:48.627 CC lib/blobfs/tree.o 00:39:48.627 CC lib/lvol/lvol.o 00:39:49.194 LIB libspdk_bdev.a 00:39:49.194 LIB libspdk_blobfs.a 00:39:49.194 CC lib/scsi/dev.o 00:39:49.194 CC lib/scsi/lun.o 00:39:49.194 CC lib/scsi/port.o 00:39:49.194 CC lib/scsi/scsi.o 00:39:49.452 CC lib/scsi/scsi_bdev.o 00:39:49.452 CC lib/nbd/nbd.o 00:39:49.452 LIB libspdk_lvol.a 00:39:49.452 CC lib/nvmf/ctrlr.o 00:39:49.452 CC lib/ublk/ublk.o 00:39:49.452 CC lib/ftl/ftl_core.o 00:39:49.452 CC lib/nvmf/ctrlr_discovery.o 00:39:49.452 CC lib/nvmf/ctrlr_bdev.o 00:39:49.452 CC lib/scsi/scsi_pr.o 00:39:49.452 CC lib/scsi/scsi_rpc.o 00:39:49.452 CC lib/scsi/task.o 00:39:49.711 CC lib/nbd/nbd_rpc.o 00:39:49.711 CC lib/nvmf/subsystem.o 00:39:49.711 CC lib/nvmf/nvmf.o 00:39:49.711 CC lib/ftl/ftl_init.o 00:39:49.711 CC lib/ftl/ftl_layout.o 00:39:49.711 CC lib/ftl/ftl_debug.o 00:39:49.711 LIB libspdk_scsi.a 00:39:49.711 CC lib/nvmf/nvmf_rpc.o 00:39:49.711 LIB libspdk_nbd.a 00:39:49.711 CC lib/nvmf/transport.o 00:39:49.711 CC lib/ublk/ublk_rpc.o 00:39:49.970 CC lib/ftl/ftl_io.o 00:39:49.970 CC lib/ftl/ftl_sb.o 00:39:49.970 CC lib/iscsi/conn.o 00:39:49.970 LIB libspdk_ublk.a 00:39:49.970 CC lib/vhost/vhost.o 00:39:49.970 
CC lib/ftl/ftl_l2p.o 00:39:49.970 CC lib/ftl/ftl_l2p_flat.o 00:39:49.970 CC lib/ftl/ftl_nv_cache.o 00:39:49.970 CC lib/ftl/ftl_band.o 00:39:49.970 CC lib/iscsi/init_grp.o 00:39:50.228 CC lib/iscsi/iscsi.o 00:39:50.228 CC lib/nvmf/tcp.o 00:39:50.228 CC lib/nvmf/stubs.o 00:39:50.228 CC lib/ftl/ftl_band_ops.o 00:39:50.228 CC lib/iscsi/md5.o 00:39:50.228 CC lib/iscsi/param.o 00:39:50.228 CC lib/iscsi/portal_grp.o 00:39:50.228 CC lib/ftl/ftl_writer.o 00:39:50.228 CC lib/ftl/ftl_rq.o 00:39:50.487 CC lib/iscsi/tgt_node.o 00:39:50.487 CC lib/iscsi/iscsi_subsystem.o 00:39:50.487 CC lib/ftl/ftl_reloc.o 00:39:50.487 CC lib/ftl/ftl_l2p_cache.o 00:39:50.487 CC lib/ftl/ftl_p2l.o 00:39:50.487 CC lib/vhost/vhost_rpc.o 00:39:50.746 CC lib/vhost/vhost_scsi.o 00:39:50.746 CC lib/ftl/mngt/ftl_mngt.o 00:39:50.746 CC lib/iscsi/iscsi_rpc.o 00:39:50.746 CC lib/iscsi/task.o 00:39:50.746 CC lib/vhost/vhost_blk.o 00:39:50.746 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:39:50.746 CC lib/vhost/rte_vhost_user.o 00:39:50.746 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:39:50.746 CC lib/ftl/mngt/ftl_mngt_startup.o 00:39:50.746 CC lib/nvmf/mdns_server.o 00:39:51.005 CC lib/ftl/mngt/ftl_mngt_md.o 00:39:51.005 CC lib/nvmf/rdma.o 00:39:51.005 LIB libspdk_iscsi.a 00:39:51.005 CC lib/ftl/mngt/ftl_mngt_misc.o 00:39:51.005 CC lib/nvmf/auth.o 00:39:51.005 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:39:51.005 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:39:51.005 CC lib/ftl/mngt/ftl_mngt_band.o 00:39:51.263 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:39:51.263 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:39:51.263 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:39:51.263 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:39:51.263 CC lib/ftl/utils/ftl_conf.o 00:39:51.263 CC lib/ftl/utils/ftl_md.o 00:39:51.263 CC lib/ftl/utils/ftl_mempool.o 00:39:51.263 CC lib/ftl/utils/ftl_bitmap.o 00:39:51.263 CC lib/ftl/utils/ftl_property.o 00:39:51.522 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:39:51.522 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:39:51.522 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:39:51.522 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:39:51.522 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:39:51.522 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:39:51.522 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:39:51.522 CC lib/ftl/upgrade/ftl_sb_v3.o 00:39:51.522 CC lib/ftl/upgrade/ftl_sb_v5.o 00:39:51.781 CC lib/ftl/nvc/ftl_nvc_dev.o 00:39:51.781 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:39:51.781 CC lib/ftl/base/ftl_base_dev.o 00:39:51.781 CC lib/ftl/base/ftl_base_bdev.o 00:39:51.781 LIB libspdk_vhost.a 00:39:51.781 LIB libspdk_ftl.a 00:39:51.781 LIB libspdk_nvmf.a 00:39:52.350 CC module/env_dpdk/env_dpdk_rpc.o 00:39:52.350 CC module/accel/error/accel_error.o 00:39:52.350 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:39:52.350 CC module/scheduler/dynamic/scheduler_dynamic.o 00:39:52.350 CC module/accel/ioat/accel_ioat.o 00:39:52.350 CC module/accel/dsa/accel_dsa.o 00:39:52.350 CC module/scheduler/gscheduler/gscheduler.o 00:39:52.350 CC module/sock/posix/posix.o 00:39:52.350 CC module/blob/bdev/blob_bdev.o 00:39:52.350 CC module/keyring/file/keyring.o 00:39:52.350 LIB libspdk_env_dpdk_rpc.a 00:39:52.350 CC module/keyring/file/keyring_rpc.o 00:39:52.350 LIB libspdk_scheduler_dpdk_governor.a 00:39:52.350 LIB libspdk_scheduler_gscheduler.a 00:39:52.350 CC module/accel/error/accel_error_rpc.o 00:39:52.350 CC module/accel/dsa/accel_dsa_rpc.o 00:39:52.350 LIB libspdk_scheduler_dynamic.a 00:39:52.350 CC module/accel/ioat/accel_ioat_rpc.o 00:39:52.609 LIB libspdk_keyring_file.a 00:39:52.609 LIB libspdk_blob_bdev.a 00:39:52.609 LIB 
libspdk_accel_dsa.a 00:39:52.609 LIB libspdk_accel_ioat.a 00:39:52.609 LIB libspdk_accel_error.a 00:39:52.609 CC module/accel/iaa/accel_iaa.o 00:39:52.609 CC module/accel/iaa/accel_iaa_rpc.o 00:39:52.609 CC module/keyring/linux/keyring.o 00:39:52.609 CC module/keyring/linux/keyring_rpc.o 00:39:52.609 LIB libspdk_keyring_linux.a 00:39:52.609 CC module/bdev/gpt/gpt.o 00:39:52.609 CC module/bdev/error/vbdev_error.o 00:39:52.609 CC module/bdev/lvol/vbdev_lvol.o 00:39:52.609 CC module/blobfs/bdev/blobfs_bdev.o 00:39:52.609 CC module/bdev/delay/vbdev_delay.o 00:39:52.609 CC module/bdev/error/vbdev_error_rpc.o 00:39:52.609 LIB libspdk_accel_iaa.a 00:39:52.868 LIB libspdk_sock_posix.a 00:39:52.868 CC module/bdev/malloc/bdev_malloc.o 00:39:52.868 CC module/bdev/null/bdev_null.o 00:39:52.868 CC module/bdev/malloc/bdev_malloc_rpc.o 00:39:52.868 CC module/bdev/gpt/vbdev_gpt.o 00:39:52.868 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:39:52.868 CC module/bdev/nvme/bdev_nvme.o 00:39:52.868 CC module/bdev/passthru/vbdev_passthru.o 00:39:52.868 LIB libspdk_bdev_error.a 00:39:52.868 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:39:52.868 CC module/bdev/delay/vbdev_delay_rpc.o 00:39:53.127 CC module/bdev/nvme/bdev_nvme_rpc.o 00:39:53.127 LIB libspdk_blobfs_bdev.a 00:39:53.127 CC module/bdev/null/bdev_null_rpc.o 00:39:53.127 CC module/bdev/nvme/nvme_rpc.o 00:39:53.127 CC module/bdev/nvme/bdev_mdns_client.o 00:39:53.128 LIB libspdk_bdev_gpt.a 00:39:53.128 LIB libspdk_bdev_malloc.a 00:39:53.128 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:39:53.128 LIB libspdk_bdev_delay.a 00:39:53.128 LIB libspdk_bdev_null.a 00:39:53.128 LIB libspdk_bdev_lvol.a 00:39:53.128 CC module/bdev/nvme/vbdev_opal.o 00:39:53.128 CC module/bdev/nvme/vbdev_opal_rpc.o 00:39:53.128 CC module/bdev/raid/bdev_raid.o 00:39:53.387 CC module/bdev/split/vbdev_split.o 00:39:53.387 LIB libspdk_bdev_passthru.a 00:39:53.387 CC module/bdev/zone_block/vbdev_zone_block.o 00:39:53.387 CC module/bdev/aio/bdev_aio.o 00:39:53.387 CC module/bdev/ftl/bdev_ftl.o 00:39:53.387 CC module/bdev/ftl/bdev_ftl_rpc.o 00:39:53.387 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:39:53.387 CC module/bdev/split/vbdev_split_rpc.o 00:39:53.387 CC module/bdev/iscsi/bdev_iscsi.o 00:39:53.387 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:39:53.645 CC module/bdev/raid/bdev_raid_rpc.o 00:39:53.645 LIB libspdk_bdev_zone_block.a 00:39:53.645 LIB libspdk_bdev_split.a 00:39:53.645 CC module/bdev/raid/bdev_raid_sb.o 00:39:53.645 CC module/bdev/raid/raid0.o 00:39:53.645 CC module/bdev/raid/raid1.o 00:39:53.645 CC module/bdev/raid/concat.o 00:39:53.645 LIB libspdk_bdev_ftl.a 00:39:53.645 CC module/bdev/aio/bdev_aio_rpc.o 00:39:53.645 CC module/bdev/raid/raid5f.o 00:39:53.645 LIB libspdk_bdev_iscsi.a 00:39:53.645 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:39:53.645 LIB libspdk_bdev_aio.a 00:39:53.904 CC module/bdev/virtio/bdev_virtio_scsi.o 00:39:53.904 CC module/bdev/virtio/bdev_virtio_blk.o 00:39:53.904 CC module/bdev/virtio/bdev_virtio_rpc.o 00:39:53.904 LIB libspdk_bdev_raid.a 00:39:54.162 LIB libspdk_bdev_nvme.a 00:39:54.162 LIB libspdk_bdev_virtio.a 00:39:54.730 CC module/event/subsystems/iobuf/iobuf.o 00:39:54.730 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:39:54.730 CC module/event/subsystems/vmd/vmd.o 00:39:54.730 CC module/event/subsystems/vmd/vmd_rpc.o 00:39:54.730 CC module/event/subsystems/keyring/keyring.o 00:39:54.730 CC module/event/subsystems/sock/sock.o 00:39:54.730 CC module/event/subsystems/scheduler/scheduler.o 00:39:54.730 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:39:54.730 LIB libspdk_event_keyring.a 00:39:54.730 LIB libspdk_event_vmd.a 00:39:54.730 LIB libspdk_event_scheduler.a 00:39:54.730 LIB libspdk_event_iobuf.a 00:39:54.730 LIB libspdk_event_sock.a 00:39:54.730 LIB libspdk_event_vhost_blk.a 00:39:54.988 CC module/event/subsystems/accel/accel.o 00:39:54.988 LIB libspdk_event_accel.a 00:39:55.578 CC module/event/subsystems/bdev/bdev.o 00:39:55.578 LIB libspdk_event_bdev.a 00:39:55.848 CC module/event/subsystems/ublk/ublk.o 00:39:55.848 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:39:55.848 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:39:55.848 CC module/event/subsystems/scsi/scsi.o 00:39:55.848 CC module/event/subsystems/nbd/nbd.o 00:39:55.848 LIB libspdk_event_nbd.a 00:39:55.848 LIB libspdk_event_ublk.a 00:39:56.106 LIB libspdk_event_scsi.a 00:39:56.106 LIB libspdk_event_nvmf.a 00:39:56.363 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:39:56.363 CC module/event/subsystems/iscsi/iscsi.o 00:39:56.363 LIB libspdk_event_vhost_scsi.a 00:39:56.363 LIB libspdk_event_iscsi.a 00:39:56.622 CXX app/trace/trace.o 00:39:56.622 CC app/trace_record/trace_record.o 00:39:56.622 CC app/spdk_nvme_identify/identify.o 00:39:56.622 CC app/spdk_lspci/spdk_lspci.o 00:39:56.622 CC app/spdk_nvme_perf/perf.o 00:39:56.622 CC app/nvmf_tgt/nvmf_main.o 00:39:56.879 CC app/iscsi_tgt/iscsi_tgt.o 00:39:56.879 CC app/spdk_tgt/spdk_tgt.o 00:39:56.879 CC examples/util/zipf/zipf.o 00:39:56.879 CC test/thread/poller_perf/poller_perf.o 00:39:56.879 LINK spdk_lspci 00:39:56.879 LINK spdk_trace_record 00:39:56.879 LINK nvmf_tgt 00:39:57.137 LINK zipf 00:39:57.137 LINK poller_perf 00:39:57.137 LINK iscsi_tgt 00:39:57.137 LINK spdk_tgt 00:39:57.137 LINK spdk_trace 00:39:57.394 LINK spdk_nvme_identify 00:39:57.394 LINK spdk_nvme_perf 00:39:58.768 CC examples/ioat/perf/perf.o 00:39:59.333 LINK ioat_perf 00:40:00.707 CC examples/ioat/verify/verify.o 00:40:01.273 LINK verify 00:40:05.461 CC examples/vmd/lsvmd/lsvmd.o 00:40:05.461 CC test/thread/lock/spdk_lock.o 00:40:05.461 LINK lsvmd 00:40:08.749 CC examples/idxd/perf/perf.o 00:40:09.688 LINK idxd_perf 00:40:10.301 LINK spdk_lock 00:40:11.680 CC examples/interrupt_tgt/interrupt_tgt.o 00:40:13.060 LINK interrupt_tgt 00:40:19.631 CC app/spdk_nvme_discover/discovery_aer.o 00:40:20.199 LINK spdk_nvme_discover 00:40:25.473 CC app/spdk_top/spdk_top.o 00:40:25.473 CC examples/thread/thread/thread_ex.o 00:40:25.473 CC examples/sock/hello_world/hello_sock.o 00:40:26.846 LINK thread 00:40:26.846 LINK hello_sock 00:40:28.265 LINK spdk_top 00:40:29.644 CC test/dma/test_dma/test_dma.o 00:40:31.551 CC examples/vmd/led/led.o 00:40:31.551 LINK test_dma 00:40:32.489 LINK led 00:40:50.581 CC app/vhost/vhost.o 00:40:50.581 LINK vhost 00:40:50.581 CC test/app/bdev_svc/bdev_svc.o 00:40:50.581 LINK bdev_svc 00:40:53.117 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:40:55.658 LINK nvme_fuzz 00:41:05.645 CC examples/nvme/hello_world/hello_world.o 00:41:06.213 LINK hello_world 00:41:28.148 CC examples/nvme/reconnect/reconnect.o 00:41:30.072 LINK reconnect 00:41:48.154 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:41:50.688 CC examples/nvme/nvme_manage/nvme_manage.o 00:41:54.008 LINK nvme_manage 00:41:54.576 LINK iscsi_fuzz 00:42:16.515 CC examples/nvme/arbitration/arbitration.o 00:42:16.515 LINK arbitration 00:42:21.784 CC app/spdk_dd/spdk_dd.o 00:42:23.686 LINK spdk_dd 00:42:31.807 TEST_HEADER include/spdk/config.h 00:42:31.807 CXX test/cpp_headers/accel.o 00:42:32.745 CXX test/cpp_headers/accel_module.o 
00:42:34.125 CXX test/cpp_headers/assert.o 00:42:35.063 CXX test/cpp_headers/barrier.o 00:42:35.632 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:42:36.201 CXX test/cpp_headers/base64.o 00:42:36.201 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:42:37.138 CXX test/cpp_headers/bdev.o 00:42:38.518 LINK vhost_fuzz 00:42:38.518 CXX test/cpp_headers/bdev_module.o 00:42:39.085 CC examples/nvme/hotplug/hotplug.o 00:42:39.653 CXX test/cpp_headers/bdev_zone.o 00:42:40.223 LINK hotplug 00:42:40.792 CXX test/cpp_headers/bit_array.o 00:42:41.744 CXX test/cpp_headers/bit_pool.o 00:42:42.721 CXX test/cpp_headers/blob.o 00:42:43.657 CXX test/cpp_headers/blob_bdev.o 00:42:44.593 CXX test/cpp_headers/blobfs.o 00:42:44.593 CC app/fio/nvme/fio_plugin.o 00:42:45.969 CXX test/cpp_headers/blobfs_bdev.o 00:42:46.905 CXX test/cpp_headers/conf.o 00:42:47.163 LINK spdk_nvme 00:42:47.422 CC examples/nvme/cmb_copy/cmb_copy.o 00:42:47.991 CXX test/cpp_headers/config.o 00:42:48.249 CXX test/cpp_headers/cpuset.o 00:42:48.506 LINK cmb_copy 00:42:49.443 CXX test/cpp_headers/crc16.o 00:42:49.443 CC test/env/mem_callbacks/mem_callbacks.o 00:42:50.380 CXX test/cpp_headers/crc32.o 00:42:50.380 LINK mem_callbacks 00:42:50.380 CC app/fio/bdev/fio_plugin.o 00:42:51.317 CXX test/cpp_headers/crc64.o 00:42:52.254 CXX test/cpp_headers/dif.o 00:42:52.513 LINK spdk_bdev 00:42:53.081 CXX test/cpp_headers/dma.o 00:42:54.459 CXX test/cpp_headers/endian.o 00:42:54.459 CC test/env/vtophys/vtophys.o 00:42:55.396 CC test/app/histogram_perf/histogram_perf.o 00:42:55.655 LINK vtophys 00:42:55.655 CXX test/cpp_headers/env.o 00:42:56.645 LINK histogram_perf 00:42:57.213 CXX test/cpp_headers/env_dpdk.o 00:42:58.589 CXX test/cpp_headers/event.o 00:42:59.524 CXX test/cpp_headers/fd.o 00:43:00.901 CXX test/cpp_headers/fd_group.o 00:43:01.839 CXX test/cpp_headers/file.o 00:43:02.776 CXX test/cpp_headers/ftl.o 00:43:04.154 CXX test/cpp_headers/gpt_spec.o 00:43:05.090 CXX test/cpp_headers/hexlify.o 00:43:05.656 CXX test/cpp_headers/histogram_data.o 00:43:06.223 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:43:06.481 CXX test/cpp_headers/idxd.o 00:43:06.739 LINK env_dpdk_post_init 00:43:07.306 CXX test/cpp_headers/idxd_spec.o 00:43:07.872 CXX test/cpp_headers/init.o 00:43:08.440 CXX test/cpp_headers/ioat.o 00:43:08.440 CC test/app/jsoncat/jsoncat.o 00:43:08.697 LINK jsoncat 00:43:08.697 CC test/event/event_perf/event_perf.o 00:43:08.954 CC test/app/stub/stub.o 00:43:08.954 LINK event_perf 00:43:08.954 CXX test/cpp_headers/ioat_spec.o 00:43:09.518 LINK stub 00:43:09.776 CXX test/cpp_headers/iscsi_spec.o 00:43:10.342 CXX test/cpp_headers/json.o 00:43:11.292 CXX test/cpp_headers/jsonrpc.o 00:43:12.245 CXX test/cpp_headers/keyring.o 00:43:12.503 CC test/env/memory/memory_ut.o 00:43:13.069 CXX test/cpp_headers/keyring_module.o 00:43:14.003 CXX test/cpp_headers/likely.o 00:43:15.377 CC examples/nvme/abort/abort.o 00:43:15.377 CXX test/cpp_headers/log.o 00:43:15.635 LINK memory_ut 00:43:16.568 CXX test/cpp_headers/lvol.o 00:43:17.133 LINK abort 00:43:17.699 CXX test/cpp_headers/memory.o 00:43:18.632 CXX test/cpp_headers/mmio.o 00:43:19.566 CXX test/cpp_headers/nbd.o 00:43:19.825 CXX test/cpp_headers/net.o 00:43:20.770 CXX test/cpp_headers/notify.o 00:43:21.725 CXX test/cpp_headers/nvme.o 00:43:21.983 CC examples/accel/perf/accel_perf.o 00:43:23.360 CXX test/cpp_headers/nvme_intel.o 00:43:23.619 CXX test/cpp_headers/nvme_ocssd.o 00:43:23.878 LINK accel_perf 00:43:24.815 CC test/event/reactor/reactor.o 00:43:24.815 CXX test/cpp_headers/nvme_ocssd_spec.o 
00:43:25.383 CC test/event/reactor_perf/reactor_perf.o 00:43:25.641 LINK reactor 00:43:25.901 CXX test/cpp_headers/nvme_spec.o 00:43:26.468 LINK reactor_perf 00:43:27.036 CXX test/cpp_headers/nvme_zns.o 00:43:28.414 CXX test/cpp_headers/nvmf.o 00:43:29.791 CXX test/cpp_headers/nvmf_cmd.o 00:43:31.165 CXX test/cpp_headers/nvmf_fc_spec.o 00:43:32.560 CC test/event/app_repeat/app_repeat.o 00:43:32.560 CXX test/cpp_headers/nvmf_spec.o 00:43:33.495 LINK app_repeat 00:43:34.061 CXX test/cpp_headers/nvmf_transport.o 00:43:35.963 CXX test/cpp_headers/opal.o 00:43:37.869 CC test/env/pci/pci_ut.o 00:43:37.869 CXX test/cpp_headers/opal_spec.o 00:43:39.248 LINK pci_ut 00:43:39.248 CXX test/cpp_headers/pci_ids.o 00:43:40.627 CXX test/cpp_headers/pipe.o 00:43:41.565 CXX test/cpp_headers/queue.o 00:43:41.824 CXX test/cpp_headers/reduce.o 00:43:43.203 CXX test/cpp_headers/rpc.o 00:43:43.203 CXX test/cpp_headers/scheduler.o 00:43:44.605 CXX test/cpp_headers/scsi.o 00:43:44.605 CC test/event/scheduler/scheduler.o 00:43:45.542 CXX test/cpp_headers/scsi_spec.o 00:43:45.542 LINK scheduler 00:43:45.542 CXX test/cpp_headers/sock.o 00:43:46.479 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:43:46.738 CXX test/cpp_headers/stdinc.o 00:43:46.738 CC test/nvme/aer/aer.o 00:43:47.306 CXX test/cpp_headers/string.o 00:43:47.565 LINK pmr_persistence 00:43:47.565 CC test/rpc_client/rpc_client_test.o 00:43:47.823 LINK aer 00:43:48.082 CXX test/cpp_headers/thread.o 00:43:48.341 LINK rpc_client_test 00:43:48.598 CXX test/cpp_headers/trace.o 00:43:49.163 CXX test/cpp_headers/trace_parser.o 00:43:49.422 CC examples/blob/hello_world/hello_blob.o 00:43:49.990 CXX test/cpp_headers/tree.o 00:43:49.990 CXX test/cpp_headers/ublk.o 00:43:50.249 LINK hello_blob 00:43:50.815 CXX test/cpp_headers/util.o 00:43:51.750 CXX test/cpp_headers/uuid.o 00:43:52.009 CC examples/bdev/hello_world/hello_bdev.o 00:43:52.943 CXX test/cpp_headers/version.o 00:43:52.943 CXX test/cpp_headers/vfio_user_pci.o 00:43:53.202 LINK hello_bdev 00:43:54.137 CXX test/cpp_headers/vfio_user_spec.o 00:43:55.095 CXX test/cpp_headers/vhost.o 00:43:56.469 CXX test/cpp_headers/vmd.o 00:43:57.403 CXX test/cpp_headers/xor.o 00:43:58.339 CXX test/cpp_headers/zipf.o 00:44:00.242 CC examples/bdev/bdevperf/bdevperf.o 00:44:01.618 CC test/accel/dif/dif.o 00:44:02.997 LINK bdevperf 00:44:03.563 LINK dif 00:44:06.093 CC test/blobfs/mkfs/mkfs.o 00:44:07.030 LINK mkfs 00:44:08.933 CC examples/blob/cli/blobcli.o 00:44:11.464 LINK blobcli 00:44:13.998 CC test/nvme/reset/reset.o 00:44:14.936 CC test/nvme/sgl/sgl.o 00:44:14.936 LINK reset 00:44:15.873 LINK sgl 00:44:54.592 CC test/lvol/esnap/esnap.o 00:44:59.859 CC test/nvme/e2edp/nvme_dp.o 00:45:00.797 CC test/nvme/overhead/overhead.o 00:45:00.797 LINK nvme_dp 00:45:03.329 LINK overhead 00:45:21.413 LINK esnap 00:45:47.983 CC test/nvme/err_injection/err_injection.o 00:45:47.983 LINK err_injection 00:45:47.983 CC test/nvme/startup/startup.o 00:45:47.983 LINK startup 00:46:06.071 CC test/nvme/reserve/reserve.o 00:46:06.071 CC test/nvme/simple_copy/simple_copy.o 00:46:06.071 LINK simple_copy 00:46:06.071 LINK reserve 00:46:10.262 CC test/nvme/connect_stress/connect_stress.o 00:46:11.199 LINK connect_stress 00:46:16.473 CC test/nvme/boot_partition/boot_partition.o 00:46:17.409 LINK boot_partition 00:46:22.678 CC test/nvme/compliance/nvme_compliance.o 00:46:22.678 CC test/nvme/fused_ordering/fused_ordering.o 00:46:24.056 LINK fused_ordering 00:46:24.056 LINK nvme_compliance 00:46:27.346 CC test/nvme/doorbell_aers/doorbell_aers.o 
00:46:27.346 CC test/nvme/fdp/fdp.o 00:46:28.282 LINK doorbell_aers 00:46:28.849 LINK fdp 00:46:29.787 CC test/nvme/cuse/cuse.o 00:46:30.382 CC examples/nvmf/nvmf/nvmf.o 00:46:31.757 LINK nvmf 00:46:34.290 CC test/bdev/bdevio/bdevio.o 00:46:35.667 LINK cuse 00:46:36.235 LINK bdevio 00:47:22.927 15:41:11 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:47:22.927 make[1]: Nothing to be done for 'clean'. 00:47:22.927 15:41:18 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:47:22.927 15:41:18 -- common/autotest_common.sh@728 -- $ xtrace_disable 00:47:22.927 15:41:18 -- common/autotest_common.sh@10 -- $ set +x 00:47:22.928 15:41:18 -- spdk/autopackage.sh@48 -- $ timing_finish 00:47:22.928 15:41:18 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:47:22.928 15:41:18 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:47:22.928 15:41:18 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:47:22.928 15:41:18 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:47:22.928 15:41:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:47:22.928 15:41:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:47:22.928 15:41:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:22.928 15:41:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:47:22.928 15:41:18 -- pm/common@44 -- $ pid=134899 00:47:22.928 15:41:18 -- pm/common@50 -- $ kill -TERM 134899 00:47:22.928 15:41:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:22.928 15:41:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:47:22.928 15:41:18 -- pm/common@44 -- $ pid=134901 00:47:22.928 15:41:18 -- pm/common@50 -- $ kill -TERM 134901 00:47:22.928 + [[ -n 2549 ]] 00:47:22.928 + sudo kill 2549 00:47:23.939 [Pipeline] } 00:47:23.958 [Pipeline] // timeout 00:47:23.964 [Pipeline] } 00:47:23.982 [Pipeline] // stage 00:47:23.987 [Pipeline] } 00:47:24.004 [Pipeline] // catchError 00:47:24.013 [Pipeline] stage 00:47:24.015 [Pipeline] { (Stop VM) 00:47:24.030 [Pipeline] sh 00:47:24.312 + vagrant halt 00:47:27.599 ==> default: Halting domain... 00:47:37.582 [Pipeline] sh 00:47:37.864 + vagrant destroy -f 00:47:41.152 ==> default: Removing domain... 00:47:41.734 [Pipeline] sh 00:47:42.017 + mv output /var/jenkins/workspace/ubuntu24-vg-autotest/output 00:47:42.026 [Pipeline] } 00:47:42.044 [Pipeline] // stage 00:47:42.049 [Pipeline] } 00:47:42.066 [Pipeline] // dir 00:47:42.073 [Pipeline] } 00:47:42.089 [Pipeline] // wrap 00:47:42.095 [Pipeline] } 00:47:42.111 [Pipeline] // catchError 00:47:42.120 [Pipeline] stage 00:47:42.123 [Pipeline] { (Epilogue) 00:47:42.138 [Pipeline] sh 00:47:42.422 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:48:04.383 [Pipeline] catchError 00:48:04.385 [Pipeline] { 00:48:04.399 [Pipeline] sh 00:48:04.681 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:48:04.939 Artifacts sizes are good 00:48:04.947 [Pipeline] } 00:48:04.964 [Pipeline] // catchError 00:48:04.977 [Pipeline] archiveArtifacts 00:48:04.984 Archiving artifacts 00:48:05.363 [Pipeline] cleanWs 00:48:05.375 [WS-CLEANUP] Deleting project workspace... 00:48:05.375 [WS-CLEANUP] Deferred wipeout is used... 
00:48:05.381 [WS-CLEANUP] done 00:48:05.382 [Pipeline] } 00:48:05.396 [Pipeline] // stage 00:48:05.401 [Pipeline] } 00:48:05.415 [Pipeline] // node 00:48:05.420 [Pipeline] End of Pipeline 00:48:05.462 Finished: SUCCESS